From checker at panix.com Fri Jul 1 01:32:18 2005 From: checker at panix.com (Premise Checker) Date: Thu, 30 Jun 2005 21:32:18 -0400 (EDT) Subject: [Paleopsych] CHE: Exploring the Good That Comes From Shame Message-ID: Exploring the Good That Comes From Shame The Chronicle of Higher Education, 5.7.1 http://chronicle.com/weekly/v51/i43/43a01102.htm By PETER MONAGHAN Elspeth Probyn, professor of gender and cultural studies, University of Sydney, Australia The cultural emphasis on self-esteem and pride veils the benefits of shame, Ms. Probyn argues in Blush: Faces of Shame (University of Minnesota Press). Shame, a universal feeling, alerts us to examine what we are and would like to be, she says. When there is "public silence around shame, it doesn't get discussed, it just gets more deeply embedded." Q. Why the title "Blush"? A. All emotions are embodied, but shame feels the most embodied. Of course blushes show more clearly on freckled Celts like myself, but we all feel that heat on our face. There are some deep similarities that humans have, and we have perhaps overly fixated on the differences during the last 20 years. Q. What good comes from shame? A. A kind of painful good. It would be silly to say that shame doesn't hurt and isn't sometimes very painful, but it does make you think about what you hold dear, whether that be at an individual or collective level, or as a nation. It is one of the emotions that most clearly throw into relief the values we have. Q. Is shame, which may prompt self-improvement, more trustworthy than pride, which demands but does not always deserve respect? A. Yes. Part of my interest in shame came from thinking about the limits of pride, especially when it's used in queer pride, or fat pride, or whatever. There is a real limit to those politics. Shame could be used to highlight what we ought to be proud of but haven't quite achieved. Q. If shame is worthwhile, how about shaming? A. Shaming is very limited in its value. It requires that someone stand on high and point the finger. Some strands of feminism have used shaming, but it's the experiencing of shame rather than the wielding of shame that can be good. Q. How can shame inform ethics and politics? A. Well, if, for instance, after Abu Ghraib, we'd all just paused to say, "Oh, my God." But you have to have a political culture built up that's capable of those moments of pausing and reflecting. The most courageous governments would be capable of that -- the ones that are most deeply rooted in a democratic sense. Q. How do people who have been instilled with a harmful sense of shame early in life discern negative shame from positive, elucidating shame? A. That might be helped by distinguishing clearly between shame and guilt. In the internal, intrapsychic, intrasubjective sense, guilt can become just lodged there, whereas shame is more mercurial and doesn't seem to lodge anywhere. It returns and forces us to think again about our actions. From checker at panix.com Fri Jul 1 01:32:31 2005 From: checker at panix.com (Premise Checker) Date: Thu, 30 Jun 2005 21:32:31 -0400 (EDT) Subject: [Paleopsych] NS: Genes blamed for fickle female orgasm Message-ID: Genes blamed for fickle female orgasm http://www.newscientist.com/article.ns?id=mg18625033.900&print=true * 11 June 2005 * Rowan Hooper IS THIS the ultimate excuse for poor performance in bed? "Sorry, darling," the man says, just before falling asleep. "It's your genes." 
According to a study published this week, up to 45 per cent of the differences between women in their ability to reach orgasm can be explained by their genes. Despite decades of surveys and conjecture about the role of culture, upbringing and biology in female sexual function, from Freud in 1905 to the Hite report in 1976, this is the first study of the role of a woman's genes. Its findings suggest there is an underlying biological basis to a woman's ability to achieve orgasm. Whether that basis is anatomical, physiological or psychological remains uncertain, says Tim Spector of the twin research unit at St Thomas' Hospital in London, who carried out the study. "But it is saying that it is not purely cultural, or due to peer pressure, or to differences in upbringing or religion," he says. "There are wide differences between women and a lot of these differences are due to genes." Spector's team asked more than 6000 female twins to fill out a confidential questionnaire about how often they achieved orgasm during intercourse and masturbation. They received 4037 complete replies, which included answers from 683 pairs of non-identical twins and 714 pairs of identical twins. The women's ages ranged from 19 to 83, and about 3 per cent were lesbian or bisexual. Only 14 per cent of the women reported always experiencing orgasm during intercourse. Another 32 per cent of the women reported that they were unable to achieve orgasm more than a quarter of the time, while 16 per cent never achieved it at all. Comparing the results from identical and non-identical twins suggests that 34 per cent of this variation in ability to orgasm during intercourse is genetic. The idea behind twin studies is that pairs of twins grow up in similar environments. So if identical twins are more similar in some way than non-identical twins, then that similarity must be down to their identical genes rather than the environment. Unsurprisingly, more women were able to achieve orgasm through masturbation, with 34 per cent saying they could always do so. However, the figure for those who could never achieve it was only slightly lower, at 14 per cent. The analysis suggests that 45 per cent of this variation is genetic (Biology Letters, DOI: 10.1098/rsbl.2005.0308). Spector says he was surprised by the similarity in the numbers of women unable to experience orgasm either through intercourse or masturbation. "With masturbation there are fewer external factors - i.e. men," he says. "So the higher heritability value for masturbation gives us a clearer picture of what's going on." The discovery of a genetic basis for the ability of women to orgasm raises questions about its evolution. One theory is that it is a tool for mate selection, the idea being that males best able to bring females to orgasm are also the best males to help raise children. Another is that the female orgasm produces movements that increase sperm uptake, and therefore fertility. But studies of other primates suggest otherwise. Female stump-tailed macaques have orgasms too - but mainly during female-female mountings, which hardly supports the fertility or mate-selection idea. Bonobos engage in highly promiscuous sex and mutual masturbation, complete with orgasms, a practice that is thought to promote group cohesion. This supports yet another theory: that orgasm is important in bonding. But even if orgasm does play this role, it cannot be crucial in humans. 
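[A note on where figures like "34 per cent" and "45 per cent" come from: a classical twin study compares how strongly identical twins resemble each other with how strongly non-identical twins do, and roughly doubles the gap (Falconer's approximation). The paper itself fits a formal variance-components model; the Python sketch below is only the back-of-envelope version, and the correlation values in it are illustrative, not Spector's data.]

    # Falconer's approximation for a classical twin design:
    #   heritability        ~ 2 * (r_identical - r_fraternal)
    #   shared environment  ~ 2 * r_fraternal - r_identical
    #   unique environment  ~ 1 - r_identical
    def falconer(r_mz, r_dz):
        h2 = 2 * (r_mz - r_dz)   # genetic share of the variation
        c2 = 2 * r_dz - r_mz     # environment shared by both twins
        e2 = 1 - r_mz            # everything unique to one twin
        return h2, c2, e2

    # Illustrative correlations, chosen only so the answer lands near the
    # article's 34 per cent figure; they are not the study's numbers.
    h2, c2, e2 = falconer(r_mz=0.40, r_dz=0.23)
    print(round(h2, 2), round(c2, 2), round(e2, 2))   # -> 0.34 0.06 0.6

[The whole signal is the gap between the two correlations: if identical and non-identical twins resembled each other equally, the estimated heritability would be zero.]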
The finding that many women cannot achieve orgasm because they do not have the genes for it shows that the ability to orgasm is not a trait for which there has been strong evolutionary selection, says Elisabeth Lloyd of Indiana University in Bloomington, author of The Case of the Female Orgasm. This supports her theory that as far as orgasms are concerned, women have been riding on the genetic coat-tails of male evolution, and that the female orgasm is merely an accidental echo of the male one, the equivalent of male nipples. Lloyd says the findings also challenge the notion that the failure to achieve orgasm represents "female sexual dysfunction", an idea popular with companies keen to sell remedies for this so-called disorder. "What definition of 'normal' could possibly justify labelling a third of women as 'abnormal'?" she asks. Even if struggling to achieve orgasm is nothing unusual, Spector says it might be possible to find ways to make it easier. Though hundreds of genes could be involved, "that doesn't mean we couldn't find the genes and pathways, if this was taken more seriously as a problem", he says. From checker at panix.com Fri Jul 1 01:32:38 2005 From: checker at panix.com (Premise Checker) Date: Thu, 30 Jun 2005 21:32:38 -0400 (EDT) Subject: [Paleopsych] NS: The Case of the Female Orgasm: Bias in the science of evolution by Elisabeth A Lloyd Message-ID: The Case of the Female Orgasm: Bias in the science of evolution by Elisabeth A Lloyd http://www.newscientist.com/article.ns?id=mg18624991.800&print=true * 14 May 2005 * Gail Vines SEXUAL climax for the male is, evolutionarily speaking, rather dull. Its raison d'être seems crystal clear. Orgasm fosters men's reproductive success, because it is linked to the ejaculation of sperm. Only devotees of tantric yoga, apparently, can achieve orgasm without ejaculation. But women too are able to experience orgasm. Sexologists have documented the "clonic contraction of pelvic and abdominal muscles initiated by a spinal reflex", and, in Elisabeth Lloyd's favourite definition, the "combination of waves of a very pleasurable sensation and mounting of tensions, culminating in a fantastic sensation and release of tension". What has puzzled generations of thinkers, however, is why women, as well as men, should have evolved the capacity for such sexual pleasure. Over the past century, scores of biologists have sought an answer in natural selection. In The Case of the Female Orgasm, Lloyd, who is a professor of biology at Indiana University, has totted up 21 alternative explanations. All these "adaptationist" theories share one thing: the belief that women have evolved the capacity for orgasm because it fosters their reproductive success. In one of the most popular accounts, female orgasm is the cement in pair bonds: mutual pleasure fosters happy monogamous couples who share childcare ever after. The latest idea, "sperm competition", is more hydraulic in tone. It argues that orgasmic contractions of the uterus are designed to suck up sperm from the vagina, fostering the reproductive success of the male who gives pleasure to his partner. There is one big problem with all these ideas: no study has ever established a reliable link between a woman's orgasmic capabilities and her fertility or fecundity. And that, says Lloyd, should immediately set warning lights flashing. And there's another problem: the glib assumption that female orgasm is designed to happen during heterosexual intercourse.
In fact, the data shows that many women struggle to climax during conventional penetrative sex, and usually do so only with direct clitoral stimulation. Yet during masturbation both women and men can achieve orgasm in about four minutes. The conclusion, Lloyd argues, must surely be that the female orgasm has no biological function. Rather, it's on a par with the male nipple - an accident of shared developmental pathways in the early embryo. Because women need nipples to suckle their babies, men end up with rudimentary versions too. They may not give milk, but like the female's they have erotic sensibilities. As for genitalia, because men need ejaculatory penises, women end up with clitorises capable of similar sexual pleasures. Lloyd reckons that biases in evolutionary thinking have blinkered generations of mostly male biologists. It is time to give up the adaptationist's fallacy and face facts. The late Stephen Jay Gould, who encouraged Lloyd's long-standing investigation, must be cheering from above. From checker at panix.com Fri Jul 1 01:33:08 2005 From: checker at panix.com (Premise Checker) Date: Thu, 30 Jun 2005 21:33:08 -0400 (EDT) Subject: [Paleopsych] Cordis: Technology could grow beyond human control, warns Millennium report Message-ID: Technology could grow beyond human control, warns Millennium report http://dbs.cordis.lu/cgi-bin/srchidadb?CALLER=NHP_EN_NEWS&ACTION=D&SESSION=&RCN=EN_RCN_ID:24053 [Only the 2004 report, which costs $50, is on the site.] [Date: 2005-06-28] Many people still do not appreciate how fast science and technology (S&T) will change over the next 25 years, and given this rapid development along several different fronts, the possibility of technology growing beyond human control must now be taken seriously, according to a new report. The State of the Future 2005 report is produced by the United Nations University's Millennium Project - a global think tank of foresight experts, academics and policy makers. It analyses current global trends and examines in detail some of the current and future challenges facing the world. Setting the scene, the report states: 'Future synergies among nanotechnology, biotechnology, information technology and cognitive science can dramatically improve the human condition by increasing the availability of food, energy and water and by connecting people and information anywhere. The effect will be to increase collective intelligence and create value and efficiency while lowering costs.' However, it warns that 'a previous and troubling finding from the Millennium Project still remains unsolved: although it is increasingly clear that humanity has the resources to address its global challenges, unfortunately it is not increasingly clear how much wisdom, goodwill and intelligence will be focussed on these challenges.' The report argues that because the factors that caused the acceleration of S&T are themselves accelerating, the rate of change in the past 25 years will appear slow compared to the rate of change in the next 25 years. 'To help the world cope with the acceleration of change, it may be necessary to create an international S&T organisation to arrange the world's science and technology knowledge as well as forecasts of potential consequences in a better Internet-human interface,' it argues. 
Taking one particular example - that of nanotechnology - the report predicts that this field will deliver extraordinary benefits for humanity, but warns that little is currently known about the environmental and health risks of nanomaterials. Since the military is currently a major player in the development of nanotechnology, the report proposes military research to help understand and manage these risks. The most important questions to pursue, according to the report, are: how are nanoparticles absorbed into the body through the skin, lungs, eyes, ears and alimentary canal? Once in the body, can nanoparticles evade natural defences of humans and other animals? What are the potential exposure routes of nanomaterials - both airborne and waterborne? How biodegradable are nanotube-based structures? The authors suggest that a classification system will be needed to provide a framework within which to make research judgements and keep track of the knowledge regarding potential nanotech pollution. 'Toxicologists and pharmaceutical scientists will have to be brought together to investigate nanoparticles' ability to evade cell defences to target disease,' they add. Returning to the wider challenges facing humanity, the report notes that national decision makers are rarely trained in the theory and practice of decision making, and argues that advanced decision support software could help. 'Formalized ethics and decision training for decision makers could result in a significant improvement in the quality of global decisions,' it concludes. For further information, please consult the following web address: http://www.acunu.org/millennium/ Category: Publication Data Source Provider: American Council for the United Nations University Document Reference: Based on the State of the Future 2005 report Subject Index : Scientific Research; Social Aspects; Forecasting; Materials Technology From checker at panix.com Fri Jul 1 01:33:15 2005 From: checker at panix.com (Premise Checker) Date: Thu, 30 Jun 2005 21:33:15 -0400 (EDT) Subject: [Paleopsych] NS: Interview: The Koran to quantum physics Message-ID: Interview: The Koran to quantum physics http://www.newscientist.com/article.ns?id=mg18625051.600&print=true [I don't think Mr. Mencken wrote enough about Islam to get into its fit with science. He was certainly opposed to all attempts to fit Christianity with science.] * 25 June 2005 * Michael Bond Iran is changing. A society once closed to the outside world has acquired a hunger for knowledge and a thirst for cutting-edge ideas. The number of publications by Iranian scientists in international journals has quadrupled over the past decade. Young people in particular want more Kuhn and less Khomeini. And they voted overwhelmingly against hardline candidates in last week's elections. But what about the clerics who have led Iran since 1979? How comfortable are they with modern science and technology? Do they oppose it? Can they learn to live with it? Do they believe it should be "Islamicised"? Western ways of thinking and doing have long held a fascination for Iran's religious leaders, from before the Islamic revolution of 1979 that deposed the Shah. When the Shah banned Ayatollah Khomeini's speeches, for example, his supporters distributed them on audio cassettes in the hundreds of thousands. Similarly, desktop publishing was eagerly adopted to produce glossy magazines extolling the virtues of post-revolution Iran. 
Unlike most other Muslim countries, Iran has several institutions dedicated to enabling clerics to test their knowledge and ideas against those developed in modern universities. Mofid University is the best known. In just 10 years, it has developed a reputation in the fields of philosophy and human rights, and organises exchanges with universities abroad, including the US. Michael Bond travelled to Qom - Iran's spiritual hub, birthplace of the Islamic revolution and home of Mofid University - to ask Masoud Adib, Mofid's head of philosophy, about Islam and the challenge of science. Is there such a thing as Islamic science? We cannot really talk of Islamic science. We can talk of Islamic philosophy, political science, sociology, and maybe Islamic psychology, but not Islamic physics or chemistry. Sciences like physics and chemistry are neutral. However, in science it is important to distinguish between discovery and judgement - between collecting data or experimentation, and evaluating and judging what has been collected. When researchers evaluate data, they all use the same methodology, whether they are Muslim or Christian, religious or secular. But when a researcher is collecting data or conducting experiments, things like religion, culture or even the attitude of the researcher make a difference. There might be differences between the way a Muslim collects data and someone else, just as there are differences in the way women and men collect data, or people from different cultures. But this does not mean the science produced is Islamic science. How does an Islamic approach to experimenting and data collection differ from other approaches? In an Islamic culture, the reason a person seeks knowledge is to know God, to seek a better understanding of God. That is the motivation. Someone from another culture or religion may do it for another reason: to seek particular technologies, for example, or just to know reality. People who do science for different reasons will probably look at different areas, or approach a problem from different sides. What would a Muslim scientist seek? In Islam, science or knowledge should not be sought solely for the sake of curiosity. Research should always be targeted. In a world where there is a lot of disease and many complex problems such as poverty, famine, drought and lack of education, scientists should not be allowed to just go after scientific curiosity for its own sake. It is the duty of scientists to try to solve these problems. So a scientist should not spend all his time in a laboratory working for himself, satisfying his own curiosity. He needs to always consider whether what he is doing is in line with what God wants. If people are guided so much by religion, how can they do good, objective science? Religion offers a framework for life. It helps you from the moment you get up in the morning until you go to sleep at night. You have to live within it. But that doesn't mean that in every moment of the day you have to take your instructions from religion. Rather, it means that you have to live for religious targets, and that the values from your religion should govern everything you do. In this there is no difference between Islam and other religions. Within that religious framework, you have to learn how to secularise. Day-to-day life has to be based on secular knowledge: for instance, how to eat your breakfast or work in an office. So you can have knowledge of a secular science within the framework of a religious life. How does that work in practice? 
Where do you draw the line between the two? People tend to make two mistakes. One is to try to derive the details of life from religion - for example, looking to religion for the answers to why everything happens, the answers to all the practical things in life. This will not help us run a society. The other mistake is to loosen the religious framework so much that you think you can derive the ultimate aim of life from the empirical. One of the major reasons a lot of Muslims do not do well in science is that they make the first mistake. A lot of modern societies run into difficulties and cannot adapt to problems in life because of the second mistake. How can Iran modernise and develop in science and technology without sacrificing the values and traditions of Islam? Iran has already modernised in some areas. One of the problems of this modernising is that it is not a result of blossoming from the inside; it has come from the outside. Over the past 150 years, a gap has opened up in Iranian society, with one group going for modernity and another for tradition. The revolution in Iran had roots in both modernity and tradition, and I believe the gap between the two has been gradually closing. However, if modernity is not based on a nation's culture it can do serious damage. This is what happened in the west. This does not mean we have to escape from modernity. Rather we have to try to minimise the damage that arises from the clash with tradition. This means that in our individual lives, and as a society, we have to keep our eyes on religious targets while at the same time making best use of modern knowledge. How would that work in science? One of the duties of a scientist from any culture is to progress in science and knowledge while preserving his moral values, not as a religious person but as a human being. You have to produce knowledge, but you may have to restrict yourself from certain areas. It is delicate - you lose if you hold back from research, you lose if you ignore your moral values. Are there any modern technologies that you think are a particular threat to Islam? On one level, technology is simply a tool and people have to learn how to use it. Of course it depends how you use it. The important thing is that people use technology in the best way for a country and minimise the damage. But there is another deeper level at which to look at technology. New technologies are deeply tied up with spirituality and morality, for they influence how we behave. For example, whether I choose to go to work on horseback or in a car will affect how I behave during the journey and the effect I have on others. Whenever a new technology arises, such as the internet, it is essential to have a dialogue about how it is going to affect us. We need time to contemplate such changes in life. From checker at panix.com Fri Jul 1 01:33:38 2005 From: checker at panix.com (Premise Checker) Date: Thu, 30 Jun 2005 21:33:38 -0400 (EDT) Subject: [Paleopsych] NS: Did humans evolve in fits and starts? Message-ID: Did humans evolve in fits and starts? http://www.newscientist.com/article.ns?id=dn7539&print=true * 17:30 17 June 2005 * Gaia Vince Humans may have evolved during a few rapid bursts of genetic change, according to a new study of the human genome, which challenges the popular theory that evolution is a gradual process. Researchers studying human chromosome 2 have discovered that the bulk of its DNA changes occurred in a relatively short period of time and, since then, only minor alterations have occurred. 
This backs a theory called punctuated equilibrium which suggests that evolution actually occurred as a series of jumps with long static periods between them. Evolutionary stages are marked by changes to the DNA sequences on chromosomes. One of the ways in which chromosomes are altered is through the duplications of sections of the chromosomes. These DNA fragments may be duplicated and inserted back into the chromosome, resulting in two copies of the section. Evan Eichler, associate professor of genomic sciences at the University of Washington in Seattle, US, and colleagues looked at duplicated DNA sequences on a specific section of chromosome 2, to compare them with ape genomes and Old World monkey genomes. They expected to find that duplications had occurred gradually over the last few million years. Instead, they found that the big duplications had occurred in a short period of time, relatively speaking, after which only smaller rearrangements occurred. Eichler found the bulk of the duplications were present in the genomes of humans, chimpanzees, gorillas and orang-utans, but were absent in Old World monkeys - such as baboons and macaques. Narrow window An analysis of the degree of chromosomal decay for this section showed that the major duplications occurred in the narrow window of evolutionary time between 20 million and 10 million years ago, after human ancestors had split from Old World monkeys, but before the divergence of humans and great apes. It is unclear why [these duplication] events occurred so frequently during this period of human and great ape evolutionary history. It is also unclear as to why they suddenly cease, at least in this region of chromosome 2, Eichler says. Other regions may show different temporal biases. The important implication here is that episodic bursts of activity challenge the concept of gradual clock-like changes during the course of genome evolution, he says. Since duplications are important in the birth of new genes and large-scale chromosomal rearrangements, it may follow that these processes may have gone through similar episodes of activity followed by quiescence. Growing evidence Laurence Hurst, professor of evolutionary biology at the University of Bath in the UK, says the study was very interesting, although he would like to see this punctuated evolution demonstrated for other chromosomes, to be more confident that this is a general pattern. There is growing evidence that evolutionary processes may occur in bursts. We now know, for example, that 50 million years ago there was a burst of activity that resulted in lots of new genes being produced, he told New Scientist. It is unknown what effect the sudden duplication activity may have had on chromosome 2. Eichler theorises that it may have resulted in genes for increased brain size or pathogen evasion. If specific regions of chromosomes can have very punctuated events, it means our models based on gradual evolution are probably wrong, he says. The group will continue looking at the chromosome duplications to try and correlate them with changes in gene function or expression. 
Journal reference: Genome Research (vol 15, p 914) Related Articles Hominid inbreeding left humans vulnerable to disease http://www.newscientist.com/article.ns?id=dn6920 25 January 2005 Jumping genes help choreograph development http://www.newscientist.com/article.ns?id=mg18424702.700 23 October 2004 Wonderful spam http://www.newscientist.com/article.ns?id=mg18224496.100 29 May 2004 Weblinks * [18]Eichler Laboratory, University of Washington, Seattle, US * [19]http://eichlerlab.gs.washington.edu/ * [20]Laurence Hurst, Bath University, UK * [21]http://www.bath.ac.uk/bio-sci/hurst.htm * [22]Genome Research * [23]http://www.genome.org/ References 18. http://eichlerlab.gs.washington.edu/ 19. http://eichlerlab.gs.washington.edu/ 20. http://www.bath.ac.uk/bio-sci/hurst.htm 21. http://www.bath.ac.uk/bio-sci/hurst.htm 22. http://www.genome.org/ 23. http://www.genome.org/ E-mail me if you have problems getting the referenced articles. From checker at panix.com Fri Jul 1 01:45:53 2005 From: checker at panix.com (Premise Checker) Date: Thu, 30 Jun 2005 21:45:53 -0400 (EDT) Subject: [Paleopsych] WSJ: Imprinted Genes Offer: Key to Some Diseases-- And to Possible Cures Message-ID: Imprinted Genes Offer: Key to Some Diseases-- And to Possible Cures http://online.wsj.com/article/0,,SB111956911033268215,00.html By SHARON BEGLEY June 24, 2005; Page B1 According to the old joke, the homely but brilliant male scientist married the gorgeous but dim model figuring their children would have her looks and his brains. He was crushed when they had her brains and his looks. The scientist was clearly not among those studying a booming new area of genetics. If he had been, he would have known that whether a child's traits are shaped by mom's genes or dad's genes isn't a simple matter of recessiveness or dominance, let alone of pure luck, as the textbook wisdom says. Instead, some genes come with molecular tags saying (in biochemical-ese), "I come from mom; ignore me," or "You got me from dad; pretend I'm not here." Such genes are called imprinted. Unlike recessive or dominant genes (such as for black or blond hair), which are composed of different molecules, these genes are identical except for the silencer tag sitting atop them. The result is that if the active gene is defective, there is no working backup; a healthy but silenced gene from the other parent can't step into the breach. In the joke, mom's beauty genes and dad's brainy genes were silenced, leaving mom's dimwitted genes and dad's homely ones to call the shots. No one has reliably identified genes for beauty or for brains, let alone figured out whether mom's or dad's count (or whether this explains male-pattern baldness). But real imprinted genes are hitting the big time. Imprinting may be one reason people seem to inherit conditions such as autism, diabetes, Alzheimer's disease, male sexual orientation, obesity and schizophrenia from only one side of the family. At least one biotechnology company is planning to scan the entire human genome for imprinted genes (detectable with a biochemical test), hoping to use the data to diagnose incipient cancers. Almost all imprinting happens automatically, long before birth, but in some cases it can result from outside interference. Toxic chemicals, for instance, may eliminate the silencer tag, causing potentially harmful effects that can be transmitted to future generations. (Two points to readers who say, "Lamarck lives!") The number of human genes where the parent-of-origin matters keeps rising. 
According to a new computer algorithm, about 600 mouse genes are likely to be imprinted, scientists at Duke University report in Genome Research. If that 2.5% rate holds for humans -- and virtually every mouse gene has a human counterpart -- then we have hundreds of imprinted genes, too. Among the genes where the parent of origin matters are three on chromosome 10. Only the copies from mom, studies suggest, are turned on. One, expressed in the brain, is linked to late-onset Alzheimer's disease. Another is linked to male sexual orientation, and a third to obesity. With dad's contribution silenced, if there is anything unusual in the copy from mom, that will determine the child's trait. "For Alzheimer's, if the mutation is in dad's gene you'll never see an effect, but if it's in mom's you're at risk for the disease," says Duke's Randy Jirtle. A gene on chromosome 9, linked to autism, seems to count only if it came from dad. One on chromosome 2 and one on 22 are associated with schizophrenia; only the copies from dad count. Having a family tree mostly free of these diseases is therefore no assurance of good health. If the disease runs on dad's side, his gene may be defective, and that is the one that matters. As they discover more imprinted genes, scientists are seeing that the silencing tag can be knocked off, with dire consequences. An animal study published this month suggests how. When fetal rats were exposed to two toxic chemicals -- a fungicide called vinclozolin commonly used in vineyards and a pesticide called methoxychlor -- they grew up to have slower- and fewer-than-normal sperm, Michael Skinner of Washington State University and colleagues report in the journal Science. The abnormalities were inherited by the rats' sons, grandsons and great-grandsons. "That environmental toxins can induce a transgenerational genetic change is a phenomenon we never knew existed," Prof. Skinner says. How does it occur? Probably not through harmful mutations, which become rarer with each generation. But imprinting changes, of which Prof. Skinner's group has detected 50 and counting, persist through the generations. The ink is barely dry on the human genome project, but already researchers are onto the "second genetic code," or the pattern of silencers on our DNA. Using a technology called MethylScope ("methyl" is the DNA silencer), "we will map this second genetic code to see which genes are imprinted and identify any differences between normal and cancerous cells," says Nathan Lakey, chief executive of Orion Genomics, a closely held biotechnology concern. Those differences may become the foundation for molecular diagnostic tests within three years, perhaps starting with colon cancer. Normally, the copy of a gene called IGF2 that you get from dad is active, the copy from mom silenced. In 10% of us, though, mom's copy has thrown off the silencer, leading to a greater risk of colorectal cancer. Detecting that unsilencing could provide an early warning of the disease. * You can e-mail me at sciencejournal at wsj.com2. 
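[The parent-of-origin logic above is essentially a lookup rule: only the unsilenced copy is ever read, so a defect matters only if it sits on the active copy. A toy Python sketch of that rule - the gene-disease pairings in the article are real, but the snippet below is a hypothetical illustration, not a clinical model.]

    # Toy model of genomic imprinting: for an imprinted gene only one
    # parent's copy is expressed; the other is silenced regardless of
    # whether it is healthy.
    def active_copy(maternal, paternal, silenced_parent=None):
        if silenced_parent == "mother":
            return paternal
        if silenced_parent == "father":
            return maternal
        # Non-imprinted gene: one working copy from either parent suffices.
        return "working" if "working" in (maternal, paternal) else "defective"

    # Hypothetical maternally expressed gene (father's copy silenced):
    print(active_copy(maternal="defective", paternal="working",
                      silenced_parent="father"))   # -> defective

[The healthy paternal copy cannot step in, which is why risk for such a gene tracks the mother's side of the family even when the father's side is clear, and vice versa for paternally expressed genes.]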
From checker at panix.com Fri Jul 1 17:37:25 2005 From: checker at panix.com (Premise Checker) Date: Fri, 1 Jul 2005 13:37:25 -0400 (EDT) Subject: [Paleopsych] Newswise: Study Shows How Sleep Improves Memory Message-ID: Study Shows How Sleep Improves Memory http://www.newswise.com/p/articles/view/512800/ Source: [1]Beth Israel Deaconess Medical Center Released: Tue 28-Jun-2005, 11:05 ET Newswise -- A good night's sleep triggers changes in the brain that help to improve memory, according to a new study led by researchers at Beth Israel Deaconess Medical Center (BIDMC). These findings, reported in the June 30, 2005, issue of the journal Neuroscience and currently published on-line, might help to explain why children - infants, in particular - require much more sleep than adults, and also suggest a role for sleep in the rehabilitation of stroke patients and other individuals who have suffered brain injuries. "Our previous studies demonstrated that a period of sleep could help people improve their performance of `memory tasks,' such as playing piano scales," explains the study's lead author Matthew Walker, PhD, director of BIDMC's Sleep and Neuroimaging Laboratory. "But we didn't know exactly how or why this was happening. "In this new research, by using functional magnetic resonance imaging (fMRI), we can actually see which parts of the brain are active and which are inactive while subjects are being tested, enabling us to better understand the role of sleep to memory and learning." New memories are formed within the brain when a person engages with information to be learned (for example, memorizing a list of words or mastering a piano concerto). However, these memories are initially quite vulnerable; in order to "stick" they must be solidified and improved. This process of "memory consolidation" occurs when connections between brain cells as well as between different brain regions are strengthened, and for many years was believed to develop merely as a passage of time. More recently, however, it has been demonstrated that time spent asleep also plays a key role in preserving memory. In this new study, twelve healthy, college-aged individuals were taught a sequence of skilled finger movements, similar to playing a piano scale. After a 12-hour period of either wake or sleep, respectively, the subjects were tested on their ability to recall these finger movements while an MRI measured the activity of their brain. According to Walker, who is also an Assistant Professor of Psychiatry at Harvard Medical School, the MRI results showed that while some areas of the brain were distinctly more active after a period of sleep, other areas were noticeably less active. But together, the changes brought about by sleep resulted in improvements in the subjects' motor skill performance. "The cerebellum, which functions as one of the brain's motor centers controlling speed and accuracy, was clearly more active when the subjects had had a night of sleep," he explains. At the same time, the MRIs showed reduced activity in the brain's limbic system, the region that controls emotions, such as stress and anxiety. "The MRI scans are showing us that brain regions shift dramatically during sleep," says Walker. "When you're asleep, it seems as though you are shifting memory to more efficient storage regions within the brain. Consequently, when you awaken, memory tasks can be performed both more quickly and accurately and with less stress and anxiety."
The end result is that procedural skills - for example, learning to talk, to coordinate limbs, musicianship, sports, even using and interpreting sensory and perceptual information from the surrounding world -- become more automatic and require the use of fewer conscious brain regions to be accomplished. This new research may explain why children and teenagers need more sleep than adults and, in particular, why infants sleep almost round the clock. "Sleep appears to play a key role in human development," says Walker. "At 12 months of age, infants are in an almost constant state of motor skill learning, coordinating their limbs and digits in a variety of routines. They have an immense amount of new material to consolidate and, consequently, this intensive period of learning may demand a great deal of sleep." The new findings may also prove to be important to patients who have suffered brain injuries, for example, stroke patients, who have to re-learn language, limb control, etc. "Perhaps sleep will prove to be another critical factor in a stroke patient's rehabilitation," he notes, adding that in the future he and his colleagues plan to examine sleep disorders and memory disorders to determine if there is a reciprocal relationship between the two. "If you look at modern society, there has in recent years been a considerable erosion of sleep time," says Walker. Describing this trend as "sleep bulimia" he explains that busy individuals often shortchange their sleep during the week - purging, if you will - only to try to catch up by "binging" on sleep on the weekends. "This is especially troubling considering it is happening not just among adults, but also among teenagers and children," he adds. "Our research is demonstrating that sleep is critical for improving and consolidating procedural skills and that you can't short-change your brain of sleep and still learn effectively." Study co-authors include BIDMC researchers Gottfried Schlaug, MD, PhD, Robert Stickgold, PhD, David Alsop, PhD and Nadine Gaab, PhD. This study was supported by grants from the National Institutes of Health and the Dana Foundation. Beth Israel Deaconess Medical Center is a patient care, teaching and research affiliate of Harvard Medical School, and ranks third in National Institutes of Health funding among independent hospitals nationwide. BIDMC is clinically affiliated with the Joslin Diabetes Center and is a research partner of Dana-Farber/Harvard Cancer Center. BIDMC is the official hospital of the Boston Red Sox. For more information, visit [2]http://www.bidmc.harvard.edu. From checker at panix.com Fri Jul 1 17:37:37 2005 From: checker at panix.com (Premise Checker) Date: Fri, 1 Jul 2005 13:37:37 -0400 (EDT) Subject: [Paleopsych] The Scientist: The Uncertain Future for Central Dogma Message-ID: The Uncertain Future for Central Dogma http://www.the-scientist.com/2005/6/20/20/1/printerfriendly Volume 19 | [2]Issue 12 | Page 20 | Jun. 20, 2005 Uncertainty serves as a bridge from determinism and reductionism to a new picture of biology By [4]Arnold F. Goodman, [5]Cláudia M. Bellato and [6]Lily Khidr Kenneth Eward/BioGrafx/Photo Researchers Inc. Nearly two decades ago, Paul H. Silverman testified before Congress to advocate the Human Genome Project. He later became frustrated when the exceptions to genetic determinism, discovered by this project and other investigations, were not sufficiently incorporated in current research and education.
In "Rethinking Genetic Determinism,"^[7]1 Silverman questioned one of the pillars of molecular genetics and documented the need for determinism's expansion into a far more valid and reliable representation of reality. He would receive correspondence from all over the world that reinforced this vision. Silverman firmly believed that we needed a wider-angled model, with a new framework and terminology, to display what we know and to guide future discovery. He also viewed this model as being a catalyst for exploring uncertainty, the vast universe of chance differences on a cellular and molecular level that can considerably influence organismal variability. Uncertainty not only undermines molecular genetics' primary pillars of determinism and reductionism, but also provides a bridge to future research. PILLARS CHALLENGED [0890-3670-050620-20-1-2.jpg] Arnold Goodman (left) is an associate director of the Center for Statistical Consulting at the University of California, Irvine. Cl?udia Bellato (center) is an independent researcher at CENA, University of S?o Paulo, Brazil. Lily Khidr (right) is a PhD candidate at UC-Irvine. They dedicate this article to the memory of Paul Silverman and thank Nancy, his wife, for her assistance. Various commentaries detail deviation from determinism within the cellular cycle. Here we use the term cellular cycle not in the traditional sense, but rather to describe the cyclical program that starts with gene regulation through transcription, translation, post-processing and back into regulation. Richard Strohman at UC-Berkeley describes the program in terms of a complex regulatory paradigm, which he calls "dynamic epigenetics." The program is dynamic because regulation occurs over time, and epigenetic because it is above genetics in level of organization.^[8]2 "We thought the program was in the genes, and then in the proteins encoded by genes," he wrote, but we need to know the rules governing protein networks in a cell, as well as the individual proteins themselves. John S. Mattick at the University of Queensland focuses upon the hidden genetic program of complex organisms.^[9]3 "RNAs and proteins may communicate regulatory information in parallel," he writes. This would resemble the advanced information systems for network control in our brains and in computers. Indeed, recent demonstrations suggest that RNA might serve as a genetic backup copy superseding Mendelian inheritance.^[10]4 Gil Ast of Tel Aviv University writes: "Alternative splicing enables a minimal number of genes to produce and maintain highly complex organisms by orchestrating when, where, and what types of proteins they manufacture."^[11]5 About 5% of alternatively spliced human exons contain retrotransposon Alu sequences. These elements represent an engine for generating alternative splicing. Thus we see a genetic control system regulated by protein products, RNAs, and interventions from DNA itself. Yet throughout, the consideration of genetic uncertainty as a bridge to cellular behavior is conspicuously absent. Genetic reductionism, the other pillar of molecular genetics, has many challengers. Among them is Stephen S. Rothman at UC-Berkeley, who described the limits of reductionism in great detail within his comprehensive and well-constructed book.^[12]6 A more recent publication by Marc H.V. Van Regenmortel at France's National Center for Scientific Research updated this assessment by discussing not only the deficiencies of reductionism, but also current ways of overcoming them. 
"Biological systems are extremely complex and have emergent properties that cannot be explained, or even predicted, by studying their individual parts."^[13]7 NEW CELL MODEL Molecular genetics appears to be at a crossroads, since neither determinism nor reductionism is capable of accurately representing cellular behavior. In order to transition from a passive awareness of this dilemma to its active resolution, we must move from simply loosening the constraints of determinism and reductionism toward a more mature and representative combination of determinism, reductionism, and uncertainty. To facilitate this expansion, we propose a model for the cellular cycle. Although only a framework, it provides a vehicle for broader and deeper appreciation of the cell. The figure on page 25 provides a novel structure for understanding current knowledge of the cycle's biological stages, as well as a guide for acquiring new knowledge that may include genetic uncertainty. Organismal Regulation: The organism specifies its cellular needs (bottom red) for the cell to act upon. It converts the comparison of proteins with organismal needs into metabolic agents. The organism then defines its cellular needs (top red). It employs metabolic effects to alter the extra-cellular matrix and signal other needs. Cellular Regulation: Within the bounds of a cell's membrane, cellular needs transmission (top blue) directs the cell in various ways, including proliferation, differentiation, and programmed cell death. It uses such factors as receptors and enzymes to yield molecular messengers. In the cell's nucleus, chromatin remodeling (bottom blue) then rearranges DNA accessibility by uncoiling supercoiled DNA and introducing transcription factors. Transcription: Transcription (left green) DNA serves as the template for RNAs, both regulatory sequences and pre-messenger RNAs. It transcribes polymerases and binding partners into heterogeneous nuclear RNAs. Pre-messenger RNAs then undergo highly regulated splicing and processing (right green). They turn pre-messenger RNAs into mature messenger RNAs. Translation: Within the cytoplasm, messenger RNAs and ribosomes translate 2D-unfolded proteins (left magenta). Secondary structuring and thermodynamic energy (right magenta) then enable physical formations that complete the process with folded proteins and oligonucleotides. Postprocessing: Again within the cytoplasm, tertiary structuring and modification (top aqua) use assemblers, modifiers and protein subunits to supply regulated proteins. Then feedback regulation (bottom aqua) produces heritable gene expression from small RNAs, proteins and DNA. The proteins and gene expression, rather than being an endpoint, now begin the whole process over again by signaling other cells, altering and maintaining the genome, and editing RNA transcripts. CELL-BEHAVIOR BRIDGE [14][0890-3670-050620-20-1-3.gif] Model for the Cellular Cycle Helen M. Blau was a keynote speaker at the recent UC-Irvine stem-cell symposium in memory of Paul Silverman and Christopher Reeve.^[17]8 She observed: "Where we look and how we look determine what we see." Although only a brief prescription, we now propose an approach to the exploration for uncertainty that involves both where we look and how we look. We examine those cellular-cycle outputs having a relatively high likelihood of diversity and its frequent companion, uncertainty. 
As an example of exploring for uncertainty in a cellular cycle, consider the following example: Suppose an organismal regulatory program for cellular differentiation might alter the signaling milieu in the extracellular matrix. The signal is internalized by a cell, which might, in turn, alter transcription, produce mature messenger RNAs, produce the 3D-folded proteins, and feed back to alter gene expression for all daughter cells. Now suppose the ECM signaling milieu is altered with a probability p1; the signal is internalized by a cell with a probability p2; transcription will change with a probability p3; mature mRNAs are produced with a probability p4, producing the 3D-folded protein with a probability p5 and altering heritable gene expression with a probability p6. The probabilities p2, p3, p4, p5, and p6 are all conditional on results from the step preceding them, so that the resulting probability of altered heritable gene expression is the product of all of them. Although this probability may be small, is it not preferable to know its form and to later estimate it, than to simply ignore its existence? When we consider all possible stage alterations, the diversity of outputs and complexity of our probability calculations will increase. If we also consider all possible interactions, the diversity of outputs and complexity of probability calculations will increase quite substantially. The implications reach far beyond the regulation of a single cell or organism. Sean B. Carroll of the University of Wisconsin, Madison, summarizes evolutionary developmental biology,^[18]9 invoking Jacques Monod's landmark Chance and Necessity, and the Democritus quote upon which it is based: "Everything existing in the universe is the fruit of chance and necessity." Why wouldn't chance also be included in our observations of biology at the molecular level? We've proposed a brief overview of the "what" and "how" for constructing an uncertainty bridge from genetic determinism and reductionism to actual cellular behavior. We hope and believe it meets the spirit of Paul Silverman's prescient vision, as well as his final wishes. References 1. PH Silverman "Rethinking genetic determinism," The Scientist 18(10): 32-3. [[19]Full Text] May 24, 2004 2. R Strohman "A new paradigm for life: beyond genetic determinism," California Monthly 2001, 111: 4-27. 3. JS Mattick "The hidden genetic program of complex organisms," Sci Am 2004, 291: 60-7. [[20]PubMed Abstract] 4. SJ Lolle et al, "Genome-wide non-mendelian inheritance of extra-genomic information in Arabidopsis," Nature 434: 505-9. [[21]Publisher Full Text] March 24, 2005 5. G Ast "The alternative genome," Sci Am 2005, 292: 58-65. 6. SS Rothman Lessons from the Living Cell: The Limits of Reductionism New York: McGraw-Hill 2001. 7. MHV Van Regenmortel "Reductionism and complexity in molecular biology," EMBO Reports 2004, 5: 1016-20. [[22]PubMed Abstract][[23]Publisher Full Text] 8. HM Blau "Stem-cell scenarios: adult bone-marrow to brain and brawn," Developing Stem-Cell Therapies: A Symposium in Memory of Paul H. Silverman and Christopher Reeve University of California, Irvine October 20, 2004. 9. SB Carroll Endless Forms Most Beautiful New York: W.H. Norton 2005. References 4. mailto:agoodman at uci.edu 5. mailto:bellato at cena.usp.br 6. mailto:lkhidr at uci.edu 14. http://www.the-scientist.com/content/figures/0890-3670-050620-20-1-l3.jpg 15. http://www.the-scientist.com/content/figures/0890-3670-050620-20-1-l3.jpg 16. 
http://www.the-scientist.com/content/figures/0890-3670-050620-20-1-l3.jpg 19. http://www.the-scientist.com/2004/05/24/32/1 20. http://www.biomedcentral.com/pubmed/15487671 21. http://dx.doi.org/10.1038/nature03380 22. http://www.biomedcentral.com/pubmed/15520799 23. http://www.ncbi.nlm.nih.gov/entrez/eutils/elink.fcgi?dbfrom=pubmed&cmd=prlinks&retmode=ref&id=15520799 From checker at panix.com Fri Jul 1 17:37:44 2005 From: checker at panix.com (Premise Checker) Date: Fri, 1 Jul 2005 13:37:44 -0400 (EDT) Subject: [Paleopsych] Atlantic: Benjamin M. Friedman: Meltdown: A Case Study Message-ID: Benjamin M. Friedman: Meltdown: A Case Study The Atlantic, 5.7-8 [First, the summary from CHE: http://chronicle.com/prm/daily/2005/06/2005062401j.htm Friday, June 24, 2005 A glance at the July/August issue of The Atlantic Monthly: How hard times put democratic values at risk America's democratic values could be at risk if it experiences an extended economic downturn, writes Benjamin M. Friedman, an economics professor at Harvard University. History shows that intolerance and repression often accompany economic decline, he writes. While the "most familiar example is the rise of Nazism in Germany, following that country's economic chaos in the 1920s" and the worldwide Great Depression of the 1930s, there are many instances in American history in which "declining incomes over an extended period have undermined the nation's tolerance and threatened citizens' freedoms," he notes. Take the Populist Era of the 1880s and 1890s, for instance. As the economy faltered and wages fell, racism and anti-Semitism spread, and the government passed laws to keep out immigrants and to segregate blacks from whites, Mr. Friedman writes. In the 1920s, when "slow growth together with widening inequality halted improvements in living standards for many Americans," the "upshot was the revival of the Ku Klux Klan, the tightest and most discriminatory immigration restrictions in the nation's history, and the elimination of both federal and state laws designed to protect women and children," he writes. Economic prosperity "is in many ways the wellspring from which democracy and civil society flow," Mr. Friedman argues. "We should be fully cognizant," he concludes, "of the risks to our values and liberties if that nourishing source runs dry." The article, "Meltdown: A Case Study," is drawn from Mr. Friedman's forthcoming book, The Moral Consequences of Economic Growth, to be published by Knopf in October. The article is available online at [54]http://www.theatlantic.com/doc/200507/friedman --Gabriela Montell ----------------- Benjamin M. Friedman: Meltdown: A Case Study The Atlantic, 5.7-8 What America a century ago can teach us about the moral consequences of economic decline Would it really be so bad if living standards in the United States stagnated - or even declined somewhat - for a decade or two? It might well be worse than most people imagine. History suggests that the quality of our democracy - more fundamentally, the moral character of American society - would be at risk if we experienced a many-year downturn. As the distinguished economic historian Alexander Gerschenkron once observed, even a country with a long democratic history can become a "democracy without democrats." Merely being rich is no bar to a society's retreat into rigidity and intolerance once enough of its citizens sense that they are no longer getting ahead.
American history includes several episodes in which stagnating or declining incomes over an extended period have undermined the nation's tolerance and threatened citizens' freedoms. One that is especially vivid, and that touched many aspects of American life that remain contentious today, occurred during the Populist era, toward the end of the nineteenth century - roughly from 1880 through the middle of the 1890s. For a decade and a half after the Civil War, economic growth was largely exuberant, society optimistic, and social progress undeniable. But all that changed over the next fifteen years, beginning with a faltering economy. From 1880 to 1890 Americans' real per capita income grew on average by just 0.4 percent a year (versus almost four percent in the 1870s). Then, after a few strong years at the start of the 1890s, the economy collapsed altogether. A severe banking panic set off a steep downturn, widely known at the time as the Great Depression. By the end of 1893 more than 500 banks and 15,000 other businesses, including several major railroads, were bankrupt. Prices, especially farm prices, had been falling even when the economy was growing strongly. Now the declines became ruinous. Wheat dropped from an average price of $1.12 a bushel in the early 1870s to fifty cents or less in the mid-1890s, and corn went from forty-eight cents a bushel to twenty-one. By the early 1890s farmers in some western states were burning their nearly worthless corn for fuel. By 1895 per capita income had fallen below the level it had reached fifteen years earlier. Popular discontent followed economic distress. In 1892 labor action against the Carnegie Steel plant in Homestead, Pennsylvania, sparked an armed battle between striking workers and company-hired Pinkerton forces, leaving sixteen dead and more than 150 wounded. Two years later a strike against the Pullman Sleeping Car Company led President Grover Cleveland to call in the Army to protect the railroads. At the same time, hundreds of unemployed men, led by Ohio businessman Jacob Coxey (the group was known as "Coxey's Army"), marched on Washington to demand federal assistance. Altogether, during the course of 1894 seventeen such "industrial armies" marched on the capital. But economic concerns did not manifest themselves only, or even primarily, in labor marches and job riots; they soured many aspects of American society. As wages fell and unemployment rose, fearful citizens sought to close the country to newcomers - particularly from areas other than northwestern Europe. The new Statue of Liberty (completed in 1886) may have proclaimed America's welcome to the world's "huddled masses" and "wretched refuse," but such popular magazines of the day as Harper's and The Atlantic Monthly were full of ethnic jokes and slurs. Beginning in the 1880s hard times catalyzed a movement to tighten immigration standards. In 1882, after riots protesting the use of Chinese labor for railroad construction, Congress barred Chinese immigrants entirely. All other immigrants were subject to a head tax. Some states adopted legislation prohibiting certain noncitizens from acquiring land. Race relations also deteriorated. In a spectacularly unfortunate coincidence that would affect American history for decades, this period of economic stagnation - the worst up to that time - set in just as Reconstruction ended and the federal government finally withdrew its troops from the defeated southern states.
No one will ever know whether the country's race relations, both in the South and elsewhere, would have taken a different course had America enjoyed robust economic growth during this period. In the event, the result was segregation by race in practically every aspect of daily life, together with appalling racial violence. One reason for believing that economic frustrations contributed to the sad history that followed is that although the former Confederate states regained full political independence with the end of Reconstruction, in 1877, most of them did not begin to adopt what in time became pervasive "Jim Crow" laws until the 1890s. By the end of that decade most southern states had made it illegal for blacks to ride with whites in railroad cars, and some had also segregated city streetcars and railroad-station waiting rooms. The devices used to deny most black citizens their voting rights -- property and literacy requirements, poll taxes, and white-only primaries -- were likewise adopted mostly in the 1890s or after.

But the legal changes enacted during this period barely capture the racist and anti-immigrant (and anti-Catholic, anti-Semitic, anti-ethnic) sentiment of the time. The 1880s saw a rise in vigilante violence in rural areas -- not only lynchings in the former Confederacy but also beatings, murders, and arson by such groups as the Bald Knobbers, in the Ozarks, and the White Caps, in Kentucky and elsewhere. Such colorful populist figures as "Pitchfork" Ben Tillman, who served as governor of South Carolina from 1890 to 1894 and then as a U.S. senator, and Tom Watson, a widely read newspaperman who ran for vice president on the Populist ticket in 1896, were outspoken white supremacists. Tillman publicly defended lynching, called for the repeal of the Fifteenth Amendment (which had given the vote to blacks), and advocated the use of force to disenfranchise blacks in the meantime. Watson's speeches and editorials were regularly devoted to sensational attacks on blacks, Catholics, Jews, and foreigners. The American Protective Association, an anti-Catholic organization founded in Iowa in 1887, spread rapidly once the 1893 depression began, and claimed to have 2.5 million members nationwide by the mid-1890s. Anti-Semitic propaganda was so common among Populists by 1896 that William Jennings Bryan felt obliged to disavow it during his campaign for the presidency.

Steps that would have made America more democratic were not without advocates during this period. Many Populists favored such measures as direct primaries and the popular election of U.S. senators. Some also favored women's suffrage. Bryan was a tireless advocate for all these causes. Yet none of them advanced in the face of prolonged economic stagnation. Meanwhile, the Supreme Court only made matters worse. In two key decisions it effectively gutted the Civil Rights Act passed in 1875 (when economic growth was strong), declaring private racial segregation and then segregation legislated by the states to be constitutionally protected.

Throughout the Populist era America's media, politics, and legislation all lent support to cultural exclusion, societal rigidity, and efforts to turn back the clock. These ultimately proved futile, but for a while they poisoned both politics and society. Openness toward the future, faith in a better society for all, and support for the rights of minorities were simply not the order of the day. Economic weakness does not always produce social regress, of course: history is not so deterministic.
The depression of the 1930s led, for the most part, to a reaffirmation of America's openness and generosity. But that was atypical; the Populist era was more the norm. When slow growth together with widening inequality halted improvements in living standards for many Americans in the 1920s, the upshot was the revival of the Ku Klux Klan (not just in the South -- at the Klan's peak perhaps one in ten white Protestant U.S. men was a member), the tightest and most discriminatory immigration restrictions in the nation's history, and the elimination of both federal and state laws designed to protect women and children. Similar economic conditions in the 1970s and 1980s provided the backdrop for another round of anti-immigrant agitation, the rise of the right-wing militia movement, and incidents of politically motivated domestic terrorism.

Not just in America but in the other Western democracies, too, history is replete with instances in which a turn away from openness and tolerance, often accompanied by a weakening of democratic institutions, has followed economic stagnation. The most familiar example is the rise of Nazism in Germany, following that country's economic chaos in the 1920s and then the onset of worldwide depression in the early 1930s. But in Britain such nasty episodes as the repression of the suffragette movement under Asquith, the breaking of Lloyd George's promises to the returning World War I veterans, and the bloody Fascist riots in London's East End all occurred under severe economic distress. So did the ascension of the extremist Boulangist movement in late-nineteenth-century France, and the Action Française movement after World War I.

Conversely, in both America and Europe fairness and tolerance have increased, and democratic institutions have strengthened, mostly when the average citizen's standard of living has been rising. The reason is not hard to understand. When their living standards are rising, people do not view themselves, their fellow citizens, and their society as a whole the way they do when those standards are stagnant or falling. They are more trusting, more inclusive, and more open to change when they view their future prospects and their children's with confidence rather than anxiety or fear. Economic growth is not merely the enabler of higher consumption; it is in many ways the wellspring from which democracy and civil society flow. We should be fully cognizant of the risks to our values and liberties if that nourishing source runs dry.

This article is drawn from his forthcoming book, The Moral Consequences of Economic Growth, to be published by Knopf in October.

~~~~~~~~
By Benjamin M. Friedman, Professor of economics at Harvard

From checker at panix.com Fri Jul 1 17:37:56 2005
From: checker at panix.com (Premise Checker)
Date: Fri, 1 Jul 2005 13:37:56 -0400 (EDT)
Subject: [Paleopsych] Stephen Kershnar: Giving Capitalists Their Due
Message-ID:

Stephen Kershnar: Giving Capitalists Their Due
Economics and Philosophy (2005), 21:65-87
Cambridge University Press
SUNY-Fredonia

Abstract

In general, capitalists deserve profits and losses for their contribution to the general welfare. Market imperfections and the range of permissible prices (at least within the boundaries of exploitation) prevent the alignment from being a direct one, but the connection generally holds.
In the context of the market, this thesis preserves the central place of moral responsibility in moral desert. It also satisfies the fittingness and proportionality conditions of moral desert and provides a backward-looking and pre-institutional ground of it. In addition, the focus on contribution unifies several different types of act-based desert, specifically deserved profits and losses, deserved punishment, and deserved wages. Hence, to the extent that desert-satisfaction is relevant in the selection of an economic system, this result strengthens the case for capitalism.

1. INTRODUCTION

Recent discussion has focused on the notion that desert-satisfaction is a factor that determines the value of states of affairs.1 If this is correct, then an assessment of the value of capitalism will depend at least in part on the degree to which the free market satisfies desert. In this paper, I will argue for two theses. First, allowing capitalists to keep their profits and losses will in general satisfy desert to a greater degree than will other arrangements (e.g., socialism). Second, the first thesis is the result of the capitalist's contribution to the welfare of others. These theses do not by themselves justify the market unless one is a consequentialist (or a non-consequentialist who emphasizes desert), since the goodness of the free market will not directly support its being permissible.

Before beginning, it is worth setting out a few notions related to capitalists' profits. The market is a system whereby production and distribution are the result of voluntary exchange or gift by individual persons or groups. A non-market system is one in which production and distribution do not occur via the market. Socialism is a system whereby the government determines production and distribution and is the primary example of a non-market system. A capitalist is one who has two functions. First, he organizes production. That is, he chooses what, when, and how much to produce. Second, he owns the business's assets. The two combine to explain the central function of a capitalist: control over production. For example, even if the capitalist hires persons to organize production, the capitalist will delegate and approve (at some level) the actions of the organizing employee. In a market, capitalists receive a return, which is the amount of income minus costs. The costs depend on the use of productive resources such as natural resources, labor, and capital. The return is either profit or loss depending on whether the difference is positive or negative. The profits and losses can be either individual or collective depending on the way in which persons coordinate their capitalist activities.

A person's well-being is the extent to which her life goes well. I shall assume but not defend the view that how well a person's life goes is a function of two factors.2 The first factor is the degree to which she experiences pleasure or desire-fulfillment. The focus on desire-fulfillment rests on the notion that various desired experiences do not have any phenomenal feature in common, e.g., appreciating a philosophical argument and eating cupcakes do not have any shared experiential features. The second factor is the degree to which she has access to various objective-list elements. Objective-list elements are things that make a person's life go better independent of whether they bring her more pleasure or result in greater desire-satisfaction. This includes things such as knowledge, agency, meaningful relationships, and virtue.
My argument has three parts. In the first part, I set out an account of desert and of the desert-based criteria by which to judge capitalist profits. In the second part, I argue that capitalist profits are in general deserved. The argument begins by examining the case in the context of a perfect market and then considers the claim that market imperfections make the connection between profit and desert less direct but still reliable. In the third part, I respond to objections.

2. THE NATURE OF DESERT

Desert is a type of value. I shall assume that it has the following structure: A person, S, deserves benefit or cost X if and only if (a) other things being equal, it is intrinsically better that S receive X than that he not receive it, (b) S does action or has character A, and (c) (a) in virtue of (b). There might also be a causal condition: (d) (b) causes S to receive X. An area that illustrates this last condition is poetic justice. One can imagine a case where a person commits a battery and gets an appropriate punishment for battery but for one that he did not commit. In this case (a) through (c) are satisfied but it is not clear that the criminal gets the punishment that he deserves.

The notion that desert concerns the good rather than the right has several appealing features. First, it intuitively seems that the issue of what persons deserve is distinct from the issue of who has a duty to give it to them. This is particularly attractive where desert rests on such things as virtue or interactions with persons who are now deceased. Second, viewing desert-satisfaction as a determinant of the good allows us to capture our intuitions about the relative value of certain states of affairs. In particular, it allows us to say that other things being equal, the world is a better place when the virtuous get pleasure and the vicious pain than vice versa. Again, these intuitions are appealing regardless of whether we view someone as having a duty to redirect pleasure to the virtuous. Third, viewing desert in terms of the good allows us to distinguish desert from rights, where the latter are relations in which interpersonal duties occupy a central role.

Other conditions on desert are worthy of note. First, the act or character in question is something for which the agent is morally responsible. This is not obvious because we sometimes say that a person deserves compensation for an injury, where she is not responsible for receiving that injury. We also say things such as "all children deserve an education," where the children didn't do anything that warrants an education.3 If this broader sense of desert is correct, then the desert theorist will have a hard time explaining what distinguishes desert from other intrinsic-value judgments. In any case, I shall confine desert to intrinsic-value judgments in which the agent is morally responsible for the ground of value. If one thinks that this is unduly narrow, then she should substitute "responsibility-type desert" where I have written "desert." Second, there must be an alignment between the object of desert, X, and the ground, A. In particular, if the ground, A, is morally permissible or morally good (depending on whether it is an act or character trait), then the object of desert is good for the person. The converse also holds. This feature relates institutions to desert.
An institution is a rule-governed social activity.4 The goodness of an action (and hence the relevant property of it) might causally come about because of the role of institutions but it does not make essential reference to the institution. The bad- or wrong-making feature (e.g., malicious motive or duty infringement) might come about because of the person's intentional action in the context of various social conventions, but the violation of the social convention by itself doesn't explain why the act itself is wrongful. Since the relevant properties of the ground and object of desert are not institutional, desert is pre-institutional (i.e., conceptually, but not causally, independent of institutions). Moral desert is also not a forward-looking justification. That is, a person's desert rests on some past or present act, rather than on the consequences of her receiving the object. This in part explains two fundamental features of desert. First, the object of desert must fit the ground of it. That is, the object must be the right sort of thing. For example, a person who wins a track meet because he is the best runner does not on that basis deserve high grades in his academic courses. Second, the object of desert must be proportionate to the ground.5 For example, a person who does a kind act for a neighbor does not deserve an eternity in heaven for it. These features are inferred from our intuitions. It is also worth noting that, on some accounts, moral desert can be incorporated into forward-looking theories of the right. Desert is backward-looking because it makes essential reference to the past or present. However, on these accounts, desert-grounded duties might be forward-looking. This is because desert can be incorporated into the consequentialist duty to bring about the best state of affairs in the future since desert in part determines the value of a state of affairs. Thus, on these accounts, while desert is a backward-looking determinant of the good, it is compatible with forward-looking theories of the right. It might be objected that a theory of the right is backward-looking if it requires persons to bring about outcomes because they stand in a particular relation to the past. This objection raises difficult issues as to whether the bearer of value in such a theory is merely a future state of affairs, a relation between the past and future states of affairs, or both. If this objection is sound, then desert can still be incorporated into a maximization ethic even if it rules out forward-looking theories of the right. If these assumptions about moral desert are correct, then it has the following properties. It must be pre-institutional and not forward-looking. It must have the right structure and also satisfy both fit and proportionality requirements. I shall argue that capitalist profit satisfies these conditions. 3. CAPITALISTS DESERVE THEIR PROFITS AND COSTS 3.1 Thesis In this section I shall argue that in general capitalists deserve profits and losses and that this desert is grounded in their contribution to the general welfare. Here general welfare is a function (e.g., total or average) of individuals' welfare. The desert-satisfaction generally tracks profits and losses rather than directly tracking them for two reasons. First, profit and loss will not strictly track contribution to the general welfare. This is in part because of market imperfections, e.g., imperfect information and transaction costs, that lessen the degree to which resources satisfy demand. 
This is also in part because what satisfies demand will not necessarily enhance the general welfare -- either because the demander's practical reasoning (e.g., about her or others' interests) is defective or because the content and strength of persons' desires do not track the outcome of this practical reasoning.6 The latter misalignment creates room for akrasia.7 Second, in some isolated transactions there is a range of profits due to the arbitrary point by which persons divide the benefits of a transaction. In this case, each individual's profit need not track the contribution to overall welfare even if the aggregate profit does so.

3.2 Assumptions

Before proceeding, it is worth laying out my undefended assumptions about the market. First, I assume that the market is just (i.e., it respects persons' moral rights). This might rest on claims about the value of non-coercive interference, efficiency, or both. This assumption is a weak one, for it need not deny that a welfare-state supplement to the market is just or permissible. The latter might also be just, for example, if persons voluntarily consent to it by choosing to live within a particular country.

Second, I assume that as a general matter current private property rights are morally permissible. This assumption rests on the value of autonomy, which is relevant since autonomy requires the use of objects.8 It also involves some mechanism by which claims of the distant past are cut off.9 This assumption is relevant because on some accounts, the distribution of private property rights might affect the distribution of wealth but not overall productive decisions. This might come about since the more efficient producer will in theory use a particular resource, but a wealth transfer might still be necessary where the more efficient producer is not the party with the legal right to the resource.10

One objection is that the first two assumptions are unnecessary. The concern here is that the thesis of the paper is that capitalism is just because it tracks desert. But if this is the case, then there is no need to assume that the market is just because it respects rights. The objector might assert that the second assumption is similarly superfluous. He might say that if I am defending capitalism, then I am defending an account of private property rights. This objection misconstrues my argument, which is for the value of capitalism, not its justness. It is not obvious that desert is a feature of justice, especially if the former is an element of the good and the latter an element of the right. This is not to say that the justice of the background institution of property and the value of capitalism are completely independent of one another. The justice of capitalist property rights is conceptually prior to the value of capitalism since the distribution of moral rights affects what individuals deserve. To see this priority, consider a scenario where two persons transfer the control of cars from lone individuals who are not using them to poor families that desperately need them. The first person gets the car by repossessing it when the owner stops making payments (and the sale contract allowed for repossession); the second steals it. Here, the former act grounds positive desert; the latter likely does not. In this case, the distribution of moral rights in part determines the value of the acts by affecting what people deserve.
Third, I assume that price is, at least within the boundaries of exploitation, a moral-free zone.11 That is, I assume that there is no morally privileged way to distribute a transactional surplus. This assumption allows us to avoid concerns about whether a capitalist receives a disproportionate share of the transactional benefits.12 Such an account rests in part on the notion of a reservation value. This is a value at which a party's gain from the transaction is equal to her best alternative to a negotiated agreement. The reservation value is then used to set the reservation price. This is the least favorable price at which she is willing to enter into the agreement (the minimum a seller will accept, the maximum a buyer will pay). The space between the parties' reservation prices constitutes the bargaining range. Any outcome within the bargaining range involves a transactional surplus that is the difference between the buyer's and seller's reservation prices. This surplus is the gain by the two parties from the contract. A reservation price may be judged as the party's actual reservation price (determined by a counterfactual about the least favorable price that a party would accept) or a moral reservation price (the least favorable price that a party ought to accept). The latter may take into account the subjective preferences of the relevant party but it is not fully subjective since the party may be mistaken about what is in her interest. The claim of exploitation in a mutually beneficial transaction involves the stronger party using the vulnerability of the weaker party to take a disproportionate share of the transactional surplus relative to each party's moral reservation price. If we set the level of vulnerability as involving a significant threat to a person's or their loved one's overall well-being, then exploitative transactions under current conditions will likely be rare.

Fourth, I will assume that, in general, economic value tracks persons' well-being, i.e., the amount of money spent for a particular good generally tracks the contribution of that good to a person's well-being. This relies on the notion that persons have features such as transitive ordered preferences and the ability to select appropriate means to their ends and that such features explain their spending preferences. This generalization is defeated to the extent that aggregate demand does not track well-being due to market imperfections, or if consumers fail to properly reason about what promotes well-being, or if there is a widespread misalignment between the conclusion of such reasoning and persons' desires. With these assumptions in mind, I set out the notion that in the actual market profits track contribution.
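To make the reservation-price, bargaining-range, and transactional-surplus notions in the third assumption concrete, here is a minimal numeric sketch. The figures and the helper names (transactional_surplus, surplus_split) are hypothetical illustrations, not values or terminology from the paper.

# Minimal sketch of the bargaining-range apparatus from the third assumption.
# All numbers and names are hypothetical illustrations.

def transactional_surplus(buyer_reservation: float, seller_reservation: float) -> float:
    """Total gain available from trade: the buyer's maximum acceptable price minus
    the seller's minimum acceptable price. A non-positive result means there is no
    bargaining range and no mutually beneficial agreement."""
    return buyer_reservation - seller_reservation

def surplus_split(price: float, buyer_reservation: float, seller_reservation: float) -> tuple[float, float]:
    """How an agreed price inside the bargaining range divides the surplus."""
    buyer_gain = buyer_reservation - price    # what the buyer keeps relative to her reservation price
    seller_gain = price - seller_reservation  # what the seller gains relative to his reservation price
    return buyer_gain, seller_gain

# Hypothetical transaction: the seller will accept no less than 60, the buyer will pay no more than 100.
seller_res, buyer_res = 60.0, 100.0
print(transactional_surplus(buyer_res, seller_res))  # 40.0 -- the bargaining range is [60, 100]

# Any price in that range divides the 40 units of surplus; e.g., a price of 75:
print(surplus_split(75.0, buyer_res, seller_res))    # (25.0, 15.0)

On the moral-free-zone assumption, any division of the 40 units of surplus within the range is permissible; the exploitation worry arises only when a stronger party uses the other's vulnerability to capture a disproportionate share relative to the parties' moral reservation prices.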
3.3 Profits in the market track contribution

The ideal market is one that is characterized by absence of externalities, zero transaction and information costs, full information, perfect competition, and rational individuals.13 These characteristics lead to the maximally efficient use of resources. Under such a system, there will be no entrepreneurial profits since the costless shifting of resources eliminates any inefficient malallocation. The capitalist will still make a profit on his capital that is equal to its marginal value. When we move to the actual market, these characteristics no longer hold. For example, there are real and in fact quite weighty transaction costs. Nevertheless, the market will over time move successively closer to the ideal market since it will adopt increasingly lowered costs of production and distribution as resources are shifted to the most efficient producers and distributors. The actual market will be more efficient than a non-market since it allows for greater specialization in the use of information and better harnesses self-interested motivation.

In the actual market, the capitalist receives profits both from a return on capital and from his entrepreneurial activity. Entrepreneurial activity results from the reorganization of production. This second type of profit results because costs decrease but income stays largely fixed. The increased profit is thus the result of earlier malallocation of the factors of production. This second type of profit will decrease over time as competitors adopt the innovations in production. On this account, the increased profit will in general track the contribution to the general welfare since the size of the additional profit is proportional to the gains in well-being that result from the correction of malallocated resources.

The idea behind the claim that increased profit will in general track the contribution to general welfare rests on the following general assumptions. First, entrepreneurial profits result from innovations that decrease production and distribution costs. Second, decreased production and distribution costs increase aggregate preference satisfaction, since buyers can now buy the same or better goods for less. Third, increased aggregate preference satisfaction tracks increased welfare, since persons prefer things that increase their welfare. The value of the correction of malallocated resources is a function of some factors that are not under the producer's control, e.g., the nature of consumer demand and the productive threat of competitors. A concern is what the capitalist contributes when it is the workers who actually do the work of changing the workplace. The capitalist's ownership of workplace assets makes changes in the means of production in part attributable to him.

Production systems that do not use the market will be less able to reward persons' contribution to the overall welfare. This is because without the market it becomes extraordinarily difficult to measure contribution. The market allows us to gauge the nature and strength of different consumer demands. Without such a mechanism, sales patterns are more likely to reflect governments' rather than consumers' choices as to products and prices, thus blocking any inference about the collective welfare. An objector might claim that workers or consumers can know which citizens have contributed without paying for it. They might cite the case of baseball teams that know which of their players have contributed without having to know what they get paid. The objector might conclude that non-market production systems are as capable of rewarding persons' contributions as market systems are. The issue is not whether non-market systems are as capable of rewarding contribution as market systems, but whether they will do so. My claim is that prices are part of a system that provides for the strongest incentives to gather and analyze the relevant data. The baseball-team analogy supports my contention. The value of a player is the result of numerous factors including the value of his likely replacement and the player's value to competitor teams.
Measuring these values requires sophisticated statistical studies that would probably not have been developed or applied without a competitive market for the information, a market that in turn depends on a competitive market for players. This connection between measuring contribution and prices becomes even more important for production that does not have the same type of fan interest. For example, the communication of the contribution of raw cotton to well-being when comparing its use in underwear versus towels is unlikely to be accurately measured unless persons have the incentive to be concerned with this particular comparison. 3.4 Argument No. 1: The contribution model satisfies the desert criteria The capitalist receipt of profits and losses on the basis of contribution aligns with the general pattern of desert. First, profit/loss receipt is preinstitutional (conceptually, but not causally, independent of institutions). The productive contribution to the well-being of others is not a fact that makes essential reference to institutions. The market in which this occurs may, although this is not clear, make essential reference to an institution but this merely serves as the context in which the contribution occurs. Second, the contribution is a past or current act and hence does not provide a forward-looking justification for capitalists keeping their profits and losses. Third, deserved profits and losses have the right structure for desert. They involve an in-virtue-of justification of treatment that is claimed to make the world a better place than its absence. Fourth, the capitalist keeping his profits and losses satisfies the fit and proportionality requirements. It intuitively seems appropriate that a person who creates economic value for others should receive economic value himself. The fit criterion is one that is a function of intuition rather than abstract argument. For example, consider the fittingness element of punishing culpable wrongdoers, giving good students high grades, and giving the fastest runner the prize. The consequentialist gains can in part account for these results, but they cannot justify the fittingness relation because of their forward-looking nature. Allowing capitalists to keep their profits and losses satisfies the proportionality requirement since there is a general correlation between profits (and losses) and the contribution to the well-being of others. The link to contribution also unifies the different elements (i.e., moral responsibility and the fittingness condition). Contribution as a ground of desert preserves the essential link to the agent's moral responsibility since it focuses on outcomes, which are closely linked to practical reasoning. This is because practical reasoning, at least on some accounts, has an intention as its conclusion and intentions often, if not always, refer to outcomes the agent wants to bring about. This close relation to practical reasoning is significant since it is practical reasoning that lies at the heart of responsibility. This brings in the fittingness since in the context of capitalist activity it is about the creation of economic value (although not necessarily for others) that the capitalist reasons. Entrepreneurial losses generally result from a misallocation of resources and where so caused reflect a diminishment of general welfare. There is an issue of whether all failures to use resources to contribute to the general welfare constitute such a misallocation given opportunity costs. 
I leave aside this issue since nothing in this paper rests on this claim. If contribution to the general welfare makes it desirable that the capitalist receive wealth, then a symmetrical account of desert would suggest that diminishment of welfare makes it desirable that she lose wealth. 3.5 Argument No. 2: The contribution model fits into a deeper explanation of desert The link to contribution also unifies the different types of act-based desert and relates it to character-based desert and in so doing provides the deeper explanation for desert. Through a person's responsible actions, he connects himself to certain values. For example, a person who culpably performs a rape connects himself to certain traits, e.g., cruelty, on which supervene value or disvalue, e.g., badness. Thus, our treatment of a person reflects the fact that he has responsibly connected himself to certain values. It is in part through the bringing about of outcomes that this connection occurs. A similar thing is true of character-based desert. One likely explanation of this unity is that character-based desert rests in the end on the responsible actions whereby a person formed his character in certain ways. The practical reasoning whereby he did this did not focus on the effects of his character but rather on particular acts. Nevertheless, in deciding to do certain acts the agent formed himself in certain ways thereby strengthening his connection to certain values. 3.6 Argument No. 3: The contribution model fits with two other areas of act-based desert I shall argue that the capitalist receipt of her profits and losses is deserved for its contribution to the overall well-being. I begin by noting that we intuitively hold that contribution is an appropriate ground of desert in two other areas: punishment and wages. 3.6.1 Deserved punishment. Punishment is deserved for the culpable unjust harm done on others. The harm factor explains why we think that persons who cause greater harm (e.g., murder and rape) deserve more punishment than do those who cause less harm (e.g., theft and battery). Harm can be interpreted as the wrongdoer's contribution to the unjust diminishment of another's well-being. Some of the other factors like culpability can't produce our intuitive sense of the proportionality of deserved punishment. Other factors, e.g., the utility produced by a punishment, cannot act as the ground since they are forward-looking. It might be thought that it is the risk of harm, not the actual harm done that could equally account for my intuitions.14 The risk of harm here is the product of its probability and magnitude. The problem with this suggestion is that likely harm is relevant only in so far as it relates to culpability, which suggests that this is just a restatement of the culpability condition. To see this, consider cases where persons pose a great risk of harm but where they are unaware of this risk. For example, imagine that persons who steal license plates generally sell them to underground garages. Unbeknownst to the license-plate thieves, the garages sell these license plates to gangs who then use them to get away with drive-by shootings. For reasons of police investigation, this fact is never publicized. 
It seems that the license-plate thieves do not deserve a severe punishment because even though they impose a great risk of harm on others, they are not culpable for doing so.15 The amount of deserved punishment involves such factors as the wrongdoer's culpability, the degree of injustice and, most significantly for my purposes, the harm that the wrongdoer brought about. 3.6.2 Deserved wages. The most plausible candidates for the ground of deserved wages are each party's sacrifice, hard work, or contribution. Deserved wages do not rest on a person's sacrifice. Persons with greater ability often can do tasks with considerably less sacrifice than those with lower ability. This is especially true where the cause of the greater ability is something that the more able persons enjoyed developing or which is largely the result of causal factors that did not require sacrifice, e.g., superior genetic endowment. If persons with greater ability are more able than others to accomplish the same type of tasks then it seems that they have sacrificed less to bring about the result. And if a deserved distribution tracks each individual's sacrifice, then the more able persons ought, as a matter of desert, to be given a smaller share of the social surplus. This seems counterintuitive. For example, if two builders do the same task for the same client but one is able to do it more efficiently than the other due to his greater abilities, it does not seem to be a demand of desert that the less efficient builder be paid more. A similar pattern occurs with regard to desert viewed in terms of hard work. For example, consider a case in which master and apprentice create a great work of art together and the apprentice ends up doing most of the legwork (e.g., more total brushstrokes), thereby putting in the greater effort. It intuitively seems that the master deserves a greater share of the rewards since his contribution (e.g., the idea or vision) is greater. If desert tracks hard work and if persons with greater ability need not work as hard to complete the same task as persons of lesser ability, viewing desert in terms of hard work will produce the same counterintuitive pattern of results as does the focus on sacrifice.16 Viewing deserved wages in terms of proportionality to each participant's objective contribution has several advantages. First, it gets around the problem of discrimination against persons of greater ability that characterized the sacrifice- and desert-based analyses of fairness. Second, this account tracks our intuitions with regard to products produced as a result of the efforts of multiple persons. Here it seems that the fitting reward for each person's participation is proportional to her contribution. This is why we often think it deserved that persons who work longer hours receive more pay but think differently where one person's labor requires considerably greater skills than the others. Consider a case in which two workers do a job. The first puts in 10 hours using a backhoe while the second puts in 50 hours using a shovel. Because of the greater efficiency of the backhoe, the first accomplishes ten times as much. It does not seem undeserved for the first to get paid considerably more. Third, this account is compatible with different accounts of value and models of contribution that might be used to fill out claims of fairness. 
In particular, the account is neutral with regard to whether the contribution of each person's labor is a function of its marginal product and whether the value of a contribution is a function of the socially necessary labor time that brought it about.17 My general point about the relation of contribution to deserved wages could be accepted while rejecting one or both of these ideas. Hence, in the context of wages, desert tracks each party's contribution. The focus on contribution unifies the different types of act-based desert, e.g., deserved punishment, wages, and profit, and unifies the act- and character-based desert-types. It unifies the types of act-based desert by viewing desert as a fitting response to values adopted when the agent acts to change in the world. In both cases, we respond to the values to which a person has connected himself. 3.6.3 Desert and effort. The most powerful objection to this argument focuses on the relation of desert to moral responsibility. The objector argues that since the ground of moral desert is something for which the agent is morally responsible, the wage a worker deserves depends on some factor that is substantially within her control. The objector continues that since a worker has substantial control over her efforts and not over her contribution or sacrifice, the former is the ground of desert. The objection thus rests on the following three propositions: 1. If something grounds moral desert, then it is something for which the agent is morally responsible. 2. If the agent is morally responsible for something, then she has substantial control over it. 3. The agent has substantial control over, and only over, her efforts.18 One problem with this argument is that it is not clear why the agent doesn't have substantial control over contribution. She may not have as much control as she has over her effort, but this is not required for something to ground desert. It should be noted that the worker has control over the production of certain objects, but not over whether these objects satisfy others' preferences. This is analogous to how a wrongdoer deserves punishment for doing actions that infringe on the rights of others in part where such actions cause the victim to suffer even though the wrongdoer does not control whether the acts bring about another's suffering. A second problem with effort theory arises when we try to identify the scope of effort that grounds deserved wages. For example, consider where one middle-aged worker, Al, has worked much harder than a second, Bob, to develop specific abilities or more disciplined work habits, and as a result is much more efficient. As a result, the two put in the same effort but Al is far more productive. Intuitively, Al seems deserving of higher wages. But if we count the efforts that a worker puts toward developing specific abilities or general abilities (e.g., discipline), then we end up looking at effort that goes far beyond the workplace. For example, Al's discipline might have been acquired in the college weight room. This broader scope breaks the link between the desert ground and moral responsibility. This is because in many cases the decision to develop or not develop general habits, and perhaps some specific abilities, is made where the agent lacks sufficient information about the relevance of these abilities to future jobs. For example, a player's efforts in learning the intricacies of basketball are often made without considering how this might influence his future employment as a talent scout. 
A third problem with effort theory is that the agent's efficiency is a function of her planning and her monitoring and adjusting her effort in response to feedback.19 These features are often the result of imagination, intelligence, and other mental events and capacities that do not result from conscious decision. Even if persons can plan or plan to plan, they often can't plan to plan to plan. A dilemma then arises as to whether these mental events (e.g., planning) ground deserved wages. If they do but do not result from conscious decision, then the desert ground is disconnected from moral responsibility. If they do not ground desert but do affect production, then some seemingly relevant effort will not ground desert. Also, on this second horn, wasteful and inefficient effort will generate as much desert as well-planned and well-executed effort. This is counterintuitive. This counterintuitive result persists even where we discount the failure to plan, monitor, or adjust by persons' lower capacities.

3.7 Conclusion

In general, capitalists deserve profits and losses for their contribution to the general welfare. Market and demand imperfections and the moral-free nature of prices (at least within the boundaries of exploitation) prevent the alignment from being a direct one, but the connection generally holds. The role of contribution preserves the central role of moral responsibility by placing value on the object of practical reasoning. It satisfies the fittingness and proportionality relations. It also provides a backward-looking and pre-institutional ground of moral desert. In addition, the thesis fits into a deeper explanation of desert in terms of persons connecting themselves to values. The contribution thesis also unifies several different types of act-based desert, specifically deserved profits and losses, deserved punishment, and deserved wages. I now turn to some of the objections to the thesis that capitalists deserve their profits and losses based on their contribution to collective well-being.

4. OBJECTIONS

There are roughly three different types of objections that are raised to my thesis. First, capitalists do deserve profits and losses but not on the basis of their contribution to the general welfare. Second, persons do have moral desert but capitalists do not deserve profits and losses. Third, persons do not have moral desert. I shall address the first two; the third takes us too far afield.

4.1 Capitalists deserve profits but not on the basis of their contribution to general welfare

On some accounts, deserved profits rest on things other than contribution. These theories fit into two types of grounds: forward-looking and backward-looking theories. An influential forward-looking theory is that put forth by N. Scott Arnold.20 Arnold views deserved profit as a type of institutional desert. Institutional desert consists of those rules that award costs and benefits in a way that best achieves the market's essential goal. He asserts that since the market's essential goal is the efficient distribution of scarce resources, market-based desert consists of the rule or rules that best achieve this goal. On his account, the rule (or a member of the set of maximally efficient rules) that does so involves capitalists keeping their profits and losses.
He argues that this reward system is maximally efficient because it provides capitalists with an incentive to produce efficiently, transfers resources to them in a manner proportionate to their past efficiency in reorganizing production, and communicates the efficient means of production to competitors. Hence, on his account, the effects of allowing capitalists to keep their profits and losses makes them deserved. This account has several problems. The most glaring is that it makes the effects of capitalists' receipt of profits and losses rather than their acts the ground of the desert claim. This is not an act-based ground and this severs any connection between moral responsibility and deserved profit. This is problematic to the extent that one thinks that desert necessarily involves a ground for which the agent is morally responsible. Another problem is that it is forward-looking. To the extent that desert is not forward-looking, this approach is mistaken. Arnold is aware of this objection and claims that institutional desert can be forward-looking where the institution in question is justified on forward-looking grounds. This conflicts with our intuitions about desert because it threatens to categorize a large set of forward-looking value judgments as desert judgments. This account also relies on an institutional desert. As mentioned above, this confuses desert whose grounds are institutional with desert whose ground is pre-institutional but whose effects causally involve institutions. It is hard to see how a ground can essentially refer to an institutional property when such properties are themselves neither something for which the agent is morally responsible nor something that is a good- (or right-) making property. The backward-looking theories agree that capitalists deserve their profit but claim that their desert rests on something else such as risk or the postponement of gratification. The notion that risk can ground desert is unconvincing. Intuitively persons do not deserve things just for taking a risk, independent of the nature and degree of risk.21 For example, persons who take extreme risks often seem to deserve little (e.g., persons who play the lottery). Similarly, persons who take risks that offer little benefit to the risk-taker or others also do not intuitively seem to deserve very much (e.g., persons who invest in mining gold in areas where there is little evidence that it occurs). Moreover, the fit element is not met. It is intuitively unclear how taking a risk makes it valuable for a person to receive a benefit or loss. If we link this risk to the contribution to others' welfare the fit does intuitively seem to be met, but it seems to be contribution and not risk that explains the intuitions involved. The postponement of gratification suffers from similar difficulties.22 Postponing gratification does not seem especially valuable; again it seems to be a function of the likely outcome and perhaps reason for the postponement. The fit element is again not met. Postponement of gratification intuitively seems to be neutral since it is not clear why it matters when a person experiences gratification. However, when linked to something like contribution the fit element does appear to be satisfied. Other theories might rest deserved profits and losses on hard work or sacrifice.23 These approaches intuitively fail to satisfy proportionality in other areas since inefficient or mentally slow workers do not seem to have strong deserts based on their hard work or sacrifice. 
For example, we intuitively think that a talented quarterback (Ken) who leads his team to repeat championships deserves more income for his playing than does a quarterback with mediocre talent who worked as hard as Ken but who could not read defensive alignments as well. In the context of punishment, there is a similar disconnection between hard work and sacrifice and deserved punishment. If we have analogous intuitions with regard to capitalists, and I think we do, then this suggests that these grounds will not satisfy the proportionality feature of desert. George Sher argues to the contrary that hard work grounds desert because the person is investing his time and energy into a project. Since this is in effect an investment of his life, it has value.24 This argument rests on the premise that since a person's life has value, whatever the life is invested in also has value. This is plausible because it appears able to satisfy the fit relation (because the life has value) and proportionality (since the amount of value tracks the amount of the agent's life mixed in). Like the investment of labor, it is not clear that a person can literally invest his life into a project. After all, a life is not an object but an object's duration in time and this seems incapable of being mixed into other physical objects.25 A more figurative account of mixing one's life into a project escapes this metaphysical concern but only by making Sher's position obscure. Assuming then that I am right in arguing that if capitalists deserve their profits and losses, then contribution grounds it, we still need to look at a challenge to the antecedent. 4.2 Persons deserve things but capitalists don't deserve their profits and losses John Christman raises a powerful objection to the idea that contribution grounds capitalist desert.26 Christman notes that in an imperfect free market, profit and loss are the result of consumer demand and the competitive threat posed by other competitors (the proximity and capacity of potential competitors). Christman then argues that profits and losses do not satisfy the proportionality requirement of desert because neither the consumer benefit nor the producer's contribution determines the size of profit.27 Christman provides three main reasons in support of these claims. First, consumer demand and competitive threat determine profit level but are not part of the consumer benefit. Second, since greater competitive threat benefits consumers, the size of profit is in fact inversely related to consumer benefit. Third, the consumer and competitor conditions are not things within the producer's control and hence not part of his contribution. The first reason should be quickly rejected. The proponent of the contribution thesis asserts that the shape and magnitude of consumer demand and competitive threat are the context in which the benefit is provided. He does not assert that it is part of the consumer benefit. This is analogous to the way in which a starting baseball player has more value to his team the less able his backup, but the weakness of the backup is not itself a benefit that the starter provides to his team. Similarly, the amount of misallocation of resources is the context in which the consumer benefit is provided, but it is not itself a benefit. The notion that the profit is inversely related to the benefit provided to consumers also rests on an error. 
The error occurs since Christman apparently holds the benefit fixed in different competitive environments and then argues that, given this level of benefit, additional profits merely harm the consumers. The problem here is that this is not something a contribution theorist would hold fixed. If one producer vastly improves production efficiency as compared to his competitors, then over time he contributes more to consumers than one who only slightly improves it. If this is correct and if profits track the degree of improvement, and I assume that they do, then profits will track contribution. The third assumption is trickier for it seems to make the contribution theorist allow that the desert ground involves causal factors outside of the agent's control. Every desert theory must make such an allowance, however, at least to some extent. This is because it is logically impossible for agents to deserve (or even control) the traits and conditions that make their actions and character possible. That is, acceptance of the following proposition will make moral desert impossible: If a person does not deserve to have X and X makes Y possible, then that person does not deserve Y. This can be seen in that no one can deserve the ability to exert effort or a life-sustaining environment.28 Such desert would only be possible for a self-caused being and such a being is impossible. However, once we allow the causal basis of desert grounds to include things outside the agent's control, then the door is sufficiently open to allow persons to deserve things based on the contribution to consumers even though this depends in part on contextual factors. This is because what desert requires is merely that the ground of desert is substantially under the agent's control. The choice of what, where, and when to produce meets this test. An objection that might be raised to this line is that the fact that persons have different opportunities to make a capitalist contribution undermines the case for desert.29 The different opportunities are relevant only if moral desert is comparative, either in general or in the economic context. This is not true of other areas of desert. Consider deserved punishment. How much a rapist deserves to suffer for his act is intuitively independent of how much punishment past and future rapists will be given. If, for example, misogynistic judges have given other rapists one-week sentences, this does not mean that such a penalty for future rapists is deserved. A similar thing appears to be true of character-based desert. A virtuous person does not appear to deserve a life of suffering even if this is what has been given to other similarly virtuous persons. Similar intuitions hold in the context of economic desert. If others who contributed greatly to consumers have received very little benefit for their contribution, then it hardly seems deserved for the next great contributor to get little. If this pattern holds, then moral desert is probably not relational. The explanation for this is that fittingness relation is a function of the values a person has connected himself to (and perhaps also the strength of the connection) and this is conceptually independent of others' actions and characters. Another objection is that capitalists don't deserve their profits and losses because desert rests on virtue or vice alone and capitalists' characters vary. 
The objector might be arguing that the negative desert that accompanies the vicious character of some capitalists overrides the positive desert that accompanies their contribution to others. This, however, is compatible with the conclusion of this paper since there is nothing about the conclusion that prevents different deserts from being combined to produce an overall desert analogous to the way that vector-forces combine to produce a net vector-force. To be interesting, then, the objector must be asserting the following. Strong Character Theory: Moral desert rests on, and only on, an agent's character. There appear to be four main arguments for this theory. One argument behind this notion is that a person is constituted by her character and hence it is what should determine a person's desert. The idea is that what should ground punishment is who we are rather than what we do. One concern with this argument is that desert rests on some factor for which we are responsible and, on a libertarian account of free will, we are fully responsible for who we are only if we chose to be that way.30 This is because on a libertarian account a person's responsibility must rest on factors that are in the end not traceable to factors outside of his control, such as his environment or genetics. On a fundamental level, only an agent's choices are not traceable to environment or genetics. If we view choices as a type of mental act, then this theory thereby asserts that desert fundamentally rests on acts. Hence, this first argument fails to support the Character Theory. A second argument is that the Strong Character Theory explains why we normally focus on acts. We focus on acts because we can't make reliable judgments of a person's character. We further think that acts ground deserved punishment only in so far as they reflect character. This explains why provocation and duress excuse or partially excuse the agent. They excuse because they involve a disconnection between a person's act and his character. The problem with this account is that the excusing effects of provocation and duress can also be explained in terms of their undermining a person's responsibility for his acts. They do so by introducing emotional forces that overcome a person's ability to control his acts (and would do so for an ordinary person). Thus the role of excuses such as provocation and duress does not support the Strong Character Theory. A third argument for the Strong Character Theory is that character is less subject to moral luck (external influences that affect a person) than other factors and hence a more appropriate ground for desert. The problem with this argument is that there is also constitutive moral luck, i.e., moral luck that shapes what character one has.31 This influence is obviously quite strong, which is why we think that it is important when raising children to have healthy environments. If the influence of moral luck prevents a factor from grounding desert, then this undermines the notion that character grounds desert. One might argue that moral luck shapes character formation less than other factors, e.g., acts and attempts. However, it is not clear what argument supports this claim. 
This is particularly true if we recognize the significant role genetics plays in determining a person's intelligence, personality, and life outcomes.32 It is not clear in what sense genetics plays a similar role in determining whether someone attempts to perform or performs an act, although it may come into play in so far as a person's character explains in part his thoughts and actions. Thus, the problem of moral luck does not provide clear support for the Character Theory.

A fourth argument, from intuitions, does support the Character Theory. Here our intuitions suggest that the world is a better place if virtuous rather than vicious persons receive increased well-being. Consider the following case, involving the allocation of opium. There are two persons with the same level of unhappiness, both of whom have a painful terminal illness. The first is virtuous, although his role as a space explorer has not allowed him to express it by directly benefiting many people. The second is vicious but was unable to express this for the same reason. It intuitively seems that if we have enough opium for only one person (and it can't be divided), then the world is a better place if we give it to the virtuous rather than the vicious person. These thought experiments seem a little hard to imagine, since it is hard to believe that very different characters over the lives of the relevant individuals didn't translate into a different number of wrongdoings. If one thinks that this difficulty does not undermine our confidence in our intuitions, then this thought experiment and ones like it still do not provide any support for the Strong Character Theory, because they do not rule out acts as a ground of desert.

The objector might concede that the Strong Character Theory is false, but assert that the only acts that ground desert are ones done from certain motives (e.g., duty or love of humanity). He might assert that capitalist acts are rarely done from such motives. The problem is that some acts that ground positive desert (e.g., benefiting one's children where one identifies their interests with one's own) are done out of self-interest. Also, some acts that ground negative desert (e.g., injuring others in the pursuit of an ideological goal or to promote the interests of one's children) are done out of desirable motives. If this is correct, then act-based desert need not reduce to motive-based desert. This makes sense, since that which grounds desert is something for which a person is responsible, and it is not clear that persons can select the motive from which they act. In conclusion, then, there is no support for the notion that acts don't ground desert. If this is correct, then in deciding whether capitalists deserve their profits, we have to be concerned with both their character- and act-based desert. The latter includes capitalist contributions to the welfare of others.

One last objection that might be raised is that capitalist desert rests on the legitimacy of the capitalists' ownership of the means of production, both in general and in the particular ways found in the actual world.33 A defense of this claim will take us far afield, but arguments based on either autonomy or consequences are available to support the claim of legitimacy. A final type of objection rests on the denial of moral desert in general. But a general defense of desert is a project that is outside the scope of this essay.
Such a defense will likely rely on showing that our considered moral judgments cohere around certain intrinsic "better-than" judgments in which the moral ground is something for which persons are morally responsible, and that the existence of moral desert best explains this coherence. I will instead merely stipulate that my conclusion depends on the undefended claim that persons deserve things.

5. CONCLUSION

In general, capitalists deserve profits and losses for their contribution to the general welfare. The role of contribution preserves the central role of moral responsibility, satisfies the fittingness and proportionality relations, and provides a backward-looking and pre-institutional ground of moral desert. In addition, the thesis fits into a deeper explanation of desert in terms of persons connecting themselves to values. The contribution thesis also unifies several different types of act-based desert, specifically deserved profits and losses, deserved punishment, and deserved wages. Hence, to the extent that desert-satisfaction is relevant in the selection of an economic system, this result strengthens the case for capitalism.34

REFERENCES

Arnold, N. S. 1987. Why profits are deserved. Ethics 97: 387-402
Arthur, J. and W. H. Shaw, eds. 1991. Justice and economic distribution, 2nd edn. Prentice Hall
Buchanan, A. 1991. Efficiency arguments for and against the market. In Justice and economic distribution, 2nd edn, ed. J. Arthur and W. H. Shaw. Prentice Hall: 182-92
Christman, J. 1988. Entrepreneurs, profits, and deserving market shares. Social Philosophy & Policy 6: 1-16
Cohen, G. A. 1979. The labor theory of value and the concept of exploitation. Philosophy and Public Affairs 8: 338-60
Demsetz, H. 1972. Wealth distribution and the ownership of rights. Journal of Legal Studies 1: 223-32
Feinberg, J. 1970. Justice and personal desert. In his Doing and deserving. Princeton University Press: 55-94
Feldman, F. 1995. Adjusting utility for justice: a consequentialist reply to the objection from justice. Philosophy and Phenomenological Research 55: 567-85
Feldman, F. 1997. Desert: reconsideration of some received wisdom. In his Utilitarianism, hedonism, and desert. Cambridge University Press: 175-92
Hurka, T. 2001. The common structure of virtue and desert. Ethics 112: 6-31
Kagan, S. 1999. Equality and desert. In L. P. Pojman and O. McLeod, eds., What do we deserve? Oxford University Press: 298-314
Kershnar, S. 2002. Private property rights and autonomy. Public Affairs Quarterly 16: 231-58
Lomasky, L. 1987. Persons, rights, and the moral community. Oxford University Press
McLeod, O. 1999. Desert and wages. In L. P. Pojman and O. McLeod, eds., What do we deserve? Oxford University Press: 271-82
Metz, T. 2000. Arbitrariness, justice, and respect. Social Theory and Practice 26: 25-45
Miller, D. 1976. Social justice. Oxford University Press
Nagel, T. 1982. Moral luck. In Free will, ed. G. Watson. Oxford University Press: 174-86
Nathanson, S. 1998. Economic justice. Prentice Hall
Parfit, D. 1984. Reasons and persons. Oxford University Press
Parker, R. 1991. Blame, punishment, and the role of result. In Philosophy of Law, 4th edn, ed. J. Feinberg and H. Gross. Wadsworth: 732-38
Pinker, S. 2002. The blank slate. Viking
Rawls, J. 1971. A theory of justice. Harvard University Press
Sadurski, W. 1985. Giving desert its due: social justice and legal theory. Reidel
Schweickart, D. 1991. Capitalism, contribution, and sacrifice. In Justice and economic distribution, 2nd edn, ed. J. Arthur and W. H. Shaw. Prentice Hall: 168-81
Sen, A. 1992. Inequality reexamined. Harvard University Press
Sher, G. 1987. Desert. Princeton University Press
Simmons, A. J. 1995. Historical rights and fair shares. Law and Philosophy 14: 149-84
Slote, M. A. 1999. Desert, consent, and justice. In L. P. Pojman and O. McLeod, eds., What do we deserve? Oxford University Press: 210-23
Waldron, J. 1983. Two worries about mixing one's labour. Philosophical Quarterly 33: 39-42
Waldron, J. 1992. Superseding historical injustice. Ethics 103: 4-28
Watson, G. 1977. Skepticism about weakness of will. Philosophical Review 86: 316-39
Wertheimer, A. 1996. Exploitation. Princeton University Press
Zaitchik, A. 1977. On deserving to deserve. Philosophy and Public Affairs 6: 370-88

Notes

1 See, e.g., Feldman (1995), Kagan (1999), and Hurka (2001).
2 Parfit (1984: 493-502).
3 The idea for these points comes from Fred Feldman (1997), especially 182-86.
4 Arnold (1987: 390 n.6).
5 This condition is part of (1)(a) if benefit or cost X is individuated in terms of both type and amount.
6 The idea for this point comes from Thad Metz.
7 A developed argument for this claim is found in Watson (1977: 316-39).
8 For the autonomy defense of private property rights, see Loren Lomasky (1987) and Kershnar (2002).
9 For arguments that current property rights can survive claims of injustice in the distant past, see Simmons (1995) and Waldron (1992).
10 Demsetz (1972).
11 The notion of a moral-free zone is ambiguous between the claim that the transacting parties are morally permitted to arrive at any price and the claim that any price is equally good. I mean the former, whereas some factors, e.g., deserved wages, affect the latter.
12 This account comes from Wertheimer (1996: 20ff., 211). An account of disproportionate gain should focus on whether the stronger party is taking unfair advantage of the weaker party, not on whether the stronger party takes advantage of an unfairness to the weaker party. In the latter case the stronger party is not the cause of the unfairness to the weaker party. Focusing on the latter would make most transactions, no matter how rational and appropriate given the background conditions, unfair if the background conditions reflect injustice or unfairness, and this seems counterintuitive. Unfairness in the context of a disproportionate gain is a property of particular transactions, not a property of macrostate distributions of wealth (or other resources). This distinction is useful because exploitation can take place within a just economic system and because a non-exploitative transaction can take place within the context of an unjust economic system. Also, since the wrong-making features of distributive injustice and economic exploitation might differ, the two should be kept separate for the purposes of analyzing the permissibility of different acts.
13 Buchanan (1991: 184ff).
14 This view can be seen in Parker (1991: 732-38).
15 One might invoke proximate cause in explaining this intuition. However, if proximate cause is a stand-in for moral responsibility, and I think it is, then we still need an explanation as to why the license-plate thieves are not responsible for this further harm.
16 For a discussion of a type of desert grounded by hard work, see, e.g., Sher (1987: ch. 4). For a discussion of desert grounded by sacrifice, see, e.g., Feinberg (1970: 55-94). For a discussion of desert grounded in contribution, see, e.g., Slote (1999: 210-23). Some theorists argue that there are several desert types and that these types have different types of grounds. See, e.g., Sher, ibid. and McLeod (1999: 271-82).
17 An argument against the marginal product account can be found in Sen (1992). An argument against the labor theory of value can be seen in Cohen (1979: 338-60).
18 Thad Metz argues that contribution is not substantially under our control because it is influenced by endowment (cf. Metz 2000). Endowment also affects the intensity, duration, and decision to exert effort and the planning that directs it. It is not clear why endowment substantially undermines our control for the former and not the latter, especially if we consider, as I argue below, the role of planning.
19 The idea for this paragraph comes from George Sher, "Effort and imagination", unpublished manuscript.
20 Arnold (1987: 387-402).
21 This point can also be seen in Christman (1988: 11-12) and Arnold (1987: 395). For example, persons who risk their health by copying the stunts in Jackass: The Movie (Paramount 2002) do not deserve profit.
22 Postponement of gratification likely makes sacrifice the basis for desert, and past a certain amount of wealth it is no longer clear that postponing gratification is a sacrifice. Schweickart (1991: 179).
23 The emphasis on hard work can be seen in Miller (1976: 109), Sadurski (1985: 134-35), and Sher (1987: ch. 4); the emphasis on sacrifice in Feinberg (1970: 55-94).
24 Sher (1970: 60-62).
25 The idea for this point comes from Waldron (1983: 39-42).
26 Christman (1988: 12-15). A similar point can be seen in Nathanson (1998: 56).
27 Christman, 13.
28 The idea for this point comes from Sher (1970: ch. 2, esp. 26). Alan Zaitchik points out that the idea behind this attack on desert is that since desert cannot rest on a foundational base it must rest on an impossible infinite regress, Zaitchik (1977: 373). Both arguments are aimed at Rawls (1971: 104, 310-15).
29 The idea for this point comes from Gillian Brock.
30 The notion that desert must rest on factors that we control has been challenged, Feldman (1997: 178-84). I claim that Feldman's counterexamples (e.g., injured parties owed compensation) capture claims rather than desert.
31 This point can be seen in Nagel (1982: 181-82).
32 Pinker (2002: 372-78).
33 Schweickart (1991: 175-76).
34 I am grateful to John Christman, Thad Metz, and George Schedler for their extremely helpful comments and criticisms of this paper.

From checker at panix.com Fri Jul 1 17:38:40 2005 From: checker at panix.com (Premise Checker) Date: Fri, 1 Jul 2005 13:38:40 -0400 (EDT) Subject: [Paleopsych] Review of Richard Swinburne, ed., Bayes's Theorem Message-ID:

Bayes's Theorem (Proceedings of the British Academy, vol. 113), edited by Richard Swinburne, Oxford University Press, 2002, 160 pages
Reviewed by Paul Anand, The Open University and Health Economics Research Centre, University of Oxford
Economics and Philosophy (2005), 21:139-142 Cambridge University Press
DOI: 10.1017/S026626710422051X

This short collection of essays celebrates the 200th anniversary of Bayes's Theorem, famous or notorious depending on one's perspective, as the basis for a non-classical approach to statistical inference.
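[For reference, the theorem the volume celebrates can be stated in one line. For a hypothesis H and evidence E,

    P(H | E) = P(E | H) P(H) / P(E)

that is, the posterior probability of H is its prior probability reweighted by the likelihood P(E | H) and renormalised by P(E). The contrast drawn later in this review between prob(observation/hypothesis) and prob(hypothesis/observation) is the contrast between the likelihood and the posterior in this identity.]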
Given the steady rise of Bayesianism in econometric and related statistical work, a volume - even one by philosophers - devoted to the theorem responsible should be of considerable interest to many scientists, economists and econometricians included. Comprising four papers based on presentations given to a British Academy symposium, an additional article by David Miller, a biographical note by G. A. Barnard first published by Biometrika in 1958 and a version of Reverend Thomas Bayes's original essay presented posthumously by Richard Price to the Royal Society in 1763, the collection highlights the existence of a small (and important) body of work that continues to examine conceptual issues in the foundations of statistics. In this review, I shall make brief comments on the contributions but say most about the papers by Sober and Howson.

In a substantial introduction (chapter 1), Richard Swinburne locates Bayes's Theorem in a world that permits many concepts of probability. He begins with some preliminary remarks on the meaning of probability and a distinction between logical or evidential probability on the one hand, and statistical probability on the other, due to Carnap. He offers a summary of some probability axioms stated as relating to classes first, and then to propositions, and though he says little about the difficulties that are said to follow from the latter approach, he provides a simple account of the Dutch Book argument, claiming that it is strongest when applied to bets that take place simultaneously (a point that parallels a similar issue in the literature on rationality and intransitive preference - see, for example, Anand (1993)). The introduction then develops a thesis about limits to the justification of prior probabilities: only a priori criteria, including the concept of simplicity, can justify a world view in which certain (probability-affecting) factors operate everywhere, so it is maintained. It may be confusing to have an editor who claims to take a line different to that of his contributors (and all of them at that) on the importance of a priori criteria, but the disparity is not one that seems to interfere with the analysis that follows.

The essays themselves begin with a chapter by Elliott Sober whose title, "Bayesianism - its Scope and Limits", indicates, precisely in my view, how we should think of questions concerning Bayesian inference. Sober's description of the issues is clear, though it might have benefited from a discussion of the way in which Bayesian inference is actually used by advocates of this approach to inference. (The later chapter by Philip Dawid, a statistician, fills this gap.) Nonetheless, the points about the difficulties faced by a version of Bayesianism based on priors grounded in insufficient reason, and about the shift to a subjective approach which fails the objective needs of scientific method, are well made. These observations leave open the possibility that Bayesianism with subjective priors might be valid in decision theory even if it were not useful for scientific inference - a position that seems consistent with Sober's view but one which awaits justification. Sober's discussion proceeds to an examination of likelihoodism - an emphasis on prob(observation/hypothesis) as opposed to probabilistic approaches which emphasise prob(hypothesis/observation) - which he uses as a foil, ultimately, against Bayesianism. The analysis begins by noting that likelihoods are "often more objective than prior probabilities,"
notes an absurd consequence of the likelihood approach and goes on to argue that what likelihoodism really provides is an account of support for a hypothesis, rather than a measure of its overall plausibility. The discussion is interesting but is linked to statistical inference in biological applications in such a way that many economists would, unfortunately, not find it easy to draw lessons from it for their own work. However, the same cannot be said for remarks designed, successfully in my view, to interest readers in Akaike's (1973) framework for (econometric) model selection, which aims at finding models that are predictively accurate but not necessarily true. Anyone who might use empirical evidence could profitably read this section, which casts Akaike's approach as an alternative framework to Bayesianism. The fact that it penalises less simple models may well be a significant advantage over Bayesianism, but the claim that this can be justified on principled grounds remains to be proven. At least from this discussion (which its author allows is not comprehensive), it seems that Akaike's approach to predictive accuracy parallels the move from R-squared to adjusted R-squared statistics. However, just because Akaike's statistic makes a deduction for parameters used and calls the result an unbiased estimate of predictive accuracy does not, of itself, tell us that simplicity is, on conceptual grounds, epistemically relevant, a point echoed in remarks by the following contributor.

In chapter 3, Colin Howson provides a substantial and wide-ranging essay in which he argues, essentially, for what he calls the "Second Bayesian Theory" (SBT), by which he appears to mean the probabilistic component of theories by Ramsey and de Finetti. (Economists normally refer to this as the theory of subjective probability, and some may not be aware of, but will want to consult, Howson and Urbach's (1993) comprehensive and witty introduction to the literature of which the chapter is part.) This paper is divided into a longer part that surveys, over several subsections, some of the background, followed by a shorter, more technical and focused discussion of issues surrounding a claim about the logical foundations of the probability calculus. The survey section deals with topics that include Fisher and significance tests, Lindley's paradox, likelihood, priors and simplicity, with the aim of raising concerns that Bayesianism can resolve but which the classical approach and its variants may not. The second, shorter part of Howson's essay is devoted to a discussion, centered around his previously published theorem, of the consistency of SBT. (This is difficult as it brings together ideas from optimisation and logic and then does a lot of work using non-technical language.) Understanding the relations between logic and probability and the logical basis of a probabilistic calculus are crucial issues touched on here, though I believe that further comments would have helped the reader assess the project. Howson shows that SBT is an "authentic logic", but given that SBT (from de Finetti on) is an axiomatic theory anyway, I wonder how Howson's arguments for consistency relate to and compare with the claim that SBT is normatively desirable on account of its axioms. One might also ask if being an authentic logic would turn out to distinguish between alternatives to SBT -
we now know that a wide range of non-expected, intransitive utility theories can be formalised and normatively justified, so it would be useful to know how much significance we should attribute to being an authentic logic. Put differently, if the "probability axioms are the complete logic of probable inference", as Howson states, what, if anything, does this tell us about the merits of alternative concepts of credence or uncertainty? This is not a criticism of Howson but it is a reminder that the revolution in the foundations of decision theory over the past 30 years means that nothing about the theory of choice (probability included) can be taken for granted.

Of the remaining three contributions, it is fair to say that that by Dawid is the most applied and decision-theoretic. His discussion of legal decisions provides a good (if too rare) mix of application and foundational issues that could be useful for those who teach foundations of decision theory. There is a tendency for some Bayesians to propose the approach as a panacea for a range of inference problems that require different concepts of credence (rather than meanings of probability), and there is some evidence of that tendency here too. Nonetheless, the framework for comparing approaches Dawid develops has been nicely honed and repays reading whatever one's own standpoint. In contrast, John Earman's chapter has a more historical flavour (unlike his 1992 Bayes or Bust), taking, as it does, themes that tend to interest Bayesians and examining them in the context of Hume's analysis of evidence for miracles. There are some potential points of contact with modern concerns, though these are not Earman's primary focus, and the demolition job he performs is likely to be of most interest to Hume scholars. The last chapter in this collection, David Miller's discussion of the propensity view, seems interesting in its own right, though I did feel there was a question as to whether the paper is really sufficiently relevant to the rest of the debate to merit inclusion. That quibble apart, this book provides researchers on the edge of the field with a sense of some key current concerns as well as a useful reference point for those wanting to explore the foundations of statistics (or decision theory) in more depth.

REFERENCES

Anand, P. 1993. Foundations of rational choice under risk. Oxford University Press (reprint 2002)
Earman, J. 1992. Bayes or bust. MIT Press
Howson, C. and P. Urbach. 1993. Scientific reasoning: the Bayesian approach. Open Court (2nd edition)

From checker at panix.com Fri Jul 1 17:38:46 2005 From: checker at panix.com (Premise Checker) Date: Fri, 1 Jul 2005 13:38:46 -0400 (EDT) Subject: [Paleopsych] Boston Globe: Daddy, what did you do in the men's movement? Message-ID:

Daddy, what did you do in the men's movement?
http://www.boston.com/news/globe/ideas/articles/2005/06/19/daddy_what_did_you_do_in_the_mens_movement?mode=PF

Robert Bly may have retreated to his sweat lodge, but the reconsideration of masculinity and fatherhood he helped initiate hasn't ended.

By Paul Zakrzewski | June 19, 2005

THE LAST TIME most of us heard a joke about grown men getting in touch with themselves by beating on drums or hunkering down in sweat lodges, the first Gulf War was in full swing, and Nirvana ruled the airwaves. But for a brief moment in the early 1990s, the ''men's movement" was everywhere you looked, from Jay Leno to ABC's ''20/20" to the pages of Esquire and Playboy.
And if the movement was never particularly large or diverse - according to Newsweek, about 100,000 mostly white, middle-aged men had attended a patchwork of weekend retreats, conferences, and workshops by 1991, when the movement peaked - it struck a chord with a country that appeared confused about contemporary manhood. Books by Sam Keen, Michael Meade, and other leading figures in the movement sold hundreds of thousands of copies, while Robert Bly's ''Iron John," a cultural exegesis on wounded masculinity in the form of an obscure fairy tale, spent more than 60 weeks on the New York Times bestseller list. Arguably, the Bly-style mythopoetic men's movement, as it was known, can be traced back to the late 1970s, to men's consciousness-raising groups and masculinity classes in places like Cambridge, Berkeley, and Ann Arbor. However, it was Bly's collaboration with Bill Moyers on the 1990 PBS documentary ''A Gathering of Men" that turned the groundswell of retreats and gatherings into a national phenomenon. With his lilting Minnesota brogue and occasional impish aside, the grandfatherly Bly talked about the Wild Man, avatar of a kind of inner masculine authenticity lost during the Industrial Revolution, when fathers left the homestead (and their sons) behind and went to work in factories. With the lore and lessons of manhood no longer passed on to younger generations, men lost a certain kind of male identity, even the sense of life as a quest. ''Many of these men are not happy," Bly wrote of today's ''soft males," as he called them. ''You quickly notice the lack of energy in them. They are life-preserving, but not exactly life-giving." Today, however, the drums have largely fallen silent. While there are still weekend retreats - for example, the ManKind Project, which boasts more than two-dozen centers worldwide, conducts ''New Warrior Training Adventures" for some 3,000 men every year - these are mostly affairs for the already initiated. ''The men's movement as we knew it has gone underground," says Ken Byers, a San Francisco-based writer and therapist who attended dozens of retreats in the early 1990s. ''Unless you're involved in that underground, there's very little way for the average American man to connect with it." Of course, Bly's mythopoetic movement was only one of several, often contradictory men's movements. Since the 1970s, ''men's rights" advocates have pushed for fathers' parental rights, while profeminist groups such as the National Organization of Men Against Sexism and the national network of Men's Resource Centers want men to become more accountable for sexism, homophobia, and violence. And in the wake of Bly, new mass men's movements seized the media spotlight. In 1995, Nation of Islam leader Louis Farrakhan organized the Million Man March, to inspire African-American men to rebuild their lives and neighborhoods. Meanwhile, by the mid-1990s, the Christian evangelical Promise Keepers were packing hundreds of thousands of men into football stadiums each year for rallies that, like the ''muscular Christianity" movement a century before, encouraged them to reclaim their masculinity by retaking control of their families with the help of Jesus Christ. So what happened to Bly's mythopoetic movement? The negative media coverage, such as Esquire's ''Wild Men and Wimps" spoof issue in 1992, didn't exactly help. But there were other factors, too. 
For one thing, even many of the men not inclined to dismiss Bly-style gatherings as silly found themselves mystified by the rarefied Jungian concepts tossed around the campfires like so many marshmallows. ''Many of the men I saw worked really hard at trying to figure out the mythology, but they just weren't getting it in the belly," says Byers, echoing the title of Sam Keen's bestselling book.

Unlike the Promise Keepers, which held weekly check-in sessions, there was no follow-up work done once participants left their weekend retreats. ''It was an event, a spectacle," says Michael Kimmel, professor of sociology at SUNY Stony Brook and the author of ''Manhood in America," a 1997 cultural history of masculinity. ''You were supposed to be changed by it and then go home."

Part of the problem, too, was the mythopoetic movement's complex relationship to feminism. On the one hand, some feminists construed Bly's attack on feminized males as reactionary. ''I'd hoped by now that men were strong enough to accept their vulnerability and to be authentic without aping Neanderthal cavemen," Betty Friedan told The Washington Post back in 1991. (Bly denied that there was anything anti-woman about his ideas.) What's more, the movement itself could never get beyond the fact that unlike the feminist movement - which itself had lost steam by the 1990s, as women achieved more economic and financial power - Bly and his followers never had any clear political agenda to drive them forward.

Then again, perhaps the death of the men's movement has been greatly exaggerated. Like the women's movement, it may just be that its biggest lessons have simply been absorbed into the culture, minus the pagan fairy tales and faux Native American rites. For example, it's evident to any man who carves out time in his busy week to meet his buddies for a drink that, as Bly suggested, men benefit from time spent in ''ritual space" - that is, with other men. (Full disclosure: For the last year I've met with other men in their 30s and 40s for a weekly discussion group in Jamaica Plain, where we talk about everything from career issues to complicated relationships with our fathers.)

And whether or not they can tell their Wild Man from their King (another figure in Bly's complex mythological scheme), many younger men want to be more engaged in family life than their own fathers were. In 1992, about 68 percent of college-educated men said they wanted to move into jobs with more responsibility, according to a recent study by the Families and Work Institute. A decade later, the number fell to 52 percent. Meanwhile, a 2000 study by the Radcliffe Public Policy Center found that the job characteristic most often ranked as very important by men ages 21 to 39 was a work schedule that allowed them to spend more time with their families. Seventy percent said they were willing to sacrifice pay and lose promotions to do so.

Still, the reality of being a good father often poses more of a challenge for these young men than they expect, often in ways that Bly himself might have explained. ''One of the central problems is that the image that men have of immersing themselves in families is a very maternal one," says Mark O'Connell, a Boston-based psychologist and the author of the recent book ''The Good Father: On Men, Masculinity, and Life in the Family" (Scribner). ''They are trying to follow something that isn't altogether authentic and reflective of the different strengths that men bring to the table."
America, of course, is a different place than it was when Bly wrote his best seller. Today, when men get together in organized men's groups, they are more likely to talk about Jesus Christ than Iron John. Nevertheless, there's more than a touch of Bly in John Eldredge's ''Wild at Heart: Discovering the Secret of a Man's Soul," an evangelical call-to-arms that has sold 1.5 million copies since it was published in 2001 and that has helped launch a series of weekend workshops. Men still go into the woods, but instead of wrestling with the Wild Man, they meet Jesus, described as a kind of fierce, unfettered energy that transforms ''really nice guys" (a version of Bly's ''soft males") into passionate beings ready to tackle life's adventures, including romantic relationships. ''Not every woman wants a battle to fight, but every woman wants to be fought for," Eldredge observes in a passage that might have been written before Betty Friedan was born.

Meanwhile, after years of dwindling attendance due to financial problems, the Promise Keepers are staging a comeback this summer, hoping to fill 20 stadium rallies across the country. And in March, the first annual Catholic Men's Conference, inspired by the Promise Keepers, attracted 2,200 men to Boston, who came to listen to speakers ranging from Archbishop Sean P. O'Malley to ''Passion of the Christ" star Jim Caviezel to Bush administration official James Towey. (According to organizer Scot Landry, the event's success was fueled by the growing number of men's fellowship groups in the Boston Archdiocese, which have spread from a handful of parishes 5 years ago to between 30 and 50 today.) The emphasis was on the importance of traditional Catholic teachings on sexuality and the family, under which men - not their wives - are called to be ''the spiritual leaders of your home," as one speaker put it.

Even if we're not likely to see maverick poets and Jungian therapists on television specials and magazine covers again any time soon, one thing is clear. The Bly-style men's movement highlighted a powerful urge for men to commune with each other that persists today, even among those who wouldn't be caught dead within miles of a drumming circle. ''There was something about Bly's language and approach that was easy to caricature," says O'Connell. ''But he was on to something really important, and a lot of what he was talking about got lost in translation."

Paul Zakrzewski is the editor of ''Lost Tribe: Jewish Fiction from the Edge" (Harper Perennial). He lives in Jamaica Plain. E-mail pzak at verizon.net.

From checker at panix.com Fri Jul 1 17:39:23 2005 From: checker at panix.com (Premise Checker) Date: Fri, 1 Jul 2005 13:39:23 -0400 (EDT) Subject: [Paleopsych] NYT: With Music for the Eye and Colors for the Ear Message-ID:

With Music for the Eye and Colors for the Ear
New York Times, 5.7.1
http://www.nytimes.com/2005/07/01/arts/design/01kimm.html

[Another review of the exhibit, whose terrific introduction by the curators, attended by 150 people (perhaps a record at the Hirshhorn), I reported on earlier.]

By MICHAEL KIMMELMAN

Washington

"Visual Music" is a fine-tuned, highly diverting, deceptively radical exhibition about the relationship of music and modern art, lately arrived here at the Hirshhorn Museum. In its hippy-trippy way, it rewrites a crucial chapter of history. Its subtitle is "Synaesthesia in Art and Music Since 1900."
Aristotle formulated the idea that each of the five senses - smell, taste, touch, hearing and sight - had its own proper and distinct sphere of activity. There were overlaps, he said (movement pertained both to sight and touch); and he speculated that the mysteries of color harmony might have something to do with musical harmony, an idea that would resonate for centuries. Musical harmony, as an expression of geometry, was thought to be useful to the study of art and architecture from the Renaissance on. But the notion that there was an essential separation among the sensual spheres persisted into the early 19th century. At the same time reports began to emerge of rare people who said they experienced two sensations simultaneously: they saw colors when they heard sounds, or they heard sounds when they ate something. The condition was called synaesthesia. It's no coincidence that scientific interest in synaesthesia coincided with the Symbolist movement in Europe, with its stresses on metaphor, allusion and mystery. Synaesthesia was both metaphorical and mysterious. Scientists were puzzled. People who claimed to have it couldn't agree about exactly what they experienced. "To ordinary individuals one of these accounts seems just as wild and lunatic as another but when the account of one seer is submitted to another seer," noted the Victorian psychologist and polymath Sir Francis Galton in 1883, "the latter is scandalized and almost angry at the heresy of the former." I have come across via the color historian John Gage an amusing account from some years later by the phonologist Roman Jakobson, who studied a multilingual woman with synaesthesia. The woman described to him perceiving colors when she heard consonants and vowels or even whole words: "As time went on words became simply sounds, differently colored, and the more outstanding one color was, the better it remained in my memory. That is why, on the other hand, I have great difficulty with short English words like jut, jug, lie, lag, etc.: their colors simply run together." Russian, she also told Jakobson, has "a lot of long, black and brown words," while German scientific expressions "are accompanied by a strange, dull yellowish glimmer." "Visual Music" is full of strange, glimmering yellowish and other colored shapes. What might visual art look like if it were akin to music? That's the question the various artists here asked themselves - a question that goes back to Richard Wagner, the Symbolists' patron saint for his dream of a Gesamtkunstwerk, a universal artwork uniting music and art. Painters like Kandinsky, Frantisek Kupka, Mikhail Matiushin (he was a Russian composer, influenced by Arnold Schoenberg, who like Schoenberg also painted) and Arthur Dove, with whom "Visual Music" begins, elaborated on Wagner's theme. They painted pictures that claimed to have the condition of music - pure abstractions with occasional shapes that resembled staves, musical notes or violins. Through the medium of musical metaphor, in other words, synaesthesia gave birth to abstract art. This is the show's quite radical, if not altogether original, point: that abstraction's history is not just the familiar sequence of isms (Constructivism, Suprematism, Abstract Expressionism, Minimalism) but also the consequence of a particular idea. The idea is synaesthesia. 
And its protagonists, while including a few famous names like Kandinsky, were on the whole cultish and now forgotten figures or total outsiders to the art world: they were filmmakers, animators, computer geeks and 1960's psychedelic light show performers. Blurring high and low, their legacy represented not a corruption or cul-de-sac of traditional modernism but a parallel strand of it, which has made its way, willy-nilly, right up to the present. The show here ends with digitally enhanced multimedia works by Jennifer Steinkamp, Jim Hodges and Leo Villareal.

Organized by Kerry Brougher and Judith Zilczer at the Hirshhorn, and Jeremy Strick and Ari Wiseman at the Los Angeles Museum of Contemporary Art, "Visual Music" originated in Los Angeles, aptly, since much of what's on view consists of films and other moving images, made by artists from California, a few of whom also worked for Hollywood.

This was inevitable. Abstract painters in the early 20th century tried to emulate musical attributes, like rhythm, harmony and tonality, but music is temporal. It moves through time. And to suggest temporality or movement in two dimensions via staggered lines, vortices, cones or whatever spatial device, doesn't suffice. So it fell to experimental filmmakers like Léopold Survage, Viking Eggeling, Hans Richter and Oskar Fischinger to pick up from where Kandinsky left off and devise abstract movies, at first silent, then animating musical scores. The shapes they used were pretty much the same as the ones in the paintings - swirling lines, concentric circles, zigzags, confetti bursts that now pulsed, shimmied and flickered.

A slew of devices and charming gimmicks followed. The color organ was a clunky box with a silent keyboard, prisms, mirrors and a projector that let a player compose an abstract moving picture. Painters like Daniel Vladimir Baranoff-Rossiné and Stanton Macdonald-Wright, one of its inventors, having come up against the limitations of painting, tried their hands at color organs. New oscilloscopes produced wavy moving patterns that filmmakers like Hy Hirsh could set to jazz or Afro-Cuban music. Len Lye produced cameraless animations by painting straight onto filmstrips. There's a wonderful hand-painted animation by Lye from 1935, "A Colour Box," set to a jaunty tune, which ran as a hit short before feature films in British theaters; it includes, midway through, an advertisement for the postal service, which sponsored Lye, the initials for the post office dancing briefly across the screen.

Among my own favorite confections here are ones by Thomas Wilfred, the Whitneys and Jordan Belson. Wilfred, a Danish lute player by training, born in 1889, contrived an instrument he called the clavilux that produced light displays, which, to modern eyes, resemble lava lamps and Hubble space photos before the fact. From Los Angeles, John and James Whitney exploited nascent computer technology, starting in the 1950's, to compose hypnotic, multiscreen abstract films set to raga and other forms of zone-out music. At the Hirshhorn you can recline in the darkness on huge, cushy ottomans, while the Whitneys' images play retinal games with your eyeballs. And from San Francisco, Mr. Belson collaborated on polymorphous audiovisual concerts in the late 1950's and early 1960's that set the stage for the era's psychedelic light shows. A few of these, by collectives like Single Wing Turquoise Bird and Joshua Light Show, are screened in a room at the Hirshhorn, minus only the bongs. In turn, such events inspired Mr.
Belson toward more mind-bending, kaleidoscopic films suggesting cosmic swirls and mixing different brands of music. Nearly 80 now, he was commissioned by the Hirshhorn to produce a new work for this show, "Epilogue," its lush and misty optics synchronized to a score by Rachmaninoff. All these swimmy works begin to blend together after a while, but what's remarkable about seeing them in one place is precisely that they do look so similar. I said earlier that the experimental films from the 1920's on used the same vocabulary as the paintings from the turn of the century. Likewise, the newest computer-generated installations. "Visual Music," aside from rewriting history, is also a show about failure - the failure of metaphor, which no technology may overcome. In all this time, no perfect way to make art into music has been devised. Squiggly lines and pulsing colors approximate music but they can't ever become it. Aristotle was right. The senses do have their own domains. Music is moving in ways visual art isn't and vice-versa, and that's why they're both necessary. Like Wagner's Gesamtkunstwerk, the dream of making one art that's like another is just a utopian fantasy, born of a peculiarly modern impatience with art's limitations and a misplaced notion that, like science, art needs constantly to advance or else become irrelevant. But art is not science. Its limitations are its virtues. In the meantime it gives us the works here, the best of which are dizzily transporting. From anonymous_animus at yahoo.com Fri Jul 1 18:29:03 2005 From: anonymous_animus at yahoo.com (Michael Christopher) Date: Fri, 1 Jul 2005 11:29:03 -0700 (PDT) Subject: [Paleopsych] shame In-Reply-To: <200507011800.j61I0NR25213@tick.javien.com> Message-ID: <20050701182903.4724.qmail@web30813.mail.mud.yahoo.com> >>When there is "public silence around shame, it doesn't get discussed, it just gets more deeply embedded."<< --I agree. It also gets distorted. If a person can't say "I'm ashamed", it may be easier to translate that into "shame on you!" and pass the emotion to another. I've seen many group squabbles where it was obvious to me (being trusted to some extent with the private thoughts of each member of the group rather than just the official story) that each person was to some extent blaming others out of his own shame. Someone who often feels stupid calling another person "stupid" and so on. Someone with less personal experience with each side may simply think "X values intelligence, and calls Y stupid because Y isn't adhering to that value". That Y may really have done something stupid clouds things further. X may never even know he's transferring his shame to another. >>It would be silly to say that shame doesn't hurt and isn't sometimes very painful, but it does make you think about what you hold dear, whether that be at an individual or collective level, or as a nation. It is one of the emotions that most clearly throw into relief the values we have.<< --That's true as long as introspection is allowed and encouraged. Often, national/racial/religious groups bypass introspection and go directly to blame or attack. Shame becomes a trigger for blaming others. >>Yes. Part of my interest in shame came from thinking about the limits of pride, especially when it's used in queer pride, or fat pride, or whatever. There is a real limit to those politics.<< --Agreed. I would include "American pride" which is a kind of national self-esteem movement. 
Why work on my flaws or build my strengths if I can be proud of where I live, no matter what?

>>Shaming is very limited in its value. It requires that someone stand on high and point the finger. Some strands of feminism have used shaming, but it's the experiencing of shame rather than the wielding of shame that can be good.<<

--That's a good point. Especially if the blamer is to some extent passing on his/her own shame. The "hot potato" game. The question is, how do you encourage people to feel their own shame rather than passing it on in order to feel more innocent? In extreme cases of conflict, each side may be so terrified of feeling shame that it must demonize and attack the other constantly in order to hold the feeling at bay. This can cause whole generations to inherit shame and the mechanisms of denial along with it.

Michael

From checker at panix.com Sat Jul 2 15:24:21 2005 From: checker at panix.com (Premise Checker) Date: Sat, 2 Jul 2005 11:24:21 -0400 (EDT) Subject: [Paleopsych] Economist: Fusion power: Nuclear ambitions Message-ID:

Fusion power: Nuclear ambitions
http://www.economist.com/science/PrinterFriendly.cfm?Story_ID=4127211

[Thanks to Sarah for this.]

Jun 30th 2005

A step towards commercial fusion power. Perhaps

[4]Get article background

THIS week, an international project to build a nuclear-fusion reactor came a step closer to reality when politicians agreed it should be constructed in France rather than in Japan, the other country lobbying to host it. The estimated cost is $12 billion, making it one of the most expensive scientific projects around--comparable financially with the International Space Station. It is scheduled to run for 30 years, which is handy since, for the past half century, fusion advocates have claimed that achieving commercial nuclear fusion is 30 years away.

The International Thermonuclear Experimental Reactor (ITER), as the project is known, is intended to be the final proving step before a commercial fusion reactor is built. It would demonstrate that power can be generated using the energy released when two light atomic nuclei are brought together to make a heavier one--a process similar to the one that powers the sun and other stars.

Advocates of fusion point to its alleged advantages over other forms of power generation. It is efficient, so only small quantities of fuel are needed. Unlike existing nuclear reactors, which produce nasty long-lived radioactive waste, the radioactive processes involved with fusion are relatively short-lived and the waste products benign. Unlike fossil-fuel plants, there are no carbon-dioxide emissions. And the principal fuel, a heavy isotope of hydrogen called deuterium, is present in ordinary water, of which there is no shortage.

The challenges of achieving fusion should not be underestimated. A large volume of gas must be heated to a temperature above that found at the centre of the sun. At the same time, that gas must be prevented from touching the walls of the reactor by confining it in a powerful magnetic field known as a magnetic bottle. The energy released in fusion is carried mostly by neutrons, a type of subatomic particle that has no electric charge and hence cannot be confined by the magnetic bottle. Ensuring that the reactor wall can cope with being bombarded by these neutrons presents a further challenge.

The costs involved are immense.
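[A note on the underlying reaction, which the article does not spell out: ITER's planned fuel cycle is deuterium-tritium rather than deuterium alone, and the reaction usually quoted is

    D + T -> He-4 (3.5 MeV) + n (14.1 MeV)

Roughly 17.6 MeV is released per fusion event, several million times the energy per atom of a chemical reaction such as burning a fossil fuel, which is the sense in which only small quantities of fuel are needed. Most of that energy is carried by the uncharged neutron, which is why it escapes the magnetic bottle and must be absorbed by the reactor wall.]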
The budget for ITER involves spending $5 billion on construction, $5 billion on operating costs over 20 years and more than $1 billion on decommissioning. Yet the reason why taxpayers should spend such sums is unclear. The world is not short of energy. Climate change can be addressed without recourse to generating power from fusion since there are already many alternatives to fossil-fuel power plants. And $12 billion could buy an awful lot of research into those alternatives.

Part of the reason why commercial fusion reactors have always been 30 years away is that increasing the size of the reactors to something big enough to be a power plant proved harder than foreseen. But fusion aficionados also blame a lack of urgency for the slow progress, claiming that at least 15 years have been lost because of delays in decision-making and what they regard as inadequate funding.

There is some truth in this argument. ITER is a joint project between America, most of the European Union, Japan, China, Russia and South Korea. For the past 18 months, work was at a standstill while the member states wrangled over where to site the reactor in what was generally recognised as a proxy for the debate over the war in Iraq. America was thought to support the placing of ITER in Japan in return for Japan's support in that war. Meanwhile, the Russians and Chinese were supporting France which, like them, opposed the American-led invasion. That France was eventually chosen owes much to the fact that the European Union promised to support a suitable Japanese candidate as the next director general of ITER.

Like the International Space Station, ITER had its origins in the superpower politics of the 1980s that brought the cold war to its end as Russia and the West groped around for things they could collaborate on. Like the International Space Station, therefore, ITER is at bottom a political animal. And, like the International Space Station, the scientific reasons for developing it are almost non-existent. They cannot justify the price.

References
4. http://www.economist.com/background/displayBackground.cfm?story_id=4127211

E-mail me if you have problems getting the referenced article.

From checker at panix.com Sat Jul 2 15:24:27 2005 From: checker at panix.com (Premise Checker) Date: Sat, 2 Jul 2005 11:24:27 -0400 (EDT) Subject: [Paleopsych] eSkeptic: The Life and Science of Fred Hoyle Message-ID:

eSkeptic: The Life and Science of Fred Hoyle

From: Michael Shermer Date: Fri, 01 Jul 2005 00:00:00 -0700 To: eugen at leitl.org Subject: eSkeptic: The Life and Science of Fred Hoyle Reply-To: E-Skeptic

Friday, July 1st, 2005

---------------------------
To view this newsletter with graphics and formatting, visit:
---------------------------

This week's eSkeptic features an announcement for Michael Shermer's upcoming weekend workshop at the Esalen Institute, Science, Spirituality & the Search for Meaning, followed by James N. Gardner's review of Conflict in the Cosmos: Fred Hoyle's Life in Science, a biography by Simon Mitton, published by Joseph Henry Press, ISBN 0309093139.

---------------------------

SCIENCE, SPIRITUALITY & THE SEARCH FOR MEANING

a weekend seminar led by Michael Shermer
August 12th, 8:30pm to August 14th, 11:30am
at the Esalen Institute, Big Sur, CA

The intellectual and spiritual quest to understand the universe and our place in it is at the core of both science and religion.
At the beginning of the 20th century social scientists predicted that belief in God would decrease by the end of the century because of the secularization of society. In fact, the opposite happened. Never in history have so many, and such a high percentage of the population, believed in God and expressed spirituality. To find out why, science historian and social scientist Dr. Michael Shermer has undertaken a monumental study of science, spirituality, and the search for meaning through his numerous writings, presented here for the first time in workshop format.

Since humans are storytelling animals, a deeper aspect of this issue involves the origins and purposes of myth and religion in human history and culture. Why is there an eternal return of certain mythic themes in religion, such as messiah myths, flood myths, creation myths, destruction myths, redemption myths, and end of the world myths? What do these recurring themes tell us about the workings of the human mind and culture? What can we learn from these myths beyond the moral homilies offered in their narratives? What can we glean about ourselves as we gaze into these mythic mirrors of our souls?

Humans are not only storytelling animals, we are also pattern-seeking animals, and there is a tendency to find pattern even when none exists. To most of us the pattern of the universe indicates design. For countless millennia we have taken these patterns and constructed stories about how our cosmos was designed specifically for us. For the past few centuries, however, science has presented us with a viable alternative in which we are but one among tens of millions of species, housed on but one planet among many in an ordinary solar system, itself one among possibly billions of solar systems in an ordinary galaxy, located in a cluster of galaxies not so different from billions of other galaxy clusters, themselves whirling away from one another in an expanding cosmic bubble that very possibly is only one among a near infinite number of bubble universes. Is it really possible that this entire cosmological multiverse exists for one tiny subgroup of a single species on one planet in a lone galaxy in that solitary bubble universe?

In this workshop, we will explore the deepest question of all: what if the universe and the world were not created for us by an intelligent designer, and are instead just things that happened? Can we discover meaning in this apparently meaningless universe? Can we still find the sacred in this age of science? The answer is yes!

ABOUT THE ESALEN INSTITUTE

Esalen is, geographically speaking, a literal cliff, hanging precariously over the Pacific Ocean. The Esselen Indians used the hot mineral springs here as healing baths for centuries before European settlers arrived. Today the place is adorned with a host of lush organic gardens, mountain streams, a cliff-side swimming pool, hot springs embedded in a multimillion-dollar stone, cement, and steel spa, and meditation huts tucked away in the trees. Esalen was founded in 1962 by Stanford graduates Michael Murphy and Richard Price and has featured such notable visitors as Richard Feynman, Abraham Maslow, Timothy Leary, Paul Tillich, Carlos Castaneda, and B. F. Skinner. Regardless of your source of spirituality (science, religion, or self), Esalen embodies the integration of body, mind, and spirit.

ABOUT THE SEMINAR LEADER

Dr. Michael Shermer is the Founding Publisher of Skeptic magazine, the Director of the Skeptics Society, a monthly columnist for Scientific American, the host of the Skeptics Distinguished Science Lecture Series at the California Institute of Technology (Caltech), and the co-host and producer of the 13-hour Fox Family television series, Exploring the Unknown. He is the author of Science Friction: Where the Known Meets the Unknown, about how the mind works and how thinking goes wrong. His book The Science of Good and Evil: Why People Cheat, Gossip, Care, Share, and Follow the Golden Rule, is on the evolutionary origins of morality and how to be good without God. He wrote a biography, In Darwin's Shadow, about the life and science of the co-discoverer of natural selection, Alfred Russel Wallace. He also wrote The Borderlands of Science, about the fuzzy land between science and pseudoscience, and Denying History, on Holocaust denial and other forms of pseudohistory. His book How We Believe: Science, Skepticism, and the Search for God, presents his theory on the origins of religion and why people believe in God. He is also the author of Why People Believe Weird Things, on pseudoscience, superstitions, and other confusions of our time. According to the late Stephen Jay Gould (from his Foreword to Why People Believe Weird Things): "Michael Shermer, as head of one of America's leading skeptic organizations, and as a powerful activist and essayist in the service of this operational form of reason, is an important figure in American public life."

Dr. Shermer received his B.A. in psychology from Pepperdine University, M.A. in experimental psychology from California State University, Fullerton, and his Ph.D. in the history of science from Claremont Graduate University. Since his creation of the Skeptics Society, Skeptic magazine, and the Skeptics Distinguished Science Lecture Series at Caltech, he has appeared on such shows as 20/20, Dateline, Charlie Rose, Larry King Live, Tom Snyder, Donahue, Oprah, Leeza, Unsolved Mysteries, and other shows as a skeptic of weird and extraordinary claims, as well as in interviews in countless documentaries aired on PBS, A&E, Discovery, The History Channel, The Science Channel, and The Learning Channel.

REGISTRATION

$595, which includes the workshop, housing, and meals at one of the most beautiful locations in the world. Register for the seminar through the Esalen Institute (not through the Skeptics Society).
(831) 667-3038 (programs)
(831) 667-3005 (reservations)

---------------------------

THE BIG BANG'S STEADY STATE
The Life & Science of Fred Hoyle
a book review by James N. Gardner

The vast scale of the cosmos confounds our imagination. What human mind -- calibrated by natural selection to appreciate intuitively the dimensions of African savannahs, primeval arboreal hideaways, and Ice Age mammoth hunting grounds -- can truly grasp its fathomless enormity? Billions of galaxies, each containing hundreds of billions of stars, those stars probably orbited by trillions of planets, and the entire fabric of spacetime expanding outward like the surface of an inflating balloon -- this is the surpassingly strange picture of our universe that constitutes the consensus paradigm of modern cosmology. That vision is centered on the premise of a Big Bang -- a primordial explosion that launched the whole shebang hurtling outward at breakneck speed -- which seems, from a commonsense perspective, perfectly outrageous.
What caused this peculiar genesis event? Could the cosmos really have been born ex nihilo, for no apparent reason and from the loins of nothing at all? These were the puzzles that led a giant of British astronomy -- Fred Hoyle -- to suggest a dramatic alternative: the steady-state theory, which hypothesized that the universe is eternal and ever-expanding and that the cosmic storehouse of matter is constantly replenished through a process of continuous creation. As Simon Mitton demonstrates in this superb new biography of Hoyle, the great scientist's genius lay in his ability to resist the temptation to surrender to mainstream orthodoxy. While Hoyle's cosmological theory may have turned out to have been spectacularly wrong (more on this below), what cannot be denied is that his stubborn unwillingness to bow to conventional wisdom was a valuable intellectual asset that benefited the entire scientific community. As Mitton puts it: Hoyle's personal contribution to the rebirth of British astronomy came from his outstanding ability to think outside the box... An enduring feature of Hoyle's character was that in every sense he never let setbacks, rejections, or political maneuvers deflect him from his research agenda. He always had a deep conviction that in his "search for the truth," which is how he expressed his life's mission, any opponent should be able to provide a counterargument from experiment or direct observation. He declined all opposition based on semantic arguments invoking the philosophy of science, or the deployment of a paradigm, or appeals to common sense. Iconoclasm and catholicity of scientific interest were the two key markers of Hoyle's long and conflict-laden life. As astrophysicist Owen Gingerich observes in a thoughtful foreword to Conflict in the Cosmos, these characteristics -- deeply rooted in Hoyle's hard-scrabble background -- were both his greatest strength and the source of his ultimate undoing: Fred Hoyle was the quintessential outsider, entering Emmanuel College Cambridge from an impoverished family background and with a distinct Yorkshire accent, and leaving Cambridge in a misguided huff 39 years later. But in between he ascended into the highest ranks of British science, almost single-handedly returning Britain to the top echelons of international theoretical astrophysics and setting it on the path toward excellence in observational astronomy. It is a stirring Dickensian story of an inquisitive, rough-hewn lad making the grade in the tightly traditional world of Cantabrigian academia, yet with the depths of a Greek tragedy where the flawed hero finally becomes an outcast. The 39-year interregnum was the central chapter in the scientist's life -- a tumultuous period characterized by heroic accomplishment, intense controversy, and an extraordinary level of celebrity, which Hoyle achieved both as a popular BBC commentator and as a highly successful science fiction writer. As Mitton points out: After 1950, Fred Hoyle was a very public figure at home and abroad... His broadcasts for the BBC in 1950 were just extraordinary and brought him immediate fame as a gifted expositor. With his gritty Yorkshire manner, his ability to be picturesque using words alone, and the universe as his topic, he transformed the BBC's approach to academic lectures, persuading them of the benefits of a less donnish style of presentation. His lectures for radio audiences set the prelude for a brilliant parallel career as a popular science and science fiction writer... 
In science fiction, his first novel, The Black Cloud, remains his best, having now acquired cult status: In 2004, an opinion poll conducted by the British newspaper The Guardian to find the most accomplished science fiction writers placed Hoyle in third position! Hoyle is remembered most vividly for the idea about which he was famously mistaken: that the universe exists in a steady state, with the stockpile of atoms in an eternally expanding cosmos continuously refilled by the constant creation of new matter. Normally, falsified scientific hypotheses like the steady-state conjecture are tossed unceremoniously in the dustbin of intellectual history, serving at best as amusing footnotes to the main body of orthodox theory (think of Darwin's misplaced reliance on Lamarckism as a subsidiary engine of evolution in The Origin of Species). But, once again, Hoyle confounds tradition. Because he was both passionate and brutally honest about the implications of his steady-state hypothesis, Hoyle was able to foment a heated intellectual debate that significantly advanced our understanding of the universe, despite the fact that his particular conjecture turned out to be deeply flawed. As Mitton notes, "What is extraordinary about Fred Hoyle's science is that his impact derives equally from when he was right and when he was wrong!" If Hoyle was wrong about the nature of the process of cosmogenesis, he was spectacularly right about an equally profound mystery: the origin of the chemical elements. In what is surely his most important contribution to astrophysics, Hoyle and three collaborators were able to demonstrate rigorously in their famous B2FH scientific paper that all of the elements of the periodic table except the lightest are forged in the hearts of giant supernovae, under a variety of physical conditions, through a process known as nucleosynthesis. It is this process of stellar alchemy, Hoyle and his colleagues showed, that accounts for the richness and complexity of the chemical palette of the universe, which in turn accounts for the possibility of life. In the midst of this monumental accomplishment, Hoyle stumbled across a deep mystery that eventually lured him away from the shoreline of genuine science out onto the trackless sea of metaphysical speculation: the apparent fine-tuning of nature evidenced by the details of the process through which the element carbon is synthesized. This discovery provoked Hoyle's most controversial conjecture: the notion that the universe appeared to be deliberately fine-tuned to favor the emergence of carbon-based life. As Hoyle wrote late in his life: The issue of whether the universe is purposive is an ultimate question that is at the back of everybody's mind... And Dr. [Ruth Nanda] Ashen has now raised exactly the same question as to whether the universe is a product of thought. And I have to say that that is also my personal opinion, but I can't back it up by too much of precise argument. There are many aspects of the universe where you either have to say there have been monstrous coincidences, which there might have been, or, alternatively, there is a purposive scenario to which the universe conforms. The debate over this portentous issue rages on to this day, fueled by the recent discovery of the monstrously large landscape of alternate versions of low-energy physics mathematically allowed by M-theory, only a tiny fraction of which would permit the emergence of anything resembling our own universe and of carbon-based life. 
Indeed, that discovery has led many cutting-edge cosmologists to overlay a refinement of Big Bang inflation theory called eternal chaotic inflation with an explanatory approach, traditionally reviled by most scientists, known as the weak anthropic principle. (The weak anthropic principle merely states in tautological fashion that since human observers inhabit this particular universe, it must perforce be life-friendly or it would not contain any observers resembling ourselves.) Eternal chaotic inflation, invented by Russian-born physicist Andrei Linde, asserts that instead of just one Big Bang there are, always have been, and always will be, an infinite multiplicity of Big Bangs going off in inaccessible regions all the time. These Big Bangs create a vast horde of new universes constantly, and the whole ensemble constitutes a multiverse. One gets the uneasy feeling that if this current theorizing turns out to be correct, Fred Hoyle may have been on the right track all along! Perhaps the multiverse is eternal. Perhaps there is a process of continuous creation (a.k.a. eternal chaotic inflation) as opposed to a one-off genesis event (i.e., a single Big Bang). Maybe the only thing Fred Hoyle truly failed to grasp was the sheer, unexpected grandeur of steady-state cosmogenesis. Hoyle believed that the continuous-creation process yielded "no more than one atom in the course of a year in a volume equal to St. Paul's Cathedral." This is an image of a natural process comfortably within the confines of our biologically evolved human imagination. But if Linde and his colleagues are correct, the process of continuous creation operates at a scale utterly beyond our capacity to physically envision it -- not mere atoms but entire new baby universes are continuously created in an eternal process with striking parallels to Hoyle's discarded steady-state cosmological theory.

---------------------------

The eSkeptic newsletter is published (almost) weekly by the Skeptics Society, ISSN 1556-5696. Subscribe to eSkeptic by sending an email to . Unsubscribe by sending an email to . Contact us at .

From checker at panix.com Sat Jul 2 15:24:34 2005
From: checker at panix.com (Premise Checker)
Date: Sat, 2 Jul 2005 11:24:34 -0400 (EDT)
Subject: [Paleopsych] Scientific American: Inconstant Constants
Message-ID: 

Inconstant Constants
http://www.sciam.com/print_version.cfm?articleID=0005BFE6-2965-128A-A96583414B7F0000
5.5.23
Do the inner workings of nature change with time?
By John D. Barrow and John K. Webb

Some things never change. Physicists call them the constants of nature. Such quantities as the velocity of light, c, Newton's constant of gravitation, G, and the mass of the electron, m_e, are assumed to be the same at all places and times in the universe. They form the scaffolding around which the theories of physics are erected, and they define the fabric of our universe. Physics has progressed by making ever more accurate measurements of their values. And yet, remarkably, no one has ever successfully predicted or explained any of the constants. Physicists have no idea why they take the special numerical values that they do. In SI units, c is 299,792,458; G is 6.673 x 10^-11; and m_e is 9.10938188 x 10^-31--numbers that follow no discernible pattern. The only thread running through the values is that if many of them were even slightly different, complex atomic structures such as living beings would not be possible.
The desire to explain the constants has been one of the driving forces behind efforts to develop a complete unified description of nature, or "theory of everything." Physicists have hoped that such a theory would show that each of the constants of nature could have only one logically possible value. It would reveal an underlying order to the seeming arbitrariness of nature. In recent years, however, the status of the constants has grown more muddled, not less.

Researchers have found that the best candidate for a theory of everything, the variant of string theory called M-theory, is self-consistent only if the universe has more than four dimensions of space and time--as many as seven more. One implication is that the constants we observe may not, in fact, be the truly fundamental ones. Those live in the full higher-dimensional space, and we see only their three-dimensional "shadows."

Meanwhile physicists have also come to appreciate that the values of many of the constants may be the result of mere happenstance, acquired during random events and elementary particle processes early in the history of the universe. In fact, string theory allows for a vast number--10^500--of possible "worlds" with different self-consistent sets of laws and constants [see "The String Theory Landscape," by Raphael Bousso and Joseph Polchinski; Scientific American, September 2004]. So far researchers have no idea why our combination was selected. Continued study may reduce the number of logically possible worlds to one, but we have to remain open to the unnerving possibility that our known universe is but one of many--a part of a multiverse--and that different parts of the multiverse exhibit different solutions to the theory, our observed laws of nature being merely one edition of many systems of local bylaws [see "Parallel Universes," by Max Tegmark; Scientific American, May 2003]. No further explanation would then be possible for many of our numerical constants other than that they constitute a rare combination that permits consciousness to evolve. Our observable universe could be one of many isolated oases surrounded by an infinity of lifeless space--a surreal place where different forces of nature hold sway and particles such as electrons or structures such as carbon atoms and DNA molecules could be impossibilities. If you tried to venture into that outside world, you would cease to be.

Thus, string theory gives with the right hand and takes with the left. It was devised in part to explain the seemingly arbitrary values of the physical constants, and the basic equations of the theory contain few arbitrary parameters. Yet so far string theory offers no explanation for the observed values of the constants.

A Ruler You Can Trust

Indeed, the word "constant" may be a misnomer. Our constants could vary both in time and in space. If the extra dimensions of space were to change in size, the "constants" in our three-dimensional world would change with them. And if we looked far enough out in space, we might begin to see regions where the "constants" have settled into different values. Ever since the 1930s, researchers have speculated that the constants may not be constant. String theory gives this idea a theoretical plausibility and makes it all the more important for observers to search for deviations from constancy.

Such experiments are challenging. The first problem is that the laboratory apparatus itself may be sensitive to changes in the constants.
The size of all atoms could be increasing, but if the ruler you are using to measure them is getting longer, too, you would never be able to tell. Experimenters routinely assume that their reference standards--rulers, masses, clocks--are fixed, but they cannot do so when testing the constants. They must focus their attention on constants that have no units--they are pure numbers--so that their values are the same irrespective of the units system. An example is the ratio of two masses, such as the proton mass to the electron mass.

One ratio of particular interest combines the velocity of light, c, the electric charge on a single electron, e, Planck's constant, h, and the so-called vacuum permittivity, ε_0. This famous quantity, α = e^2/(2ε_0hc), called the fine-structure constant, was first introduced in 1916 by Arnold Sommerfeld, a pioneer in applying the theory of quantum mechanics to electromagnetism. It quantifies the relativistic (c) and quantum (h) qualities of electromagnetic (e) interactions involving charged particles in empty space (ε_0). Measured to be equal to 1/137.03599976, or approximately 1/137, α has endowed the number 137 with a legendary status among physicists (it usually opens the combination locks on their briefcases).

If α had a different value, all sorts of vital features of the world around us would change. If the value were lower, the density of solid atomic matter would fall (in proportion to α^3), molecular bonds would break at lower temperatures (α^2), and the number of stable elements in the periodic table could increase (1/α). If α were too big, small atomic nuclei could not exist, because the electrical repulsion of their protons would overwhelm the strong nuclear force binding them together. A value as big as 0.1 would blow apart carbon. The nuclear reactions in stars are especially sensitive to α. For fusion to occur, a star's gravity must produce temperatures high enough to force nuclei together despite their tendency to repel one another. If α exceeded 0.1, fusion would be impossible (unless other parameters, such as the electron-to-proton mass ratio, were adjusted to compensate). A shift of just 4 percent in α would alter the energy levels in the nucleus of carbon to such an extent that the production of this element by stars would shut down.

Nuclear Proliferation

The second experimental problem, less easily solved, is that measuring changes in the constants requires high-precision equipment that remains stable long enough to register any changes. Even atomic clocks can detect drifts in the fine-structure constant only over days or, at most, years. If α changed by more than four parts in 10^15 over a three-year period, the best clocks would see it. None have. That may sound like an impressive confirmation of constancy, but three years is a cosmic eyeblink. Slow but substantial changes during the long history of the universe would have gone unnoticed. Fortunately, physicists have found other tests.

During the 1970s, scientists from the French atomic energy commission noticed something peculiar about the isotopic composition of ore from a uranium mine at Oklo in Gabon, West Africa: it looked like the waste products of a nuclear reactor. About two billion years ago, Oklo must have been the site of a natural reactor [see "A Natural Fission Reactor," by George A. Cowan; Scientific American, July 1976].
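[A quick check of the figures quoted above: the short Python sketch below is my own illustration, not part of the article. It recomputes the fine-structure constant from α = e^2/(2ε_0hc) using standard CODATA-style reference values for e, h, c and ε_0 that I am supplying, and also prints the proton-to-electron mass ratio as an example of the kind of pure-number constant experimenters must rely on.]

# Recompute alpha = e^2 / (2 * eps0 * h * c) from SI reference values
# (CODATA-style numbers supplied here; only the 1/137.036 figure appears in the article itself).
e    = 1.602176634e-19      # elementary charge, in coulombs
h    = 6.62607015e-34       # Planck's constant, in joule-seconds
c    = 299792458.0          # speed of light, in meters per second
eps0 = 8.8541878128e-12     # vacuum permittivity, in farads per meter

alpha = e**2 / (2 * eps0 * h * c)
print("alpha   =", alpha)        # about 0.0072973526
print("1/alpha =", 1 / alpha)    # about 137.036, matching the value quoted above

# An example of a dimensionless (unit-free) constant: the proton-to-electron mass ratio
m_p = 1.67262192369e-27     # proton mass, kg (reference value)
m_e = 9.1093837015e-31      # electron mass, kg (reference value)
print("m_p/m_e =", m_p / m_e)    # about 1836.15, the same in any system of units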
In 1976 Alexander Shlyakhter of the Nuclear Physics Institute in St. Petersburg, Russia, noticed that the ability of a natural reactor to function depends crucially on the precise energy of a particular state of the samarium nucleus that facilitates the capture of neutrons. And that energy depends sensitively on the value of α. So if the fine-structure constant had been slightly different, no chain reaction could have occurred. But one did occur, which implies that the constant has not changed by more than one part in 10^8 over the past two billion years. (Physicists continue to debate the exact quantitative results because of the inevitable uncertainties about the conditions inside the natural reactor.)

In 1962 P. James E. Peebles and Robert Dicke of Princeton University first applied similar principles to meteorites: the abundance ratios arising from the radioactive decay of different isotopes in these ancient rocks depend on α. The most sensitive constraint involves the beta decay of rhenium into osmium. According to recent work by Keith Olive of the University of Minnesota, Maxim Pospelov of the University of Victoria in British Columbia and their colleagues, at the time the rocks formed, α was within two parts in 10^6 of its current value. This result is less precise than the Oklo data but goes back further in time, to the origin of the solar system 4.6 billion years ago. To probe possible changes over even longer time spans, researchers must look to the heavens. Light takes billions of years to reach our telescopes from distant astronomical sources. It carries a snapshot of the laws and constants of physics at the time when it started its journey or encountered material en route.

Line Editing

Astronomy first entered the constants story soon after the discovery of quasars in 1965. The idea was simple. Quasars had just been discovered and identified as bright sources of light located at huge distances from Earth. Because the path of light from a quasar to us is so long, it inevitably intersects the gaseous outskirts of young galaxies. That gas absorbs the quasar light at particular frequencies, imprinting a bar code of narrow lines onto the quasar spectrum.

Whenever gas absorbs light, electrons within the atoms jump from a low energy state to a higher one. These energy levels are determined by how tightly the atomic nucleus holds the electrons, which depends on the strength of the electromagnetic force between them--and therefore on the fine-structure constant. If the constant was different at the time when the light was absorbed or in the particular region of the universe where it happened, then the energy required to lift the electron would differ from that required today in laboratory experiments, and the wavelengths of the transitions seen in the spectra would differ. The way in which the wavelengths change depends critically on the orbital configuration of the electrons. For a given change in α, some wavelengths shrink, whereas others increase. The complex pattern of effects is hard to mimic by data calibration errors, which makes the test astonishingly powerful.

Before we began our work seven years ago, attempts to perform the measurement had suffered from two limitations. First, laboratory researchers had not measured the wavelengths of many of the relevant spectral lines with sufficient precision. Ironically, scientists used to know more about the spectra of quasars billions of light-years away than about the spectra of samples here on Earth.
We needed high-precision laboratory measurements against which to compare the quasar spectra, so we persuaded experimenters to undertake them. Initial measurements were done by Anne Thorne and Juliet Pickering of Imperial College London, followed by groups led by Sveneric Johansson of Lund Observatory in Sweden and Ulf Griesmann and Rainer Kling of the National Institute of Standards and Technology in Maryland.

The second problem was that previous observers had used so-called alkali-doublet absorption lines--pairs of absorption lines arising from the same gas, such as carbon or silicon. They compared the spacing between these lines in quasar spectra with laboratory measurements. This method, however, failed to take advantage of one particular phenomenon: a change in α shifts not just the spacing of atomic energy levels relative to the lowest-energy level, or ground state, but also the position of the ground state itself. In fact, this second effect is even stronger than the first. Consequently, the highest precision observers achieved was only about one part in 10^4.

In 1999 one of us (Webb) and Victor V. Flambaum of the University of New South Wales in Australia came up with a method to take both effects into account. The result was a breakthrough: it meant 10 times higher sensitivity. Moreover, the method allows different species (for instance, magnesium and iron) to be compared, which allows additional cross-checks. Putting this idea into practice took complicated numerical calculations to establish exactly how the observed wavelengths depend on α in all different atom types. Combined with modern telescopes and detectors, the new approach, known as the many-multiplet method, has enabled us to test the constancy of α with unprecedented precision.

Changing Minds

When embarking on this project, we anticipated establishing that the value of the fine-structure constant long ago was the same as it is today; our contribution would simply be higher precision. To our surprise, the first results, in 1999, showed small but statistically significant differences. Further data confirmed this finding. Based on a total of 128 quasar absorption lines, we found an average increase in α of close to six parts in a million over the past six billion to 12 billion years.

Extraordinary claims require extraordinary evidence, so our immediate thoughts turned to potential problems with the data or the analysis methods. These uncertainties can be classified into two types: systematic and random. Random uncertainties are easier to understand; they are just that--random. They differ for each individual measurement but average out to be close to zero over a large sample. Systematic uncertainties, which do not average out, are harder to deal with. They are endemic in astronomy. Laboratory experimenters can alter their instrumental setup to minimize them, but astronomers cannot change the universe, and so they are forced to accept that all their methods of gathering data have an irremovable bias. For example, any survey of galaxies will tend to be overrepresented by bright galaxies because they are easier to see. Identifying and neutralizing these biases is a constant challenge.

The first one we looked for was a distortion of the wavelength scale against which the quasar spectral lines were measured. Such a distortion might conceivably be introduced, for example, during the processing of the quasar data from their raw form at the telescope into a calibrated spectrum.
Although a simple linear stretching or compression of the wavelength scale could not precisely mimic a change in α, even an imprecise mimicry might be enough to explain our results. To test for problems of this kind, we substituted calibration data for the quasar data and analyzed them, pretending they were quasar data. This experiment ruled out simple distortion errors with high confidence.

For more than two years, we put up one potential bias after another, only to rule it out after detailed investigation as too small an effect. So far we have identified just one potentially serious source of bias. It concerns the absorption lines produced by the element magnesium. Each of the three stable isotopes of magnesium absorbs light of a different wavelength, but the three wavelengths are very close to one another, and quasar spectroscopy generally sees the three lines blended as one. Based on laboratory measurements of the relative abundances of the three isotopes, researchers infer the contribution of each. If these abundances in the young universe differed substantially--as might have happened if the stars that spilled magnesium into their galaxies were, on average, heavier than their counterparts today--those differences could simulate a change in α. But a study published this year indicates that the results cannot be so easily explained away. Yeshe Fenner and Brad K. Gibson of Swinburne University of Technology in Australia and Michael T. Murphy of the University of Cambridge found that matching the isotopic abundances to emulate a variation in α also results in the overproduction of nitrogen in the early universe--in direct conflict with observations. If so, we must confront the likelihood that α really has been changing.

The scientific community quickly realized the immense potential significance of our results. Quasar spectroscopists around the world were hot on the trail and rapidly produced their own measurements. In 2003 teams led by Sergei Levshakov of the Ioffe Physico-Technical Institute in St. Petersburg, Russia, and Ralf Quast of the University of Hamburg in Germany investigated three new quasar systems. Last year Hum Chand and Raghunathan Srianand of the Inter-University Center for Astronomy and Astrophysics in India, Patrick Petitjean of the Institute of Astrophysics and Bastien Aracil of LERMA in Paris analyzed 23 more. None of these groups saw a change in α. Chand argued that any change must be less than one part in 10^6 over the past six billion to 10 billion years.

How could a fairly similar analysis, just using different data, produce such a radical discrepancy? As yet the answer is unknown. The data from these groups are of excellent quality, but their samples are substantially smaller than ours and do not go as far back in time. The Chand analysis did not fully assess all the experimental and systematic errors--and, being based on a simplified version of the many-multiplet method, might have introduced new ones of its own. One prominent astrophysicist, John Bahcall of Princeton, has criticized the many-multiplet method itself, but the problems he has identified fall into the category of random uncertainties, which should wash out in a large sample. He and his colleagues, as well as a team led by Jeffrey Newman of Lawrence Berkeley National Laboratory, have looked at emission lines rather than absorption lines. So far this approach is much less precise, but in the future it may yield useful constraints.
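[For readers who want a feel for how a many-multiplet fit extracts a change in α from line shifts, here is a deliberately toy Python sketch, my own illustration rather than anything from the article. It assumes the standard parameterization in which each transition's rest-frame wavenumber shifts as omega = omega_lab + q*x, with x = (α_then/α_now)^2 - 1 and q a transition-specific sensitivity coefficient; the omega_lab and q numbers are invented placeholders, not the laboratory values the authors used, and the single least-squares step stands in for their far more elaborate analysis.]

import numpy as np

# Toy many-multiplet illustration (placeholder numbers, not real data).
# Model: omega_obs = omega_lab + q * x, where x = (alpha_then/alpha_now)**2 - 1.
# Because different transitions have different q (some positive, some negative),
# a genuine change in alpha produces a pattern of shifts that a simple
# wavelength-calibration error cannot easily mimic.
omega_lab = np.array([35669.30, 62171.63, 38458.99, 44232.51])   # cm^-1, hypothetical
q         = np.array([  211.0,    120.0,   1461.0,  -1360.0])    # cm^-1, hypothetical

rng = np.random.default_rng(0)
x_true = (1 - 5e-6) ** 2 - 1            # pretend alpha was 5 parts per million smaller back then
omega_obs = omega_lab + q * x_true + rng.normal(0.0, 0.001, q.size)   # add measurement noise

shifts = omega_obs - omega_lab
x_fit = np.sum(q * shifts) / np.sum(q * q)     # one-parameter least-squares estimate of x
print("recovered delta-alpha/alpha ~", x_fit / 2)   # expect roughly -5e-6

[With only four made-up lines the recovery is crude, but it shows why combining transitions whose q coefficients differ in sign and size gives the method the cross-checking power described above.]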
Reforming the Laws

If our findings prove to be right, the consequences are enormous, though only partially explored. Until quite recently, all attempts to evaluate what happens to the universe if the fine-structure constant changes were unsatisfactory. They amounted to nothing more than assuming that α became a variable in the same formulas that had been derived assuming it is a constant. This is a dubious practice. If α varies, then its effects must conserve energy and momentum, and they must influence the gravitational field in the universe. In 1982 Jacob D. Bekenstein of the Hebrew University of Jerusalem was the first to generalize the laws of electromagnetism to handle inconstant constants rigorously. The theory elevates α from a mere number to a so-called scalar field, a dynamic ingredient of nature. His theory did not include gravity, however. Four years ago one of us (Barrow), with Håvard Sandvik and João Magueijo of Imperial College London, extended it to do so.

This theory makes appealingly simple predictions. Variations in α of a few parts per million should have a completely negligible effect on the expansion of the universe. That is because electromagnetism is much weaker than gravity on cosmic scales. But although changes in the fine-structure constant do not affect the expansion of the universe significantly, the expansion affects α. Changes to α are driven by imbalances between the electric field energy and magnetic field energy. During the first tens of thousands of years of cosmic history, radiation dominated over charged particles and kept the electric and magnetic fields in balance. As the universe expanded, radiation thinned out, and matter became the dominant constituent of the cosmos. The electric and magnetic energies became unequal, and α started to increase very slowly, growing as the logarithm of time. About six billion years ago dark energy took over and accelerated the expansion, making it difficult for all physical influences to propagate through space. So α became nearly constant again.

This predicted pattern is consistent with our observations. The quasar spectral lines represent the matter-dominated period of cosmic history, when α was increasing. The laboratory and Oklo results fall in the dark-energy-dominated period, during which α has been constant. The continued study of the effect of changing α on radioactive elements in meteorites is particularly interesting, because it probes the transition between these two periods.

Alpha Is Just the Beginning

Any theory worthy of consideration does not merely reproduce observations; it must make novel predictions. The above theory suggests that varying the fine-structure constant makes objects fall differently. Galileo predicted that bodies in a vacuum fall at the same rate no matter what they are made of--an idea known as the weak equivalence principle, famously demonstrated when Apollo 15 astronaut David Scott dropped a feather and a hammer and saw them hit the lunar dirt at the same time. But if α varies, that principle no longer holds exactly. The variations generate a force on all charged particles. The more protons an atom has in its nucleus, the more strongly it will feel this force.
If our quasar observations are correct, then the accelerations of different materials differ by about one part in 10^14--too small to see in the laboratory by a factor of about 100 but large enough to show up in planned missions such as STEP (space-based test of the equivalence principle).

There is a last twist to the story. Previous studies of α neglected to include one vital consideration: the lumpiness of the universe. Like all galaxies, our Milky Way is about a million times denser than the cosmic average, so it is not expanding along with the universe. In 2003 Barrow and David F. Mota of Cambridge calculated that α may behave differently within the galaxy than inside emptier regions of space. Once a young galaxy condenses and relaxes into gravitational equilibrium, α nearly stops changing inside it but keeps on changing outside. Thus, the terrestrial experiments that probe the constancy of α suffer from a selection bias. We need to study this effect more to see how it would affect the tests of the weak equivalence principle. No spatial variations of α have yet been seen. Based on the uniformity of the cosmic microwave background radiation, Barrow recently showed that α does not vary by more than one part in 10^8 between regions separated by 10 degrees on the sky.

So where does this flurry of activity leave science as far as α is concerned? We await new data and new analyses to confirm or disprove that α varies at the level claimed. Researchers focus on α, over the other constants of nature, simply because its effects are more readily seen. If α is susceptible to change, however, other constants should vary as well, making the inner workings of nature more fickle than scientists ever suspected.

The constants are a tantalizing mystery. Every equation of physics is filled with them, and they seem so prosaic that people tend to forget how unaccountable their values are. Their origin is bound up with some of the grandest questions of modern science, from the unification of physics to the expansion of the universe. They may be the superficial shadow of a structure larger and more complex than the three-dimensional universe we witness around us. Determining whether constants are truly constant is only the first step on a path that leads to a deeper and wider appreciation of that ultimate vista.

From checker at panix.com Sat Jul 2 15:24:45 2005
From: checker at panix.com (Premise Checker)
Date: Sat, 2 Jul 2005 11:24:45 -0400 (EDT)
Subject: [Paleopsych] Sunday Times: Married to a Genius by Jeffrey Meyers
Message-ID: 

Married to a Genius by Jeffrey Meyers
http://www.timesonline.co.uk/article/0,,2102-1644797,00.html
5.6.12
REVIEWED BY JOHN CAREY

MARRIED TO GENIUS
by Jeffrey Meyers
Southbank Publishing £9.99 pp256

Geniuses are traditionally difficult to live with. It is part of their mystique. Disregard for other people is vital for their art, or so they claim. According to D. H. Lawrence, you must have "something vicious in you" to be a writer. Graham Greene said you needed a splinter of ice in your heart. Jeffrey Meyers's sharp-witted book tests these beliefs by examining the marital relationships of nine writers -- Leo Tolstoy, Joseph Conrad, George Bernard Shaw, James Joyce, D. H. Lawrence, Virginia Woolf, Katherine Mansfield, Ernest Hemingway and Scott Fitzgerald. Each study is brilliant and arresting, and they reflect fascinatingly on one another.
Meyers has an intricate grasp of modern literature, and has already written full-scale biographies of five of his subjects. Above all, he reveals how subtly writers' lives infiltrate their fiction -- the hardest trick in literary biography.

Most judges would choose Tolstoy as the greatest genius of the bunch, and he was by a fair margin the most repellent human being. After a youth of drinking, whoring and gambling, he fell madly in love, at 34, with 18-year-old Sofya Behrs. She seemed "a mere child, a lovely thing", but turned out to be just as pig-headed as he was, and foolish with it. She thought his devotion to the peasantry absurd, while he concluded, from daily observation of her, that there was something wrong with her whole sex -- "Woman is generally stupid." There were furious rows, hysterical fits and suicide attempts. She almost died giving birth to her fifth child, but Tolstoy was offended when she expressed fears about further pregnancies, so breeding continued. Ashamed of his brutal appetites, he maligned the female body ("ugliness, slovenliness, odours") and advocated chastity in his misogynistic Kreutzer Sonata, just as Sofya was giving birth to their 13th.

By comparison, George Bernard Shaw's marital arrangements seem almost ideal. A passionless philanderer, frightened of women, he took as his bride the equally frigid Charlotte Payne-Townshend. She was, he said, physically and emotionally like a muffin, but her great attractions were £4,000 a year, a mighty sum at the time, and a determination never to consummate the marriage -- the last thing Shaw wanted. They managed pretty well for 45 years, and her militant chastity went into the making of St Joan, a subject she suggested and researched.

Admittedly, the Shaws' solution would not suit all married couples. Joseph Conrad's was more usual, since he married a substitute mother. Jessie, a former typist, was intellectually undeveloped but excellent at domestic chores. She treated Conrad as a son, calling him "Boy", and nursing him through his shattering depressions. When a real son arrived, Conrad naturally felt displaced, and this led to a strange incident when, on a train with Jessie and their child, he suddenly threw their bundle of baby clothes out of the window. Tight-lipped, Jessie remarked that when the clothes were found there would be a search for the baby's corpse. Meyers ingeniously deciphers this moment of murderous jealousy as the germ of Conrad's novel The Secret Agent, in which the shady and incompetent Verloc kills his "stepson" Stevie who is the sole object of his wife's love.

The happiest marriage in the book, and also the unlikeliest, is James Joyce's to Nora Barnacle. She was a raw, uneducated girl from the west of Ireland, with so little understanding of literature that she thought her husband's writing idiotic, calling him affectionately "simple-minded Jim". Seemingly there was some writerly instinct in Joyce that picked her out as his salvation. An icy, inhibited intellectual, who once described adult sex as "brief, brutal, irresistible and devilish", he needed a woman who could arouse his passion and unlock his guilty store of obscene fantasies. Nora was surprised by the literary outcome. Reading Molly Bloom's reveries at the end of Ulysses she commented, "I guess the man's a genius, but what a dirty mind he has, hasn't he?"
Joyce seems to have been an extreme case of what Freud identified as the commonest sexual malady among modern males -- the inability to feel intellectual respect and sexual passion for the same woman. With D. H. Lawrence, though, it was just the opposite. Frieda von Richthofen attracted him because, in addition to her aristocratic lineage, she was aflame with intellectual vivacity and emancipated modern theories that excited his shrinking, puritanical nature. As Meyers shows, her ideas and character appear everywhere in his work. Because they were such opposites, violent hatred complicated their love. They would go at each other hammer-and-tongs in public, pulling hair, punching, screaming abuse, to the embarrassment of their friends. Meyers thinks there was an element of slapstick and self-parody in these set-tos, and for Lawrence they were also a kind of therapy. He would have been bored with a submissive mate. "I must have opposition, something to fight on, or I shall go under," he admitted. Seemingly cruel and outrageous behaviour on Frieda's part, of which there was plenty, may have stemmed from a subliminal realisation of this need. Of course, with Frieda there is also the chance that it was just cruel and outrageous. Woolf and Mansfield were both invalids, as well as geniuses, and needed faithful nursing. Only Woolf got it. Her husband, Leonard, had been a colonial administrator in Ceylon ("ruled India, hung black men, shot tigers", as Virginia airily put it) and Meyers thinks the imperial ethic of duty and self-sacrifice helped him cope with his wife's descents into madness. Their attempts at sexual relations had been "a terrible failure", and were soon abandoned. But he supported her gallantly to the end. Not so Mansfield's husband, John Middleton Murry, who was of a lower class than Leonard Woolf, and appears weak and insecure by comparison. His wife's tuberculosis frightened him, and he stayed in London while she went south, vainly seeking a cure. Meyers judges him harshly, but Mansfield's poor-little-rich-girl bohemianism must have been hard for a well-brought-up, penny-pinching boy to handle, and Murry was probably jealous of her talent, as Leonard was not of Woolf's. The two Americans also make a contrasting pair, Hemingway brutal and exploitative, Fitzgerald feeble, but faithful to his maniacally egotistic wife Zelda. Hemingway's was the simpler case. He tried to force women into the role of passive, devoted creatures, as men had done since the stone age. The Fitzgeralds, by comparison, were disastrously modern -- drunk on fame and money, flaunting their style and beauty as if conforming to some tabloid image of how celebrities should behave, and spiralling into alcoholism and madness. From a literary angle, though, they triumphed. Zelda's tragedy gave Fitzgerald the inspiration for his last great novel, Tender Is the Night, whereas Hemingway's aggressive maleness wrecked his four marriages and his art. Meyers's analyses are always, as here, beautifully clear-cut, but they never lose sight of a truth that H G Wells voiced about the Lawrences' marriage: "The mysteries of human relationships are impenetrably obscure." 
Available at the Books First price of £8.49 plus 99p p&p on 0870 165 8585 and www.timesonline.co.uk/booksfirstbuy

From checker at panix.com Sat Jul 2 15:31:15 2005
From: checker at panix.com (Premise Checker)
Date: Sat, 2 Jul 2005 11:31:15 -0400 (EDT)
Subject: [Paleopsych] Bulletin of the Atomic Scientists: The Pentagon's psychic friends network
Message-ID: 

The Pentagon's psychic friends network
http://www.thebulletin.org/print.php?art_ofn=mj05shermer
Bulletin of the Atomic Scientists

The Men Who Stare at Goats
By Jon Ronson
Picador, 2004
278 pages; $24

By Michael Shermer
May/June 2005 pp. 60-61 (vol. 61, no. 03)

Allison was an attractive Oregonian brunette in a new ageish way, before the new age bloomed in the 1980s. She wore all-natural fibers, flowers in her hair, and nothing on her feet. But what most intrigued me in our year of distance dating were Allison's spiritual gifts. I knew she could see through me metaphorically, but Allison also saw things that she said were not allegorical: body auras, energy chakras, spiritual entities, and light beings. One night she closed the door and turned off the lights in my bathroom and told me to stare into the mirror until my aura appeared. During a drive one evening she pointed out spiritual beings dotting the landscape. I tried to see the world as Allison did, but I couldn't. I was a skeptic, and she was a psychic.

This was the age of paranormal proliferation. While a graduate student in experimental psychology, I saw on television the Israeli psychic Uri Geller bend cutlery and reproduce drawings using, so he said, psychic powers alone. Since a number of experimental psychologists had tested Geller and declared him genuine, I began to think that there might be something to it, even if I couldn't personally get with the paranormal program. But then one night I saw the magician James "The Amazing" Randi on Johnny Carson's Tonight Show, replicating with magic everything Geller did. Randi bent spoons, duplicated drawings, levitated tables, and even performed a psychic surgery. When asked about Geller's ability to pass the tests of professional scientists, Randi explained that scientists are not trained to detect trickery and intentional deception, the very art of magic. Randi's right. I vividly recall a seminar that Allison and I attended in which a psychic healer shoved a 10-inch sail needle through his arm with no apparent pain and only a drop of blood. Years later, and to my chagrin, Randi performed the same feat with the simplest of magic.

Randi confirmed my skeptical intuitions about all this paranormal piffle, but I always assumed that it was the province of the cultural fringes. Then, in 1995, the story broke that for the previous 25 years the U.S. Army had invested $20 million in a highly secret psychic spy program called Star Gate (also Grill Flame and Scanate), a Cold War project intended to close the "psi gap" (the psychic equivalent of the missile gap) between the United States and Soviet Union. The Soviets were training psychic spies, so we would too.

The Men Who Stare at Goats, by British investigative journalist Jon Ronson, is the story of this program, how it started, the bizarre twists and turns it took, and how its legacy carries on today. (Ronson's previous book, Them: Adventures with Extremists, explored the paranoid world of cult-mongers and conspiracy theorists.) In a highly readable narrative style, Ronson takes readers on a Looking Glass-like tour of what U.S.
Psychological Operations (PsyOps) forces were researching: invisibility, levitation, telekinesis, walking through walls, and even killing goats just by staring at them (the ultimate goal was killing enemy soldiers telepathically). In one project, psychic spies attempted to use "remote viewing" to identify the location of missile silos, submarines, POWs, and MIAs from a small room in a run-down Maryland building. If these skills could be honed and combined, perhaps military officials could zap remotely viewed enemy missiles in their silos, or so the thinking went. Initially, the Star Gate story received broad media attention (including a spot on ABC's Nightline) and made a few of the psychic spies, such as Ed Dames and Joe McMoneagle, minor celebrities. As regular guests on Art Bell's pro-paranormal radio talk show, the former spies spun tales that, had they not been documented elsewhere, would have seemed like the ramblings of paranoid cultists. (There is even a connection between Dames, Bell, and the Heaven's Gate cult mass suicide in 1997, in which 39 UFO devotees took a permanent "trip" to the mother ship they believed was trailing the Hale-Bopp comet.) But Ronson has brought new depth to the account by carefully tracking down leads, revealing connections, and uncovering previously undisclosed stories. For example, Ronson convincingly connects some of the bizarre torture techniques used on prisoners at Cuba's Guantanamo Bay and at Iraq's Abu Ghraib prison with similar techniques employed during the FBI siege of the Branch Davidians in Waco, Texas. FBI agents blasted the Branch Davidians all night with such obnoxious sounds as screaming rabbits, crying seagulls, dentist drills, and Nancy Sinatra's "These Boots Are Made for Walking." The U.S. military employed the same technique on Iraqi prisoners of war, instead using the theme song from the PBS kids series Barney and Friends--a tune many parents concur does become torturous with repetition. One of Ronson's sources, none other than Geller (of bent-spoon fame), led him to Maj. Gen. Albert Stubblebine III, who directed the psychic spy network from his office in Arlington, Virginia. Stubblebine thought that with enough practice he could learn to walk through walls, a belief encouraged by Lt. Col. Jim Channon, a Vietnam vet whose post-war experiences at such new age meccas as the Esalen Institute in Big Sur, California, led him to found the "first earth battalion" of "warrior monks" and "Jedi knights." These warriors, according to Channon, would transform the nature of war by entering hostile lands with "sparkly eyes," marching to the mantra of "Om," and presenting the enemy with "automatic hugs." Disillusioned by the ugly carnage of modern war, Channon envisioned a battalion armory of machines that would produce "discordant sounds" (Nancy and Barney?) and "psycho-electric" guns that would shoot "positive energy" at enemy soldiers. Although Ronson expresses skepticism throughout his narrative, he avoids the ontological question of whether any of these claims have any basis in reality. That is, can anyone levitate, turn invisible, walk through walls, or remotely view a hidden object? Inquiring minds (scientists) want to know. The answer is an unequivocal no. Under controlled conditions, remote viewers have never succeeded in finding a hidden target with greater accuracy than random guessing. 
The occasional successes you hear about are due either to chance or to suspect experiment conditions, like when the person who subjectively assesses whether the remote viewer's narrative description seems to match the target already knows the target location and its characteristics. When both the experimenter and the remote viewer are blinded to the target, all psychic powers vanish. Herein lies an important lesson that I have learned in many years of paranormal investigations and that Ronson gleaned in researching his illuminating book: What people remember rarely corresponds to what actually happened. Case in point: A man named Guy Savelli told Ronson that he had seen soldiers kill goats by staring at them, and that he himself had also done so. But as the story unfolds we discover that Savelli is recalling, years later, what he remembers about a particular "experiment" with 30 numbered goats. Savelli randomly chose goat number 16 and gave it his best death stare. But he couldn't concentrate that day, so he quit the experiment, only to be told later that goat number 17 had died. End of story. No autopsy or explanation of the cause of death. No information about how much time had elapsed; the conditions, like temperature, of the room into which the 30 goats had been placed; how long they had been there, and so forth. Since Ronson was skeptical, Savelli triumphantly produced a videotape of another experiment where someone else supposedly stopped the heart of a goat. But the tape showed only a goat whose heart rate dropped from 65 to 55 beats per minute. That was the extent of the empirical evidence of goat killing, and as someone who has spent decades in the same fruitless pursuit of phantom goats, I conclude that the evidence for the paranormal in general doesn't get much better than this. They shoot horses, don't they? Michael Shermer is the publisher of Skeptic magazine (www.skeptic.com), a columnist for Scientific American, and the author of several books, including Why People Believe Weird Things (1997) and Science Friction: Where the Known Meets the Unknown (2005). From checker at panix.com Sat Jul 2 15:31:27 2005 From: checker at panix.com (Premise Checker) Date: Sat, 2 Jul 2005 11:31:27 -0400 (EDT) Subject: [Paleopsych] NYTBR: 'The Genius Factory': Test-Tube Superbabies Message-ID: 'The Genius Factory': Test-Tube Superbabies New York Times Book Review, 5.7.3 http://www.nytimes.com/2005/07/03/books/review/03MORRICE.html THE GENIUS FACTORY The Curious History of the Nobel Prize Sperm Bank. By David Plotz. 262 pp. Random House. $24.95. By POLLY MORRICE ''All parents expect too much of their children,'' David Plotz writes in ''The Genius Factory,'' his beguiling account of one man's struggle to ensure that everyone's children -- at least white ones -- would come up to the mark. In our era of rampant parental ambition, of ''aggro soccer dads and home schooling enthusiasts plotting their children's future one spelling bee at a time,'' the cockeyed vision of Robert K. Graham, a California millionaire who sought to create cadres of baby geniuses, seems less bizarre than it probably did in 1980, when Graham's Repository for Germinal Choice, better known as the Nobel Prize sperm bank, opened its doors. Plotz, only 10 at the time, recalls his father's appalled reaction to the notion of using brainiac sperm to spawn wunderkinder: He tried to explain it was ''the sort of thing Hitler would have tried.'' Has Graham's project lost its sinister edge? 
This is one of two inquiries that Plotz, the deputy editor at Slate, explores in his first book. The reader may conclude Hitler would have been more efficient than Graham. Although Graham's business talents allowed him to parlay his invention of plastic eyeglass lenses into a great fortune, he fumbled the first stage of his grand scheme -- cajoling Nobel winners in science to provide their superior seed to improve America's gene pool. The problem was his showpiece donor: William Shockley, a pioneer of the transistor who shared the 1956 Nobel in physics. Shockley's sperm, ''a superb asset,'' in Graham's view, was the first contribution frozen, color-coded and offered to infertile couples eager to conceive. In this case Graham's natural marketing flair was done in by his knee-jerk adoration of brilliance. For years Shockley had preached that whites were genetically superior to blacks, and he was widely despised. Reporters who might have seen the genius sperm bank as ''well meaning and perhaps even visionary'' perceived it as inseparable from Shockley's racism. It was reviled as a horror and lampooned as a joke, and Nobel donors shunned it. So the Nobel Prize sperm bank produced no Nobel offspring (even Shockley quit donating sperm, fearing his was too aged to beget healthy children). Yet Graham kept the bank in business nearly two decades, with slightly lowered standards for donors. His staff wooed successful scientists and businessmen who were athletic, healthy and tall (Graham discovered American parents were wary of little eggheads). He lured customers by letting them select donors from an irresistible collection of what Plotz calls ''prime cuts of American man.'' By the time the bank closed in 1999, its customers had produced 215 babies, a respectable addition to the national ''germ plasm,'' as Graham might have said. Those children populate the second part of Plotz's story. In a 2001 article in Slate, Plotz sought information from anyone connected with the repository. He soon found himself cast as the ''Semen Detective,'' trying to hook up sperm-bank children and their mothers with the anonymous progenitors. This would be difficult territory for any writer, and Plotz has to reassure himself that none of his confidants wants him to ''go all Oprah.'' No wonder. We meet, for instance, a young man who desperately hopes his biological dad will be a better father than the one who raised him. Plotz's kindness shines through, but some readers may wonder if the book's halves -- explorations of the nature of parenthood and the morality of the Nobel sperm bank -- are coherent. But in the end, the themes mesh. Plotz's meetings with employees, consumers and offspring of the repository, sympathetic people on the whole, may have led him to his understated conclusion that the enterprise wasn't so terrible. For one thing, Graham's inspired strategy of providing consumers a choice of the most desirable men possible freed women from the tyranny of early fertility doctors. And it has become standard industry practice; as Plotz says, ''All sperm banks have become eugenic sperm banks.'' Indeed, reproductive technologies all have eugenic possibilities now, especially preimplantation genetic diagnosis, a means of screening embryos that may one day let parents select the traits they wish for their children. Plotz labels this petri dish micromanagement an instance of ''private eugenics.'' But, he argues, even parents who ''will be lining up for P.G.D. 
and hoping for a prodigy'' have no use for traditional eugenics, which, in its brutal, negative form, culminated in the Nazis' ''mercy killings'' of those they judged unfit. ''Negative eugenics,'' Plotz says, ''was state-sponsored and brutal. But 'positive' eugenics took a milder approach.'' Graham's version ''sought to increase the number of outstanding people,'' in Plotz's phrase. Is personal eugenics -- producing a superkid for yourself instead of for the master race -- problematic? Plotz suggests the influence of genes is dicey enough and the role of nurture strong enough that we are delusional if we think we can make our children ''what we want them to be, rather than what they are.'' This conclusion, however comforting for parents of teenagers, won't quash everyone's objections. It doesn't address the recent swing toward nature in the old nature vs. nurture debate. Nor does it provide an answer for those who fear that prenatal screening may lead scientists to limit future research on genetic disorders. But Plotz's take on the role of genes now -- in our imaginations and in fact, so far as we can determine that -- is humane and funny, which are fine traits for any argument, or any book.

Polly Morrice is writing a book about autism.

From checker at panix.com Sat Jul 2 15:31:33 2005
From: checker at panix.com (Premise Checker)
Date: Sat, 2 Jul 2005 11:31:33 -0400 (EDT)
Subject: [Paleopsych] NYTBR: 'Three Billion New Capitalists': Consider the Outsource
Message-ID: 

'Three Billion New Capitalists': Consider the Outsource
New York Times Book Review, 5.7.3
http://www.nytimes.com/2005/07/03/books/review/03BLODGET.html

[First chapter appended.]

THREE BILLION NEW CAPITALISTS
The Great Shift of Wealth and Power to the East.
By Clyde Prestowitz.
321 pp. Basic Books. $26.95.

By HENRY BLODGET

IF you've managed to ignore the alarm bells on the outlook for American economic leadership -- and you enjoy dreaming -- don't read Clyde Prestowitz's ''Three Billion New Capitalists: The Great Shift of Wealth and Power to the East.'' It argues that the United States faces such serious fiscal and competitive challenges that we may be headed not only for a declining standard of living but for a 1930's-style depression with a capital D.

Here's the story. In the golden age, 1950-73, we had it all -- low-cost manufacturing, rising wages, technological dominance, a highly educated and motivated work force, a trade surplus. Until 1971, our reserve currency was backed by gold, forcing us to be responsible. We had control over our economic destiny. Since then, bit by bit, we've lost much of our strength and are in danger of losing the rest.

Our first problem is the surge in competitiveness on the part of the rest of the world, especially China and India, a trend Thomas L. Friedman analyzes in detail in ''The World Is Flat.'' Even if the playing field were level -- which it isn't -- we would not be able to compete with the combination of low-cost labor, talent and fire in the belly of these two behemoths.

Our second problem is that we still think we're living in the golden age. In fact, we suffer from a misguided sense of superiority, profligate spending habits, a weak education system, mammoth debts, a ballooning trade deficit and a religious devotion to free-trade theories developed before the Industrial Revolution. Each of these issues could consume a book, but Prestowitz, president of the Economic Strategy Institute and a trade negotiator in the Reagan administration, packs them into one.
The heart of the question, as he sees it, is that we are not defending the jewel in our economic crown -- our technology and manufacturing capabilities -- but are instead waxing poetic about the virtues of free trade while more practical countries walk off with our loot. This, he contends, will lead to the gutting of our economy, with well-paid skilled jobs replaced by low-paid menial ones, and an America in hock to the world's next economic leaders. Globalization, of course, is nothing new. The ''hollowing out'' debate hinges on whether the United States can replace the jobs it loses with equal or better ones. Capitalism is fueled by Schumpeter's creative destruction -- new forever displacing old -- and this country has thrived through transitions from agriculture to manufacturing to automation to outsourcing to services. Free-trade advocates argue that globalization is just the latest phase of a continuing evolution. Trade hawks like Prestowitz argue that now is different because of the sheer size of India and China and our inadequate response to the new situation. Globalization has always been a touchy subject (after all, Americans lose jobs when companies move production and services overseas) -- so touchy that most popular discussion of it is inflammatory or inane or both. Last year, John Kerry branded corporations and executives who send jobs offshore ''Benedict Arnold companies and C.E.O.'s,'' and a White House adviser, N. Gregory Mankiw, provoked many a storm by suggesting that offshoring was actually beneficial because, among other things, it lowers prices and makes labor available for new opportunities. Mankiw may have been impolitic, but Kerry was just pandering. If the choice is go offshore or go out of business, a chief executive doesn't have a choice. Prestowitz acknowledges that many companies can't survive today without offshoring, but argues that we often abandon industries we could continue to dominate and so lose the ability to lead the next wave of innovation. He lays the blame on government, not the private sector. ''Whether it recognizes the fact or not,'' he declares, ''the United States has a de facto economic strategy, and right now it is to send the country's most important industries overseas.'' He observes, moreover, that the benefits of offshoring go beyond cost: ''You do save money,'' a senior manager at the semiconductor equipment maker KLA-Tencor says about sending work to India. ''But pretty soon, you realize the work is getting done faster and better, and you start sending more and more of it. You also start sending more advanced work and then have to figure out what, if anything, you really don't want to send.'' The work is getting done faster and better, Prestowitz argues, because Indians are not only hungrier than we are, but better educated. China, India, Japan and Europe all churn out more science and engineering degrees than we do. Worse -- and downright embarrassing -- is the state of American education. Globally, our 12th-graders rank only in the 10th percentile in math (that's the 10th percentile, not 10th place). Our students also rank first in their assessment of their own performance: we're not only poorly prepared, we have delusions of grandeur. One common argument against the hollowing-out theory is that we can afford to lose jobs in low-tech manufacturing because we retain our high-tech design and manufacturing capabilities. Prestowitz counters that China's and India's incentives and resources are so compelling that the high-tech work is leaving, too.
Another argument is that a revaluation of the yuan will curb imports and stimulate exports, thus repairing the trade deficit. In fact, Prestowitz asserts, our manufacturing capacity has been so gutted that we can't export our way out, even if the dollar's value drops to zero. The only path is to cut spending. But Prestowitz risks sounding like Chicken Little when he pronounces the globalization of today more than just another ''gale of creative destruction'' to which our economy will eventually adapt. Manufacturing has long been declining as a percentage of the United States economy, but the jobs lost have been more than offset by growth in services (in health care, financial services, law, retailing, and so on). Prestowitz points out that services are now being offshored, too, but not (yet) at a rate threatening our main growth industries. The McKinsey Global Institute, for example, reports that while 24 million Americans switch jobs each year, only 3 million jobs are estimated to go offshore by 2015. The critical question, still to be satisfactorily answered, is whether offshoring produces net economic gain or loss. Prestowitz deconstructs an oft-cited McKinsey study concluding that each $1 of spending sent offshore results in an overall gain in the gross domestic product of $1.12 to $1.14. He points out the study relies on data suggesting that 69 percent of displaced workers found jobs at an average of 97 percent of their former pay. This leaves 31 percent who didn't find new jobs. Not only that, ''if employers took McKinsey's advice to increase their offshoring,'' he says, the gain would quickly become a loss. In America's boom time, government-business cooperation was considered anathema to free-market principles -- ''Politicians shouldn't pick winners and losers!'' In Prestowitz's view, the laissez-faire trade theories of the 19th century have no place in 2005; since he holds that many of our successes have resulted from public-private collaboration, most of his proposals for maintaining American competitiveness boil down to government taking a more active role. Pay teachers more. Help workers move between jobs by offering wage insurance and portable health coverage. Reduce oil consumption by providing incentives for efficient cars (and include S.U.V.'s in mileage regulations). Tax spending, not saving. Help strategic industries with federal loan guarantees and grants. Call ''a new Bretton Woods Conference'' to set steps for reducing the role of the dollar in the world economy and so defuse the trade-deficit bomb. Whatever you think about offshoring, most of these ideas are no-brainers. Henry Blodget, a former securities analyst, writes frequently for Slate and New York magazine. ------------- First chapter of 'Three Billion New Capitalists' http://www.nytimes.com/2005/07/03/books/chapters/0703-1st-prest.html By CLYDE PRESTOWITZ As the headlines listed in the prologue attest, many world figures now fear that a crisis scenario may no longer be a fantasy. American leaders are not concerned. None other than former Secretary of State Colin Powell recently told the Atlantic Monthly that, "The United States cannot be touched in this generation by anyone in terms of military power, economic power, the strength of our political system, and our values system." There are good reasons for Powell's confidence. With just 5 percent of the world's population, the United States accounts for over 30 percent of its production and almost 40 percent of its consumption. 
At $11 trillion, America's gross domestic product is more than twice as big as that of the next largest national economy, and its real per capita income is far above that of any other major country. Its language, American English, is the language of commerce worldwide, and the U.S. dollar is the world's money. Go anywhere in the world and people will tell you how much something costs in dollars and will accept dollars without hesitation. Indeed, Americans have a special privilege in this regard: whereas others must first earn dollars in order to buy oil or wheat or Toyotas on the international market, Americans only need to print more dollars. Of the world's 1,000 largest corporations, 423 are American, and the New York and Nasdaq stock exchanges account for 44 percent of the value of all the stocks in the world. The United States is home to the world's finest universities and the overwhelming majority of its leading research centers, and it spends more on research and development than the next five countries combined. It is, quite simply, the richest, most powerful nation the world has ever seen. Americans long ago adopted the view that helping the rest of the world get rich is good for America. And thus, for the past half century, the United States has-through the process of globalization-orchestrated the growing integration of national economies to create an international exchange of goods, services, money, technology, and people. The results have been as intended and expected. This globalization has largely been directed by America, but it has enhanced American wealth and power by enabling others, particularly our allies, to flourish. It was this process, not military threats, that won the Cold War by lifting billions in the free world out of poverty and creating centers of wealth and power around the planet. In Asia, Japan became the world's second largest economy; other countries like Singapore and South Korea flourished so greatly that they became known as "tigers." Across the Atlantic, the European Union grew from six to twenty-five countries and introduced the euro, the first common European currency since Roman times. In Latin America, Mexico has attracted huge foreign investment by becoming a virtual extension of the American economy, and Brazil is flourishing by dint of American and other foreign investment. Although American corporations initially led globalization, they are no longer the only or even the dominant players. Sony, Nokia, Cemex, and Samsung are just a few of the growing numbers of non-American companies that have become global household names. Of course, American influence has not disappeared. American music, clothing styles, sports stars, and movies are not the only entries, but they set the pace, as have Silicon Valley entrepreneurs and the Nobel Prize winners of great American universities like MIT, Harvard, Stanford, and Caltech. Some people and countries have been uncomfortable with the American flavor of this system and have criticized globalization as a euphemism for Americanization. Yet they have found it hard to resist-one of McDonald's most successful restaurants being on the Champs Elys?es in Paris, for example. In the end everybody seems to want to join, and in fact almost everybody has joined. "Globalization" was an odd term to use during the Cold War because half the world was socialist or communist and not playing. A citizen of the communist bloc who dared to even suggest playing risked being purged (or worse) as a hated "capitalist roader." 
Over the past two decades, however, China, India, and the former Soviet Union all decided to leave their respective socialist workers' paradises and drive with their combined 3 billion citizens onto the once despised capitalist road. Although these people are mostly poor, the number having an advanced education and sophisticated skills is larger than the populations of many first world countries. They are arriving on the scene in the context of revolutionary changes. A series of global treaties, concluded largely at American behest, has dramatically lowered trade and investment barriers, making the old rutted capitalist road a lot smoother. With contract manufacturers that can produce anywhere in the world and express delivery companies like FedEx and UPS that can deliver anywhere in the world in thirty-six hours, the road has become a highway. Finally, the global deployment of the Internet negated time and distance for transactions that can be done in bits instead of atoms. Now the highway is a high-speed capitalist raceway, and those 3 billion new people driving on it are, effectively, in your office and living room, and you are in theirs. All of this has generated a whole new wave and model of globalization that is turning the world upside down. The global economic system was designed during the Cold War to attract these newcomers to capitalism, but no one actually anticipated that they would join or what their absorption into it would mean. Although this new wave of globalization has many potential advantages for everyone, it also poses serious challenges. It comes at a time when a fundamental flaw in the international economic structure has combined with American self-indulgence and Asian mercantilism to stress the system and make it vulnerable. The irony here is that the winners of the Cold War were less prepared for victory than the losers were for defeat. Thus the impact of the new wave, if not handled carefully, could bring the whole system crashing down. They Can't Move the Snow to India It was in the winter of 2003 that my oldest son, Chummy, gave me my first glimpse of the powerful forces being unleashed by the new capitalists and how they might interact with the old system and structures. We were skiing on the north side of Lake Tahoe in California, where he lives. On the lift he asked if I would consider coinvesting with him in a local snow-removal company. "What do you mean by snow removal?" I asked, somewhat surprised because my son is a high-level software developer. "Well," he explained, "the company has contracts to plow the parking lots and access roads of the hotels and vacation condominiums around here whenever it snows, and that happens pretty frequently between November and May." "But what on earth are you doing," I exclaimed, "going into something as mundane as snow plowing?" "Dad," he said, "they can't move the snow to India." It took a minute for that to sink in. It had never occurred to me that my son had anything to fear from India or anywhere else in terms of his career path. It was I, after all, who had advised him to go into computer science, secure in the knowledge that it would put him in a position to write his own ticket. When I asked if his job was in any danger, he thought it unlikely but noted that "outsourcing" is the new management buzzword. 
"You can never be sure," he said, "that some MBA hotshot with little knowledge of the technology but a big need to impress top management with his or her sophistication won't decide to move the whole operation offshore to India or elsewhere." My son further explained that all the big consulting and service firms like Bearing Point, IBM, Deloitte, and others were making daily pitches to top management on how much they could save by outsourcing to India. After asking about the snow removal company's financial status and agreeing to put in a few dollars, I decided to add India (where I hadn't been in twenty years) to the countries in Asia I was scheduled to visit over the next four weeks. At my first stop in Tokyo, discussion centered almost exclusively on China. The tone of the talk was somewhat schizophrenic. Several years ago the Japanese had feared being "hollowed out" as China took over production of steel, machine tools, and electronic components, but now they were talking of China as an opportunity. They even spoke of China possibly replacing America as the world's growth engine and of Japan orienting itself more toward China and less toward America. They were proud of their corporate and national strategy for maintaining a strong manufacturing base that allowed them-unlike the United States, which they said had little to sell-to capitalize on the China boom. Yet in the hon-ne, or real truth, of quiet conversation after a few drinks, Japanese corporate and government leaders alike wondered how Japan would be able to compete with China in the future. In Beijing and Shanghai I was struck again, as I have always been during my visits over the past twenty years, by the rapidity of China's continuing modernization. Stay away from China for six months and you no longer recognize the place when you return. As I took the twelve-minute ride from the airport to downtown on Shanghai's new maglev bullet train, I couldn't help thinking how nice it would be to have something like this in America. That thought recurred over the next few days as I made my rounds of factories, government offices, consulting firms, and think tanks. By now everyone knows that China is the world's location of choice for low-cost commodity manufacturing. But what I kept hearing and seeing was that it is also rapidly becoming the location of choice for high-tech manufacturing and even research and development. This impression was greatly strengthened by my visit with old friends at Motorola in Beijing. In the 1980s, as the U.S. trade deficit began to soar, Motorola was a prime leader in an effort to ensure continued high-tech production in the United States through a coordinated industry-government program to improve U.S. high-tech competitiveness. Now, I was told, Motorola had just moved a big part of its manufacturing and R&D to China. I winged on to Singapore, where I was scheduled to meet with the senior minister and father of his country, Lee Kuan Yew. I knew that Lee, having foreseen that China would displace Singapore as a low-cost manufacturing location, had been urging a new high-tech and service-oriented strategy for the now wealthy and high-cost city-state. How did he view the future? With concern, was the answer. China was moving much faster than even he had anticipated, and India's domination of services was completely unexpected. In India, after a tour of Delhi, Hyderabad, Chennai, and Bangalore, I realized I was seeing a revolution-a different, more exciting, and more challenging future than I had imagined. 
In the "accent neutralization" classes at call center training schools, I listened to English-speaking Indian young people learn to sound like people in Kansas or Ottawa. Thus, if you're a customer of Dell Computer or United Airlines or some other U.S. company phoning a call center to get tech support or make reservations for a trip, you will think you're talking to someone across town or in another American city; you won't realize that India is at the other end of your line. In Hyderabad I met with Raju Ramalinga, the founder of Indian infotech services provider Satyam, and I listened as he explained how in 1972 he had started sending programmers to U.S. clients for limited software writing contracts. Now, at their request, he has taken over complete management of those clients' back offices all over the world. By doing the work at the Satyam campus outside of town, he cuts client costs by 70 percent. In Bangalore I saw 1800 Indians with Ph.D.s in electrical engineering and computer science designing Intel's latest chips. Again, the cost savings were huge; more importantly, Intel couldn't find the same number of equally qualified people in the United States. In Chennai I visited the new biotech industrial park to be directed by Krishna Ella, a University of Michigan Ph.D. who, after several years at the leading edge of biotechnology in the United States, has come home to India, where costs are 20 percent of those in the U.S. market. By the end of my tour, I understood my son's interest in snow removal. I also understood why the notion of outsourcing was sending shivers down the spines of millions of formerly secure upper-middle-class professionals who were beginning to appreciate how blue-collar workers feel about visiting the unemployment office. I flew home via Frankfurt and Paris. On the Lufthansa flight from Delhi, I read the cover story in Der Spiegel about whether, in response to global competition, Europeans could bring themselves to change from their current thirty-five-hour work week to a forty-hour one. After what I had seen over the past three weeks, the question seemed trivial. Can Europe survive? is more appropriate, I thought. But then I remembered that the maglev trains in Shanghai were built in Europe, that Finland has a trade surplus with China, and in Europe my cell phone would work everywhere, instead of only in certain locations, as it does in the United States. At Charles de Gaulle Airport in Paris, I bought a pile of newspapers and magazines for the flight to Washington Dulles. The Guardian of London had a front-page story about how the deficit-ridden British National Health Service was thinking about air-expressing blood samples to India for analysis to save money. Lab results would be returned via e-mail. As I arrived in Washington, tax time was fast approaching. So I booked a quick appointment with my tax accountant at a medium-size local firm. As we chatted about my expenses, donations, and deductions, I happened to mention that I was just back from India. "India!" he exclaimed. "We just did a deal to move our whole data processing operation to Bangalore. Your taxes will actually be calculated there." He explained that the move was saving the firm 80 percent on its processing costs. (I wondered why my bill was not being reduced, but that's another book.) That night I phoned my daughter to let her know I was back and to get caught up on the grandchildren. . . . 
From anonymous_animus at yahoo.com Sat Jul 2 18:09:45 2005 From: anonymous_animus at yahoo.com (Michael Christopher) Date: Sat, 2 Jul 2005 11:09:45 -0700 (PDT) Subject: [Paleopsych] mythology and the mind In-Reply-To: <200507021800.j62I0dR01579@tick.javien.com> Message-ID: <20050702180945.8406.qmail@web30803.mail.mud.yahoo.com> >>Why is there is an eternal return of certain mythic themes in religion, such as messiah myths, flood myths, creation myths, destruction myths, redemption myths, and end of the world myths? What do these recurring themes tell us about the workings of the human mind and culture?<< --Maybe that the mind operates in some ways like a fractal, and that social patterns of dominance, appeasement and sacrifice are projected onto nature automatically, given the lack of a more accurate model. Michael __________________________________________________ Do You Yahoo!? Tired of spam? Yahoo! Mail has the best spam protection around http://mail.yahoo.com From ljohnson at solution-consulting.com Sat Jul 2 20:14:45 2005 From: ljohnson at solution-consulting.com (Lynn D. Johnson, Ph.D.) Date: Sat, 02 Jul 2005 14:14:45 -0600 Subject: [Paleopsych] Bulletin of the Atomic Scientists: The Pentagon's psychic friends network In-Reply-To: References: Message-ID: <42C6F5B5.5040006@solution-consulting.com> Frank, thanks for passing this on. Surely you will not go to hell, disbelief notwithstanding. As I have come to expect from Shermer, the key argument is an outright lie, namely that there is no statistical support for remote viewing. See Jessica Utts' report, and contrast with Hyman's report which he decided upon before seeing the evidence. (Hyman is another skeptic who cannot be convinced by evidence). Even Hyman admits that the statistical evidence is very strong, but he still undermined the program. http://en.wikipedia.org/wiki/Remote_viewing http://anson.ucdavis.edu/~utts/air2.html Utts' own abstract says: "Using the standards applied to any other area of science, it is concluded that psychic functioning has been well established. The statistical results of the studies examined are far beyond what is expected by chance. Arguments that these results could be due to methodological flaws in the experiments are soundly refuted. Effects of similar magnitude to those found in government-sponsored research at SRI and SAIC have been replicated at a number of laboratories across the world. Such consistency cannot be readily explained by claims of flaws or fraud." I have the greatest of respect for the scientific method, and I am always disappointed by Shermer and his ilk that distort and lie in order to support their strict materialistic paradigm. It is inadequate to explain the data, so let's not collect any more data. "Sit down before fact like a small child, and be prepared to give up every preconceived notion and follow wherever and to whatever abyss nature lead, or you will learn nothing." -- T. S. Huxley Other than that, I have no strong feelings. Lynn Premise Checker wrote: > The Pentagon's psychic friends network > http://www.thebulletin.org/print.php?art_ofn=mj05shermer > Bulletin of the Atomic Scientists > > The Men Who Stare at Goats > By Jon Ronson > Picador, 2004 > 278 pages; $24 > > By Michael Shermer > May/June 2005 pp. 60-61 (vol. 61, no. 03) > > Allison was an attractive Oregonian brunette in a new ageish way, > before the new age bloomed in the 1980s. She wore all-natural fibers, > flowers in her hair, and nothing on her feet. 
But what most intrigued > me in our year of distance dating were Allison's spiritual gifts. I > knew she could see through me metaphorically, but Allison also saw > things that she said were not allegorical: body auras, energy > chakras, > spiritual entities, and light beings. One night she closed the door > and turned off the lights in my bathroom and told me to stare into > the > mirror until my aura appeared. During a drive one evening she pointed > out spiritual beings dotting the landscape. I tried to see the world > as Allison did, but I couldn't. I was a skeptic, and she was a > psychic. > > This was the age of paranormal proliferation. While a graduate > student > in experimental psychology, I saw on television the Israeli psychic > Uri Geller bend cutlery and reproduce drawings using, so he said, > psychic powers alone. Since a number of experimental psychologists > had > tested Geller and declared him genuine, I began to think that there > might be something to it, even if I couldn't personally get with the > paranormal program. But then one night I saw the magician James "The > Amazing" Randi on Johnny Carson's Tonight Show, replicating with > magic > everything Geller did. Randi bent spoons, duplicated drawings, > levitated tables, and even performed a psychic surgery. When asked > about Geller's ability to pass the tests of professional scientists, > Randi explained that scientists are not trained to detect trickery > and > intentional deception, the very art of magic. Randi's right. I > vividly > recall a seminar that Allison and I attended in which a psychic > healer > shoved a 10-inch sail needle through his arm with no apparent pain > and > only a drop of blood. Years later, and to my chagrin, Randi performed > the same feat with the simplest of magic. > > Randi confirmed my skeptical intuitions about all this paranormal > piffle, but I always assumed that it was the province of the cultural > fringes. Then, in 1995, the story broke that for the previous 25 > years > the U.S. Army had invested $20 million in a highly secret psychic spy > program called Star Gate (also Grill Flame and Scanate), a Cold War > project intended to close the "psi gap" (the psychic equivalent of > the > missile gap) between the United States and Soviet Union. The Soviets > were training psychic spies, so we would too. The Men Who Stare at > Goats, by British investigative journalist Jon Ronson, is the > story of > this program, how it started, the bizarre twists and turns it took, > and how its legacy carries on today. (Ronson's previous book, Them: > Adventures with Extremists, explored the paranoid world of > cult-mongers and conspiracy theorists.) > > In a highly readable narrative style, Ronson takes readers on a > Looking Glass-like tour of what U.S. Psychological Operations > (PsyOps) > forces were researching: invisibility, levitation, telekinesis, > walking through walls, and even killing goats just by staring at them > (the ultimate goal was killing enemy soldiers telepathically). In one > project, psychic spies attempted to use "remote viewing" to identify > the location of missile silos, submarines, POWs, and MIAs from a > small > room in a run-down Maryland building. If these skills could be honed > and combined, perhaps military officials could zap remotely viewed > enemy missiles in their silos, or so the thinking went. 
> > Initially, the Star Gate story received broad media attention > (including a spot on ABC's Nightline) and made a few of the psychic > spies, such as Ed Dames and Joe McMoneagle, minor celebrities. As > regular guests on Art Bell's pro-paranormal radio talk show, the > former spies spun tales that, had they not been documented elsewhere, > would have seemed like the ramblings of paranoid cultists. (There is > even a connection between Dames, Bell, and the Heaven's Gate cult > mass > suicide in 1997, in which 39 UFO devotees took a permanent "trip" to > the mother ship they believed was trailing the Hale-Bopp comet.) > > But Ronson has brought new depth to the account by carefully tracking > down leads, revealing connections, and uncovering previously > undisclosed stories. For example, Ronson convincingly connects > some of > the bizarre torture techniques used on prisoners at Cuba's Guantanamo > Bay and at Iraq's Abu Ghraib prison with similar techniques employed > during the FBI siege of the Branch Davidians in Waco, Texas. FBI > agents blasted the Branch Davidians all night with such obnoxious > sounds as screaming rabbits, crying seagulls, dentist drills, and > Nancy Sinatra's "These Boots Are Made for Walking." The U.S. military > employed the same technique on Iraqi prisoners of war, instead using > the theme song from the PBS kids series Barney and Friends--a tune > many parents concur does become torturous with repetition. > > One of Ronson's sources, none other than Geller (of bent-spoon fame), > led him to Maj. Gen. Albert Stubblebine III, who directed the psychic > spy network from his office in Arlington, Virginia. Stubblebine > thought that with enough practice he could learn to walk through > walls, a belief encouraged by Lt. Col. Jim Channon, a Vietnam vet > whose post-war experiences at such new age meccas as the Esalen > Institute in Big Sur, California, led him to found the "first earth > battalion" of "warrior monks" and "Jedi knights." These warriors, > according to Channon, would transform the nature of war by entering > hostile lands with "sparkly eyes," marching to the mantra of "Om," > and > presenting the enemy with "automatic hugs." Disillusioned by the ugly > carnage of modern war, Channon envisioned a battalion armory of > machines that would produce "discordant sounds" (Nancy and Barney?) > and "psycho-electric" guns that would shoot "positive energy" at > enemy > soldiers. > > Although Ronson expresses skepticism throughout his narrative, he > avoids the ontological question of whether any of these claims have > any basis in reality. That is, can anyone levitate, turn invisible, > walk through walls, or remotely view a hidden object? Inquiring minds > (scientists) want to know. The answer is an unequivocal no. Under > controlled conditions, remote viewers have never succeeded in finding > a hidden target with greater accuracy than random guessing. The > occasional successes you hear about are due either to chance or to > suspect experiment conditions, like when the person who subjectively > assesses whether the remote viewer's narrative description seems to > match the target already knows the target location and its > characteristics. When both the experimenter and the remote viewer are > blinded to the target, all psychic powers vanish. 
> > Herein lies an important lesson that I have learned in many years of > paranormal investigations and that Ronson gleaned in researching his > illuminating book: What people remember rarely corresponds to what > actually happened. Case in point: A man named Guy Savelli told Ronson > that he had seen soldiers kill goats by staring at them, and that he > himself had also done so. But as the story unfolds we discover that > Savelli is recalling, years later, what he remembers about a > particular "experiment" with 30 numbered goats. Savelli randomly > chose > goat number 16 and gave it his best death stare. But he couldn't > concentrate that day, so he quit the experiment, only to be told > later > that goat number 17 had died. End of story. No autopsy or explanation > of the cause of death. No information about how much time had > elapsed; > the conditions, like temperature, of the room into which the 30 goats > had been placed; how long they had been there, and so forth. Since > Ronson was skeptical, Savelli triumphantly produced a videotape of > another experiment where someone else supposedly stopped the heart of > a goat. But the tape showed only a goat whose heart rate dropped from > 65 to 55 beats per minute. > > That was the extent of the empirical evidence of goat killing, and as > someone who has spent decades in the same fruitless pursuit of > phantom > goats, I conclude that the evidence for the paranormal in general > doesn't get much better than this. They shoot horses, don't they? > > Michael Shermer is the publisher of Skeptic magazine > (www.skeptic.com), a columnist for Scientific American, and the > author > of several books, including Why People Believe Weird Things (1997) > and > Science Friction: Where the Known Meets the Unknown (2005). > _______________________________________________ > paleopsych mailing list > paleopsych at paleopsych.org > http://lists.paleopsych.org/mailman/listinfo/paleopsych > > From waluk at earthlink.net Sun Jul 3 00:51:18 2005 From: waluk at earthlink.net (G. Reinhart-Waller) Date: Sat, 02 Jul 2005 17:51:18 -0700 Subject: [Paleopsych] mythology and the mind In-Reply-To: <20050702180945.8406.qmail@web30803.mail.mud.yahoo.com> References: <20050702180945.8406.qmail@web30803.mail.mud.yahoo.com> Message-ID: <42C73686.9060601@earthlink.net> Gerry says: Most literature students are aware that there is one basic concept called "conflict" and this results in 7 plots found in all literature. These are:

1. Man vs. Nature, as in Tarzan, Robinson Crusoe, The Call Of The Wild and Moby Dick.

2. Man vs. Man, exemplified by Shane, Othello, and Les Miserables.

3. Man vs. Environment, found in Dickens - Oliver Twist or David Copperfield, for example.

4. Man vs. God, such as Hermann Hesse's Siddhartha and the classic Zen And The Art Of Motorcycle Maintenance. For more overt battles with the heavens, see Homer's The Odyssey or the Book of Job in The Bible.

5. Man vs. Supernatural, as in H.G. Wells' War Of The Worlds and Washington Irving's The Legend Of Sleepy Hollow. Often the supernatural instead acts as a catalyst for other conflict - William Peter Blatty's The Exorcist causes Father Mike to question himself, and Edgar Allan Poe's The Tell-Tale Heart uses the spectral beating of a dead man's heart to illustrate a murderer's descent into madness.

6. Man vs. Self. Having now conquered all things that man cannot directly control - nature, God, other men, his environment, and the supernatural - he now finds that he must not be in conflict with himself in order to attain happiness. Sometimes these conflicts can be desperately dark and painful - Requiem For A Dream's sordid display of addiction and Hamlet's suicidal thoughts over the anguish of his mother's betrayal and father's death are eerie in that they touch close to home about the suffering of life. Other books which center on this conflict include Salinger's The Catcher In The Rye, Christopher Marlowe's Faust, Virginia Woolf's The Voyage Out, Wharton's Ethan Frome and John Updike's Rabbit, Run.

7. Man vs. Machine. For some unseemly reason, once man has conquered the things he cannot control, and has mastered his own self, he is still unsatisfied. His stasis is immediately dropped so that he may invent new things with which he can conflict. One can only wonder if man is doomed to conflict by its very recidivism, or if in some sad masochistic existentialism, the reason we spend so much time analyzing and writing about (and, in this case, creating) our conflicts is that to be is to suffer. As the wise Buddha said, "All is suffering." Still, it seems almost maddening to think that we were not content with the struggles listed before, but have since added machines to our list. The battle with the machines usually arises out of a dystopia that occurs as appearance and reality are blurred. Of course, the first real exploration of this conflict lay in a novel based on the first invention of ourselves - Mary Shelley's Frankenstein. Some other excellent pieces on this include Arthur C. Clarke's 2001: A Space Odyssey, Philip K. Dick's Man, Android, and Machine, and Kokaku Kidoutai's 1995 film Ghost In The Shell.

>>>>>Why is there an eternal return of certain mythic themes in religion, such as messiah myths, flood >>>myths, creation myths, destruction myths, redemption >>>myths, and end of the world myths? What do these >>>recurring themes tell us about the workings of the >>>human mind and culture?<< >>> >>> -------------- next part -------------- An HTML attachment was scrubbed... URL: From HowlBloom at aol.com Sun Jul 3 13:29:51 2005 From: HowlBloom at aol.com (HowlBloom at aol.com) Date: Sun, 3 Jul 2005 09:29:51 EDT Subject: [Paleopsych] why do we sleep? Message-ID: <24.7420d603.2ff9424f@aol.com> I found this, stopped what I was doing, filched it, wrote a few sentences on it for you, and here it is. Just something to niggle at your brain while it niggles at mine... The evolutionary reason for sleep and for the dream of flying are two of the most intriguing unanswered mysteries faced by modern psychology. If the work of sleep researchers like J. Alan Hobson and William Dement gives you the feeling that sleep is at least one area of study we can afford to pause and take a nap about, think again. Don't even bother to think. Just ponder this simple eye-opener: "Dolphins sleep with one-half of the brain at a time, closing one eye while floating or swimming about." Does that jar you awake? It certainly snaps me to attention. Now the question is this. Why DO we sleep? (And why do we dream of flying?) Does anyone have hard research or persuasive anecdote on this...aside from the usual suspects, like we sleep to digest the learning from experiences of the day?
Howard Here's the article this comes from: Retrieved July 3, 2005, from the World Wide Web http://www.sciencenews.org/articles/20050702/fob1.asp Science News Online Week of July 2, 2005; Vol. 168, No. 1 Sleepless in SeaWorld: Some newborns and moms forgo slumber Naila Moreira Orca-whale and dolphin mothers and their newborns appear not to sleep for a month after the pups' birth, researchers report. Neither parent nor offspring shows any ill effects from the long waking stint, and the animals don't later compensate with extra sleep. UP WITH THE BABY. An orca-whale mother and her newborn pup may forgo sleep for several weeks before adopting a normal pattern. Dolphins also exhibit this behavior. SeaWorld, San Diego No previously studied mammal stays awake for so long, says Jerry Siegel of the University of California, Los Angeles (UCLA), an investigator in the study. In the months following their wakeful period, baby whales and dolphins -- and their mothers -- ramped up slowly to sleep amounts typical of normal adults, Siegel and his colleagues report. The infants' sleep pattern contrasts with that of other mammals, which need extra sleep during infancy and gradually sleep less as they age. Oleg Lyamin, also of UCLA, started observing an orca mother and her baby just after it was born at SeaWorld, San Diego. Orcas usually snooze for 5 to 8 hours a night, closing both eyes and floating motionlessly. The SeaWorld orca mother and baby, Lyamin found, neither shut their eyes nor remained motionless. Instead, the animals were constantly active, with the infant surfacing for a breath every 30 seconds. The researchers made similar observations of another SeaWorld orca mom and baby. The team also watched dolphins at the Utrish Dolphinarium in Moscow. Dolphins sleep with one-half of the brain at a time, closing one eye while floating or swimming about. The team observed no sleeping behavior in the first month after birth among four dolphin mom-baby pairs. The findings, reported in the June 30 Nature, challenge prevailing notions of the purpose of sleep, some researchers say. "We're under the belief that if you don't get sleep, you can't perform, and you're at risk for developing all sorts of disorders," says Paul Shaw of Washington University in St. Louis. For instance, rats die after being deprived of sleep for just 2 weeks. The UCLA data are "the beginning of a change in the way we view sleep," says Shaw. Scientists have commonly hypothesized that people and other animals require sleep for brain development and learning (SN: 6/1/02, p. 341: http://www.sciencenews.org/articles/20020601/fob6.asp). "Here we have a developing [whale or dolphin] youngster with no evidence of sleep," says Irene Tobler of ETH-Zurich in Switzerland. "It will revolutionize many people's ways of thinking." Siegel argues that sleep is not required for brain development in these and other young animals and instead plays some role as yet unknown. Alternatively, whales and dolphins may have evolved unusual compensatory mechanisms that permit them to develop without sleep, while other animals still require sleep for brain development, Tobler says. Robert Stickgold of Harvard University suggests that mother and baby whales and dolphins may have evolved an unusual form of sleeping. "A sleepwalker makes it down the stairs, into the kitchen, into the refrigerator quite well while a [brain wave] recording says they're in deep sleep," he notes.
Stickgold says that such recordings from the animals could help determine whether the orcas and dolphins are awake. Siegel speculates that mothers and babies of both species need constant activity to survive. The mother pushes the baby to the surface to breathe at regular intervals. Also, the baby must stay warm in cold water while it develops its blubber coat. "The mystery is that they're ... dispensing with sleep behavior when so many sleep researchers have assumed that sleep has a vital function," Siegel says. If you have a comment on this article that you would like considered for publication in Science News, send it to editors at sciencenews.org. Please include your name and location. To subscribe to Science News (print), go to https://www.kable.com/pub/scnw/ subServices.asp. To sign up for the free weekly e-LETTER from Science News, go to http://www.sciencenews.org/pages/subscribe_form.asp. References: 2005. No sleep in the deep: Unlike other mammals, newborn dolphins and killer whales stay active 24/7 during first months of development. University of California, Los Angeles press release. June 29. Available at http://www.newsroom.ucla.edu/page.asp?RelNum=6274. Lyamin, O. . . . and J. Siegel. 2005. Animal behaviour: Continuous activity in cetaceans after birth. Nature 435(June 30):1177. Abstract available at http://dx.doi.org/10.1038/4351177a. Further Readings: Bower, B. 2002. Snooze power: Midday nap may awaken learning potential. Science News 161(June 1):341. Available at http://www.sciencenews.org/articles/20020601/fob6.asp. Brownlee, C. 2005. Losing sleep: Mutant flies need less shut-eye. Science News 167(April 30):275. Available at http://www.sciencenews.org/articles/20050430/fob2.asp. Hesman, T. 2000. Fly naps inspire dreams of sleep genetics. Science News 157(Feb. 19):117. Available at http://www.sciencenews.org/articles/20000219/fob4.asp. Milius, S. 2004. Sparrows cheat on sleep: Migratory birds are up at night but still stay sharp. Science News 166(July 17):38. Available at http://www.sciencenews.org/articles/20040717/fob7.asp. Sources: Paul Shaw Anatomy and Neurobiology Washington University School of Medicine 660 S. Euclid Avenue Campus Box 8108 St. Louis, MO 63110 Jerry Siegel Psychiatry and Biobehavioral Sciences Center for Sleep Research Neurobiology Research 151A3 VA GLAHS Sepulveda 16111 Plummer Street North Hills, CA 91343 Robert Stickgold Center for Sleep and Cognition Harvard Medical School Beth Israel Deaconess Medical Center E/FD861 330 Brookline Avenue Boston, MA 02115 Irene Tobler Institute of Pharmacology and Toxicology University of Zurich Winterthurerstrasse 190 CH-8057 Zurich Switzerland http://www.sciencenews.org/articles/20050702/fob1.asp From Science News, Vol. 168, No. 1, July 2, 2005, p. 3. Copyright (c) 2005 Science Service. All rights reserved. 
---------- Howard Bloom Author of The Lucifer Principle: A Scientific Expedition Into the Forces of History and Global Brain: The Evolution of Mass Mind From The Big Bang to the 21st Century Recent Visiting Scholar-Graduate Psychology Department, New York University; Core Faculty Member, The Graduate Institute www.howardbloom.net www.bigbangtango.net Founder: International Paleopsychology Project; founding board member: Epic of Evolution Society; founding board member, The Darwin Project; founder: The Big Bang Tango Media Lab; member: New York Academy of Sciences, American Association for the Advancement of Science, American Psychological Society, Academy of Political Science, Human Behavior and Evolution Society, International Society for Human Ethology; advisory board member: Institute for Accelerating Change ; executive editor -- New Paradigm book series. For information on The International Paleopsychology Project, see: www.paleopsych.org for two chapters from The Lucifer Principle: A Scientific Expedition Into the Forces of History, see www.howardbloom.net/lucifer For information on Global Brain: The Evolution of Mass Mind from the Big Bang to the 21st Century, see www.howardbloom.net -------------- next part -------------- An HTML attachment was scrubbed... URL: From HowlBloom at aol.com Sun Jul 3 14:17:53 2005 From: HowlBloom at aol.com (HowlBloom at aol.com) Date: Sun, 3 Jul 2005 10:17:53 EDT Subject: [Paleopsych] when the body runs riot Message-ID: <111.4d660222.2ff94d91@aol.com> The body has good reasons for giving us inflammations, or so we've been told by evolutionary biology. Inflammation can quarantine micro-attackers and help heal our wounds. But if the swollen redness of inflammation is so useful, why does the body have built-in mechanisms to keep it under control? Those mechanisms are, according to the article below, "epoxyeicosatrienoic acids (EETs)". These acids keep a good thing from happening. They rein in inflammation. Could the answer be something that the work of paleopsych member Neil Greenberg taught me a long time ago? In moderate doses the body's own internally-concocted remedies are good. In overdoses they can be poisons. Stress hormones are examples. In swift, sharp jolts, they are pick-me-ups, attention, strength, and energy boosters extremely useful in fast but vicious fights or when it's time to turn tail and skedaddle, escape. But in chronic doses, doses that go on and on and on and on from day to day and week to week, those same stress hormones, those quick-hit tonics, are poisons. Does the body need inflammation inhibitors like epoxyeicosatrienoic acids to make sure that it doesn't get too much of a good thing? Are these inhibitors part of the same sort of checks and balances that Sherringtonian nerves use when they are finely tuned by excitatory signals and a counterbalance of inhibitory signals? Are they like the extensor and tensor muscles, the upper bicep balanced against the muscle that faces it on the underside, the muscle below your arm from your shoulder to your elbow? Snip the bottom muscle and your hand will fly up to your shoulder and stay there, stuck in place by the unchecked enthusiasm of the muscle on top. Cut the top muscle, and your arm will look like an unbending stick. Your elbow and forearm will be permanently locked in position. And are the acids that inhibit inflammation a bit like the part of the brain that does the most to civilize us, the most to make us "human" -- the pre-frontal cortex?
You'd think that the human part of the brain would be there to throw us into hyper-gear and turbocharge, giving us the mental warp engines we need to break the consciousness barrier and rocket into thought. But, no. The prefrontal cortex does the opposite. It's a brake, a drag-chute, an inhibitor. Without the "human" part of the brain, that three pound lump of pink stuff in our head would apparently do too much, not too little. It takes restraint to make us human. When Aristotle said that life is really a balancing act between extremes, he may have gotten it far more right than he ever imagined. Howard Retrieved July 3, 2005, from the World Wide Web http://www.sciencenews.org/articles/20050702/fob2.asp Running Interference: Fresh approach to fighting inflammation Nathan Seppa The more scientists learn about inflammation, the less they like it. Although this bodily process speeds wound healing and corrals microbes, it can also do plenty of harm, as seen in people with arthritis, asthma, and a host of other ailments. Unfortunately, today's anti-inflammatory drugs pose their own problems. They cause stomach distress in many people, and some drugs seem to hike the risk of heart attacks. So, the search for a safe inflammation fighter goes on. Bruce D. Hammock, a biochemist at the University of California, Davis, and his colleagues now report that two experimental drugs shield lab mice from extreme inflammation. The findings appear in an upcoming Proceedings of the National Academy of Sciences. Earlier research had suggested that a troublesome enzyme, called soluble epoxide hydrolase, degrades natural inflammation inhibitors known as epoxyeicosatrienoic acids (EETs). ---------- Howard Bloom Author of The Lucifer Principle: A Scientific Expedition Into the Forces of History and Global Brain: The Evolution of Mass Mind From The Big Bang to the 21st Century Recent Visiting Scholar-Graduate Psychology Department, New York University; Core Faculty Member, The Graduate Institute www.howardbloom.net www.bigbangtango.net Founder: International Paleopsychology Project; founding board member: Epic of Evolution Society; founding board member, The Darwin Project; founder: The Big Bang Tango Media Lab; member: New York Academy of Sciences, American Association for the Advancement of Science, American Psychological Society, Academy of Political Science, Human Behavior and Evolution Society, International Society for Human Ethology; advisory board member: Institute for Accelerating Change ; executive editor -- New Paradigm book series. For information on The International Paleopsychology Project, see: www.paleopsych.org for two chapters from The Lucifer Principle: A Scientific Expedition Into the Forces of History, see www.howardbloom.net/lucifer For information on Global Brain: The Evolution of Mass Mind from the Big Bang to the 21st Century, see www.howardbloom.net -------------- next part -------------- An HTML attachment was scrubbed... URL: From christian.rauh at uconn.edu Sun Jul 3 14:39:34 2005 From: christian.rauh at uconn.edu (Christian Rauh) Date: Sun, 03 Jul 2005 10:39:34 -0400 Subject: [Paleopsych] when the body runs riot In-Reply-To: <111.4d660222.2ff94d91@aol.com> References: <111.4d660222.2ff94d91@aol.com> Message-ID: <42C7F8A6.90301@uconn.edu> Robust and stable dynamic systems need control mechanisms for up and down regulation to avoid positive feedback loops that spiral out of the safety margins.
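A minimal numerical sketch of that point, in Python -- my own toy illustration, with the update rule and every number chosen purely for clarity (nothing here comes from the Hammock study or from Howard's post): a response that feeds back on itself positively keeps climbing unless a down-regulating term, whose pull grows with the response itself, holds it near a set point.

def simulate(steps=50, drive=1.0, gain=0.2, inhibition=0.0):
    # x grows by a constant drive plus positive feedback (gain * x),
    # minus a negative-feedback term proportional to the current level.
    x = 0.0
    for _ in range(steps):
        x = x + drive + gain * x - inhibition * x
    return x

# No inhibition: the positive loop runs away.
print("unchecked after 50 steps: %.0f" % simulate(inhibition=0.0))
# Inhibition stronger than the gain: the same drive settles near drive / (inhibition - gain).
print("regulated after 50 steps: %.1f" % simulate(inhibition=0.5))

Switch the inhibition term off and the value explodes; switch it on and the identical drive levels out -- the same logic as a stress hormone checked by a clearance pathway, or an extensor checked by its opposing muscle.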
Christian HowlBloom at aol.com wrote: > The body has good reasons for giving us inflammations, or so we?ve been > told by evolutionary biology. Inflammation can quarantine > micro-attackers and help heal our wounds. But if the swollen redness of > inflammation is so useful, why does the body have built in mechanisms to > keep it under control? > > > > Those mechanisms are, according to the article below, > ?epoxyeicosatrienoic acids (EETs)?. These acids keep a good thing from > happening. They rein in inflammation. > > > > Could the answer be something that the work of paleopsych member Neil > Greenberg taught me a long time ago? In moderate doses things the > body?s own internally-concocted remedies are good. In overdoses they > can be poisons. Stress hormones are examples. In swift, sharp jolts, > they are pick-me-ups, attention, strength, and energy boosters extremely > useful in fast but vicious fights or when it?s time to turn tail and > skedaddle, escape. But in chronic doses, doses that go on and on and on > and on from day to day and week to week, those same stress hormones, > those quick-hit tonics, are poisons. > > > > Does the body need inflammation inhibitors like epoxyeicosatrienoic > acids to make sure that it doesn?t get too much of a good thing? Are > these inhibitors part of the same sort of checks and balances that > Sherringtonian nerves use when they are finely tuned by an excitatory > signals and a counterbalance of inhibitory signals? Are they like the > extensor and tensor muscles, the upper bicep balanced against the muscle > that faces it on the underside, the muscle below your arm from your > shoulder to your elbow? Snip the bottom muscle and your hand will fly > up to your shoulder and stay there, stuck in place by the unchecked > enthusiasm of the muscle on top. Cut the top muscle, and your arm will > look like an unbending stick. Your elbow and forearm will be > permanently locked in position. > > > > And are the acids that inhibit inflammation a bit like the part of the > brain that does the most to civilize us, the most to make us ?human??the > pre-frontal cortex? You?d think that the human part of the brain would > be there to throw us into hyper-gear and turbocharge, giving us the > mental warp engines we need to break the consciousness barrier and > rocket into thought. But, no. The prefrontal cortex does the > opposite. It?s a brake, a drag-chute, an inhibitor. Without the > ?human? part of the brain, that three pound lump of pink stuff in our > head would apparently do too much, not too little. It takes restraint > to make us human. > > > > When Aristotle said that life is really a balancing act between > extremes, he may have gotten it far more right than he ever imagined. > Howard > > > > Retrieved July 3, 2005, from the World Wide Web > > http://www.sciencenews.org/articles/20050702/fob2.asp > > > > Running Interference: Fresh approach to fighting inflammation > > > > Nathan Seppa > > > > The more scientists learn about inflammation, the less they like it. > Although this bodily process speeds wound healing and corrals microbes, > it can also do plenty of harm, as seen in people with arthritis, asthma, > and a host of other ailments. Unfortunately, today's anti-inflammatory > drugs pose their own problems. They cause stomach distress in many > people, and some drugs seem to hike the risk of heart attacks. So, the > search for a safe inflammation fighter goes on. > > > > Bruce D. 
Hammock, a biochemist at the University of California, Davis, > and his colleagues now report that two experimental drugs shield lab > mice from extreme inflammation. The findings appear in an upcoming > Proceedings of the National Academy of Sciences. > > > > Earlier research had suggested that a troublesome enzyme, called soluble > epoxide hydrolase, degrades natural inflammation inhibitors known as > epoxyeicosatrienoic acids (EETs). > > > ---------- > Howard Bloom > Author of The Lucifer Principle: A Scientific Expedition Into the Forces > of History and Global Brain: The Evolution of Mass Mind From The Big > Bang to the 21st Century > Recent Visiting Scholar-Graduate Psychology Department, New York > University; Core Faculty Member, The Graduate Institute > www.howardbloom.net > www.bigbangtango.net > Founder: International Paleopsychology Project; founding board member: > Epic of Evolution Society; founding board member, The Darwin Project; > founder: The Big Bang Tango Media Lab; member: New York Academy of > Sciences, American Association for the Advancement of Science, American > Psychological Society, Academy of Political Science, Human Behavior and > Evolution Society, International Society for Human Ethology; advisory > board member: Institute for Accelerating Change ; executive editor -- > New Paradigm book series. > For information on The International Paleopsychology Project, see: > www.paleopsych.org > for two chapters from > The Lucifer Principle: A Scientific Expedition Into the Forces of > History, see www.howardbloom.net/lucifer > For information on Global Brain: The Evolution of Mass Mind from the Big > Bang to the 21st Century, see www.howardbloom.net > > > ------------------------------------------------------------------------ > > _______________________________________________ > paleopsych mailing list > paleopsych at paleopsych.org > http://lists.paleopsych.org/mailman/listinfo/paleopsych -- ????????????????????????????????????????????????????????????????????? ~ I G N O R A N C E ~ The trouble with ignorance is precisely that if a person lacks virtue and knowledge, he's perfectly satisfied with the way he is. If a person isn't aware of a lack, he can not desire the thing which he isn't aware of lacking. Symposium (204a), Plato _____________________________________________________________________ ????????????????????????????????????????????????????????????????????? From shovland at mindspring.com Sun Jul 3 14:41:00 2005 From: shovland at mindspring.com (Steve Hovland) Date: Sun, 3 Jul 2005 07:41:00 -0700 Subject: [Paleopsych] when the body runs riot Message-ID: <01C57FA2.8FB1F080.shovland@mindspring.com> Nicholas Perricone MD has a lot to say about the inflammation-disease connection, and lots of ideas for dealing with it. Steve Hovland www.stevehovland.net -----Original Message----- From: HowlBloom at aol.com [SMTP:HowlBloom at aol.com] Sent: Sunday, July 03, 2005 7:18 AM To: paleopsych at paleopsych.org Subject: [Paleopsych] when the body runs riot << File: ATT00000.txt; charset = UTF-8 >> << File: ATT00001.html; charset = UTF-8 >> << File: ATT00002.txt >> From checker at panix.com Sun Jul 3 15:02:18 2005 From: checker at panix.com (Premise Checker) Date: Sun, 3 Jul 2005 11:02:18 -0400 (EDT) Subject: [Paleopsych] Science: What Don't We Know? (125th anniversary issue) Message-ID: What Don't We Know? -- Kennedy and Norman 309 (5731): 75 -- Science http://www.sciencemag.org/cgi/content/summary/sci;309/5731/75 et seq. [All articles included. Read carefully. 
I'd like to know if there will be a different answer to the question, "How much can we boost IQ and scholastic achievement?" Actually, there was very little here touching upon the social sciences or social issues.] Introduction to special issue What Don't We Know? Donald Kennedy and Colin Norman At Science, we tend to get excited about new discoveries that lift the veil a little on how things work, from cells to the universe. That puts our focus firmly on what has been added to our stock of knowledge. For this anniversary issue, we decided to shift our frame of reference, to look instead at what we don't know: the scientific puzzles that are driving basic scientific research. We began by asking Science's Senior Editorial Board, our Board of Reviewing Editors, and our own editors and writers to suggest questions that point to critical knowledge gaps. The ground rules: Scientists should have a good shot at answering the questions over the next 25 years, or they should at least know how to go about answering them. We intended simply to choose 25 of these suggestions and turn them into a survey of the big questions facing science. But when a group of editors and writers sat down to select those big questions, we quickly realized that 25 simply wouldn't convey the grand sweep of cutting-edge research that lies behind the responses we received. So we have ended up with 125 questions, a fitting number for Science's 125th anniversary. First, a note on what this special issue is not: It is not a survey of the big societal challenges that science can help solve, nor is it a forecast of what science might achieve. Think of it instead as a survey of our scientific ignorance, a broad swath of questions that scientists themselves are asking. As Tom Siegfried puts it in his introductory essay, they are "opportunities to be exploited." We selected 25 of the 125 questions to highlight based on several criteria: how fundamental they are, how broad-ranging, and whether their solutions will impact other scientific disciplines. Some have few immediate practical implications--the composition of the universe, for example. Others we chose because the answers will have enormous societal impact--whether an effective HIV vaccine is feasible, or how much the carbon dioxide we are pumping into the atmosphere will warm our planet, for example. Some, such as the nature of dark energy, have come to prominence only recently; others, such as the mechanism behind limb regeneration in amphibians, have intrigued scientists for more than a century. We listed the 25 highlighted questions in no special order, but we did group the 100 additional questions roughly by discipline. Our sister online publications are also devoting special issues to Science's 125th anniversary. The Science of Aging Knowledge Environment, SAGE KE (www.sageke.org), is surveying several big questions confronting researchers on aging. The Signal Transduction Knowledge Environment, STKE (www.stke.org), has selected classic Science articles that have had a high impact in the field of cell signaling and is highlighting them in an editorial guide. And Science's Next Wave (www.nextwave.org) is looking at the careers of scientists grappling with some of the questions Science has identified. We are acutely aware that even 125 unknowns encompass only a partial answer to the question that heads this special section: What Don't We Know?
So we invite you to participate in a special forum on Science's Web site (www.sciencemag.org/sciext/eletters/125th), in which you can comment on our 125 questions or nominate topics we missed--and we apologize if they are the very questions you are working on. -------------- How Hot Will the Greenhouse World Be? Richard A. Kerr Scientists know that the world has warmed lately, and they believe humankind is behind most of that warming. But how far might we push the planet in coming decades and centuries? That depends on just how sensitively the climate system--air, oceans, ice, land, and life--responds to the greenhouse gases we're pumping into the atmosphere. For a quarter-century, expert opinion was vague about climate sensitivity. Experts allowed that climate might be quite touchy, warming sharply when shoved by one climate driver or another, such as the carbon dioxide from fossil fuel burning, volcanic debris, or dimming of the sun. On the other hand, the same experts conceded that climate might be relatively unresponsive, warming only modestly despite a hard push toward the warm side. The problem with climate sensitivity is that you can't just go out and directly measure it. Sooner or later a climate model must enter the picture. Every model has its own sensitivity, but each is subject to all the uncertainties inherent in building a hugely simplified facsimile of the real-world climate system. As a result, climate scientists have long quoted the same vague range for sensitivity: A doubling of the greenhouse gas carbon dioxide, which is expected to occur this century, would eventually warm the world between a modest 1.5°C and a whopping 4.5°C. This range--based on just two early climate models--first appeared in 1979 and has been quoted by every major climate assessment since. [Figure 1: A harbinger? Coffins being lined up during the record-breaking 2003 heat wave in Europe.] Researchers are finally beginning to tighten up the range of possible sensitivities, at least at one end. For one, the sensitivities of the available models (5% to 95% confidence range) are now falling within the canonical range of 1.5°C to 4.5°C; some had gone considerably beyond the high end. And the first try at a new approach--running a single model while varying a number of model parameters such as cloud behavior--has produced a sensitivity range of 2.4°C to 5.4°C, with a most probable value of 3.2°C. Models are only models, however. How much better if nature ran the experiment? Enter paleoclimatologists, who sort out how climate drivers such as greenhouse gases have varied naturally in the distant past and how the climate system of the time responded. Nature, of course, has never run the perfect analog for the coming greenhouse warming. And estimating how much carbon dioxide concentrations fell during the depths of the last ice age or how much sunlight debris from the eruption of Mount Pinatubo in the Philippines blocked will always have lingering uncertainties. But paleoclimate estimates of climate sensitivity generally fall in the canonical range, with a best estimate in the region of 3°C. The lower end at least of likely climate sensitivity does seem to be firming up; it's not likely below 1.5°C, say researchers. That would rule out the negligible warmings proposed by some greenhouse contrarians. But climate sensitivity calculations still put a fuzzy boundary on the high end.
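[Editorial aside, not part of the Science article: one common way to read a sensitivity figure is through the standard assumption that equilibrium warming scales with the logarithm of the CO2 concentration, so a sensitivity S per doubling implies delta-T = S x log2(C/C0). The short Python sketch below applies that assumed relation to illustrative concentrations; neither the relation as stated nor the numbers come from the article.]

import math

def equilibrium_warming(sensitivity_per_doubling, c0_ppm, c_ppm):
    # Warming (degrees C) for a CO2 rise from c0_ppm to c_ppm, assuming the
    # logarithmic relation delta_T = S * log2(C / C0).
    return sensitivity_per_doubling * math.log2(c_ppm / c0_ppm)

preindustrial = 280.0  # ppm, approximate pre-industrial CO2 (assumed for illustration)
for s in (1.5, 3.0, 4.5):  # the canonical range quoted above, degrees C per doubling
    at_450 = equilibrium_warming(s, preindustrial, 450.0)
    at_560 = equilibrium_warming(s, preindustrial, 560.0)  # a doubling of CO2
    print(f"S = {s:.1f}: {at_450:.1f} C at 450 ppm, {at_560:.1f} C at doubled CO2")

[At exactly a doubling the warming equals the sensitivity by definition; the 450 ppm case simply shows how the same assumed relation interpolates to intermediate concentrations.]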
Studies drawing on the past century's observed climate change plus estimates of natural and anthropogenic climate drivers yield up to 30% probabilities of sensitivities above 4.5°C, ranging as high as 9°C. The latest study that varies model parameters allows sensitivities up to 11°C, with the authors contending that they can't yet say what the chances of such extremes are. Others are pointing to times of extreme warmth in the geologic past that climate models fail to replicate, suggesting that there's a dangerous element to the climate system that the models do not yet contain. Climate researchers have their work cut out for them. They must inject a better understanding of clouds and aerosols--the biggest sources of uncertainty--into their modeling. Ten or 15 years ago, scientists said that would take 10 or 15 years; there's no sign of it happening anytime soon. They must increase the fidelity of models, a realistic goal given the continued acceleration of affordable computing power. And they must retrieve more and better records of past climate changes and their drivers. Meanwhile, unless a rapid shift away from fossil fuel use occurs worldwide, a doubling of carbon dioxide--and more--will be inevitable. _________________________________________________________________ What Can Replace Cheap Oil--and When? Richard A. Kerr and Robert F. Service The road from old to new energy sources can be bumpy, but the transitions have gone pretty smoothly in the past. After millennia of dependence on wood, society added coal and gravity-driven water to the energy mix. Industrialization took off. Oil arrived, and transportation by land and air soared, with hardly a worry about where the next log or lump of coal was coming from, or what the explosive growth in energy production might be doing to the world. Times have changed. The price of oil has been climbing, and ice is melting around both poles as the mercury in the global thermometer rises. Whether the next big energy transition will be as smooth as past ones will depend in large part on three sets of questions: When will world oil production peak? How sensitive is Earth's climate to the carbon dioxide we are pouring into the atmosphere by burning fossil fuels? And will alternative energy sources be available at reasonable costs? The answers rest on science and technology, but how society responds will be firmly in the realm of politics. There is little disagreement that the world will soon be running short of oil. The debate is over how soon. Global demand for oil has been rising at 1% or 2% each year, and we are now sucking almost 1000 barrels of oil from the ground every second. Pessimists--mostly former oil company geologists--expect oil production to peak very soon. They point to American geologist M. King Hubbert's successful 1956 prediction of the 1970 peak in U.S. production. Using the same method involving records of past production and discoveries, they predict a world oil peak by the end of the decade. Optimists--mostly resource economists--argue that oil production depends more on economics and politics than on how much happens to be in the ground. Technological innovation will intervene, and production will continue to rise, they say. Even so, midcentury is about as far as anyone is willing to push the peak. That's still "soon" considering that the United States, for one, will need to begin replacing oil's 40% contribution to its energy consumption by then.
And as concerns about climate change intensify, the transition to nonfossil fuels could become even more urgent (see p. 100). If oil supplies do peak soon or climate concerns prompt a major shift away from fossil fuels, plenty of alternative energy supplies are waiting in the wings. The sun bathes Earth's surface with 86,000 trillion watts, or terawatts, of energy at all times, about 6600 times the amount used by all humans on the planet each year. Wind, biomass, and nuclear power are also plentiful. And there is no shortage of opportunities for using energy more efficiently. Of course, alternative energy sources have their issues. Nuclear fission supporters have never found a noncontroversial solution for disposing of long-lived radioactive wastes, and concerns over liability and capital costs are scaring utility companies off. Renewable energy sources are diffuse, making it difficult and expensive to corral enough power from them at cheap prices. So far, wind is leading the way with a global installed capacity of more than 40 billion watts, or gigawatts, providing electricity for about 4.5 cents per kilowatt hour. That sounds good, but the scale of renewable energy is still very small when compared to fossil fuel use. In the United States, renewables account for just 6% of overall energy production. And, with global energy demand expected to grow from approximately 13 terawatts now to somewhere between 30 and 60 terawatts by the middle of this century, use of renewables will have to expand enormously to displace current sources and have a significant impact on the world's future energy needs. What needs to happen for that to take place? Using energy more efficiently is likely to be the sine qua non of energy planning--not least to buy time for efficiency improvements in alternative energy. The cost of solar electric power modules has already dropped two orders of magnitude over the last 30 years. And most experts figure the price needs to drop 100-fold again before solar energy systems will be widely adopted. Advances in nanotechnology may help by providing novel semiconductor systems to boost the efficiency of solar energy collectors and perhaps produce chemical fuels directly from sunlight, CO2, and water. But whether these will come in time to avoid an energy crunch depends in part on how high a priority we give energy research and development. And it will require a global political consensus on what the science is telling us. _________________________________________________________________ Will Malthus Continue to Be Wrong? Erik Stokstad In 1798, a 32-year-old curate at a small parish church in Albury, England, published a sobering pamphlet entitled An Essay on the Principle of Population. As a grim rebuttal of the utopian philosophers of his day, Thomas Malthus argued that human populations will always tend to grow and, eventually, they will always be checked--either by foresight, such as birth control, or as a result of famine, war, or disease. Those speculations have inspired many a dire warning from environmentalists. Since Malthus's time, world population has risen sixfold to more than 6 billion. Yet happily, apocalyptic collapses have mostly been prevented by the advent of cheap energy, the rise of science and technology, and the green revolution. Most demographers predict that by 2100, global population will level off at about 10 billion. The urgent question is whether current standards of living can be sustained while improving the plight of those in need.
Consumption of resources--not just food but also water, fossil fuels, timber, and other essentials--has grown enormously in the developed world. In addition, humans have compounded the direct threats to those resources in many ways, including by changing climate (see p. 100), polluting land and water, and spreading invasive species. How can humans live sustainably on the planet and do so in a way that manages to preserve some biodiversity? Tackling that question involves a broad range of research for natural and social scientists. It's abundantly clear, for example, that humans are degrading many ecosystems and hindering their ability to provide clean water and other "goods and services" (Science, 1 April, p. 41). But exactly how bad is the situation? Researchers need better information on the status and trends of wetlands, forests, and other areas. To set priorities, they'd also like a better understanding of what makes ecosystems more resistant or vulnerable and whether stressed ecosystems, such as marine fisheries, have a threshold at which they won't recover. [Figure 1: Out of balance. Sustaining a growing world population is threatened by inefficient consumption of resources--and by poverty.] Agronomists face the task of feeding 4 billion more mouths. Yields may be maxing out in the developed world, but much can still be done in the developing world, particularly sub-Saharan Africa, which desperately needs more nitrogen. Although agricultural biotechnology clearly has potential to boost yields and lessen the environmental impact of farming, it has its own risks, and winning over skeptics has proven difficult. There's no shortage of work for social scientists either. Perverse subsidies that encourage overuse of resources--tax loopholes for luxury Hummers and other inefficient vehicles, for example--remain a chronic problem. A new area of activity is the attempt to place values on ecosystems' services, so that the price of clear-cut lumber, for instance, covers the loss of a forest's ability to provide clean water. Incorporating those "externalities" into pricing is a daunting challenge that demands much more knowledge of ecosystems. In addition, economic decisions often consider only net present value and discount the future value of resources--soil erosion, slash-and-burn agriculture, and the mining of groundwater for cities and farming are prime examples. All this complicates the process of transforming industries so that they provide jobs, goods, and services while damaging the environment less. Researchers must also grapple with the changing demographics of housing and how it will impact human well-being: In the next 35 to 50 years, the number of people living in cities will double. Much of the growth will likely happen in the developing world in cities that currently have 30,000 to 3 million residents. Coping with that huge urban influx will require everything from energy-efficient ways to make concrete to simple ways to purify drinking water. And in an age of global television and relentless advertising, what will happen to patterns of consumption? The world clearly can't support 10 billion people living like Americans do today. Whether science--both the natural and social sciences--and technology can crank up efficiency and solve the problems we've created is perhaps the most critical question the world faces. Mustering the political will to make hard choices is, however, likely to be an even bigger challenge.
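[Editorial aside, not from the article: the "level off at about 10 billion" projection mentioned above is the kind of behavior a simple logistic curve produces. The toy Python sketch below assumes a logistic model with an illustrative carrying capacity and growth rate; real demographic projections are built from age-structured fertility and mortality data, not from this equation.]

def logistic_step(pop, growth_rate, capacity, dt=1.0):
    # One Euler step of dP/dt = r * P * (1 - P / K): near-exponential growth
    # when P is small, leveling off as P approaches the carrying capacity K.
    return pop + growth_rate * pop * (1.0 - pop / capacity) * dt

pop = 6.4e9   # rough world population around 2005
K = 10.0e9    # illustrative carrying capacity (the ~10 billion plateau cited above)
r = 0.03      # illustrative intrinsic growth rate per year (assumed)
for year in range(2005, 2106, 20):
    print(f"{year}: {pop / 1e9:.2f} billion")
    for _ in range(20):
        pop = logistic_step(pop, r, K)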
_________________________________________________________________ In Praise of Hard Questions Tom Siegfried* Great cases, as U.S. Supreme Court Justice Oliver Wendell Holmes suggested a century ago, may make bad law. But great questions often make very good science. Unsolved mysteries provide science with motivation and direction. Gaps in the road to scientific knowledge are not potholes to be avoided, but opportunities to be exploited. "Fundamental questions are guideposts; they stimulate people," says 2004 Nobel physics laureate David Gross. "One of the most creative qualities a research scientist can have is the ability to ask the right questions." Science's greatest advances occur on the frontiers, at the interface between ignorance and knowledge, where the most profound questions are posed. There's no better way to assess the current condition of science than listing the questions that science cannot answer. "Science," Gross declares, "is shaped by ignorance." There have been times, though, when some believed that science had paved over all the gaps, ending the age of ignorance. When Science was born, in 1880, James Clerk Maxwell had died just the year before, after successfully explaining light, electricity, magnetism, and heat. Along with gravity, which Newton had mastered 2 centuries earlier, physics was, to myopic eyes, essentially finished. Darwin, meanwhile, had established the guiding principle of biology, and Mendeleyev's periodic table--only a decade old--allowed chemistry to publish its foundations on a poster board. Maxwell himself mentioned that many physicists believed the trend in their field was merely to measure the values of physical constants "to another place of decimals." Nevertheless, great questions raged. Savants of science debated not only the power of natural selection, but also the origin of the solar system, the age and internal structure of Earth, and the prospect of a plurality of worlds populating the cosmos. In fact, at the time of Maxwell's death, his theory of electromagnetic fields was not yet widely accepted or even well known; experts still argued about whether electricity and magnetism propagated their effects via "action at a distance," as gravity (supposedly) did, or by Michael Faraday's "lines of force" (incorporated by Maxwell into his fields). Lurking behind that dispute was the deeper issue of whether gravity could be unified with electromagnetism (Maxwell thought not), a question that remains one of the greatest in science today, in a somewhat more complicated form. Maxwell knew full well that his accomplishments left questions unanswered. His calculations regarding the internal motion of molecules did not agree with measurements of specific heats, for instance. "Something essential to the complete state of the physical theory of molecular encounters must have hitherto escaped us," he commented. When Science turned 20--at the 19th century's end--Maxwell's mentor William Thomson (Lord Kelvin) articulated the two grand gaps in knowledge of the day. (He called them "clouds" hanging over physicists' heads.) One was the mystery of specific heats that Maxwell had identified; the other was the failure to detect the ether, a medium seemingly required by Maxwell's electromagnetic waves. Filling those two gaps in knowledge required the 20th century's quantum and relativity revolutions. The ignorance enveloped in Kelvin's clouds was the impetus for science's revitalization.
Throughout the last century, pursuing answers to great questions reshaped human understanding of the physical and living world. Debates over the plurality of worlds assumed galactic proportions, specifically addressing whether Earth's home galaxy, the Milky Way, was only one of many such conglomerations of stars. That issue was soon resolved in favor of the Milky Way's nonexclusive status, in much the same manner that Earth itself had been demoted from its central role in the cosmos by Copernicus centuries before. But the existence of galaxies outside our own posed another question, about the apparent motions of those galaxies away from one another. That issue echoed a curious report in Science's first issue about a set of stars forming a triangular pattern, with a double star at the apex and two others forming the base. Precise observations showed the stars to be moving apart, making the triangle bigger but maintaining its form. "It seems probable that all these stars are slowly moving away from one common point, so that many years back they were all very much closer to one another," Science reported, as though the four stars had all begun their journey from the same place. Understanding such motion was a question "of the highest interest." A half a century later, Edwin Hubble enlarged that question from one about stellar motion to the origin and history of the universe itself. He showed that galaxies also appeared to be receding from a common starting point, evidence that the universe was expanding. With Hubble's discovery, cosmology's grand questions began to morph from the philosophical to the empirical. And with the discovery of the cosmic microwave background in the 1960s, the big bang theory of the universe's birth assumed the starring role on the cosmological stage--providing cosmologists with one big answer and many new questions. By Science's centennial, a quarter-century ago, many gaps still remained in knowledge of the cosmos; some of them have since been filled, while others linger. At that time debate continued over the existence of planets around faraway stars, a question now settled with the discovery of dozens of planets in the solar system's galactic neighborhood. But now a bigger question looms beyond the scope of planets or even galaxies: the prospect of multiple universes, cousins to the bubble of time and space that humans occupy. And not only may the human universe not be alone (defying the old definition of universe), humans may not be alone in their own space, either. The possible existence of life elsewhere in the cosmos remains as great a gap as any in present-day knowledge. And it is enmeshed with the equally deep mystery of life's origin on Earth. Life, of course, inspires many deep questions, from the prospects for immortality to the prognosis for eliminating disease. Scientists continue to wonder whether they will ever be able to create new life forms from scratch, or at least simulate life's self-assembling capabilities. Biologists, physicists, mathematicians, and computer scientists have begun cooperating on a sophisticated "systems biology" aimed at understanding how the countless molecular interactions at the heart of life fit together in the workings of cells, organs, and whole animals. And if successful, the systems approach should help doctors tailor treatments to individual variations in DNA, permitting personalized medicine that deters disease without inflicting side effects. 
Before Science turns 150, revamped versions of modern medicine may make it possible for humans to live that long, too. As Science and science age, knowledge and ignorance have coevolved, and the nature of the great questions sometimes changes. Old questions about the age and structure of the Earth, for instance, have given way to issues concerning the planet's capacity to support a growing and aging population. Some great questions get bigger over time, encompassing an ever-expanding universe, or become more profound, such as the quest to understand consciousness. On the other hand, many deep questions drive science to smaller scales, more minute than the realm of atoms and molecules, or to a greater depth of detail underlying broad-brush answers to past big questions. In 1880, some scientists remained unconvinced by Maxwell's evidence for atoms. Today, the analogous debate focuses on superstrings as the ultimate bits of matter, on a scale a trillion trillion times smaller. Old arguments over evolution and natural selection have descended to debates on the dynamics of speciation, or how particular behaviors, such as altruistic cooperation, have emerged from the laws of individual competition. Great questions themselves evolve, of course, because their answers spawn new and better questions in turn. The solutions to Kelvin's clouds--relativity and quantum physics--generated many of the mysteries on today's list, from the composition of the cosmos to the prospect for quantum computers. Ultimately, great questions like these both define the state of scientific knowledge and drive the engines of scientific discovery. Where ignorance and knowledge converge, where the known confronts the unknown, is where scientific progress is most dramatically made. "Thoroughly conscious ignorance," wrote Maxwell, "is the prelude to every real advance in science." So when science runs out of questions, it would seem, science will come to an end. But there's no real danger of that. The highway from ignorance to knowledge runs both ways: As knowledge accumulates, diminishing the ignorance of the past, new questions arise, expanding the areas of ignorance to explore. Maxwell knew that even an era of precision measurements is not a sign of science's end but preparation for the opening of new frontiers. In every branch of science, Maxwell declared, "the labor of careful measurement has been rewarded by the discovery of new fields of research and by the development of new scientific ideas." If science's progress seems to slow, it's because its questions get increasingly difficult, not because there will be no new questions left to answer. Fortunately, hard questions also can make great science, just as Justice Holmes noted that hard cases, like great cases, made bad law. Bad law resulted, he said, because emotional concerns about celebrated cases exerted pressures that distorted well-established legal principles. And that's why the situation in science is the opposite of that in law. The pressures of the great, hard questions bend and even break well-established principles, which is what makes science forever self-renewing--and which is what demolishes the nonsensical notion that science's job will ever be done. __________________________________________ Tom Siegfried is the author of Strange Matters and The Bit and the Pendulum. _________________________________________________________________ What Is the Universe Made Of? 
Charles Seife Every once in a while, cosmologists are dragged, kicking and screaming, into a universe much more unsettling than they had any reason to expect. In the 1500s and 1600s, Copernicus, Kepler, and Newton showed that Earth is just one of many planets orbiting one of many stars, destroying the comfortable Medieval notion of a closed and tiny cosmos. In the 1920s, Edwin Hubble showed that our universe is constantly expanding and evolving, a finding that eventually shattered the idea that the universe is unchanging and eternal. And in the past few decades, cosmologists have discovered that the ordinary matter that makes up stars and galaxies and people is less than 5% of everything there is. Grappling with this new understanding of the cosmos, scientists face one overriding question: What is the universe made of? This question arises from years of progressively stranger observations. In the 1960s, astronomers discovered that galaxies spun around too fast for the collective pull of the stars' gravity to keep them from flying apart. Something unseen appears to be keeping the stars from flinging themselves away from the center: unilluminated matter that exerts extra gravitational force. This is dark matter. Over the years, scientists have spotted some of this dark matter in space; they have seen ghostly clouds of gas with x-ray telescopes, watched the twinkle of distant stars as invisible clumps of matter pass in front of them, and measured the distortion of space and time caused by invisible mass in galaxies. And thanks to observations of the abundances of elements in primordial gas clouds, physicists have concluded that only 10% of ordinary matter is visible to telescopes. [Figure 1: In the dark. Dark matter holds galaxies together; supernovae measurements point to a mysterious dark energy.] But even multiplying all the visible "ordinary" matter by 10 doesn't come close to accounting for how the universe is structured. When astronomers look up in the heavens with powerful telescopes, they see a lumpy cosmos. Galaxies don't dot the skies uniformly; they cluster together in thin tendrils and filaments that twine among vast voids. Just as there isn't enough visible matter to keep galaxies spinning at the right speed, there isn't enough ordinary matter to account for this lumpiness. Cosmologists now conclude that the gravitational forces exerted by another form of dark matter, made of an as-yet-undiscovered type of particle, must be sculpting these vast cosmic structures. They estimate that this exotic dark matter makes up about 25% of the stuff in the universe--five times as much as ordinary matter. But even this mysterious entity pales by comparison to another mystery: dark energy. In the late 1990s, scientists examining distant supernovae discovered that the universe is expanding faster and faster, instead of slowing down as the laws of physics would imply. Is there some sort of antigravity force blowing the universe up? All signs point to yes. Independent measurements of a variety of phenomena--cosmic background radiation, element abundances, galaxy clustering, gravitational lensing, gas cloud properties--all converge on a consistent, but bizarre, picture of the cosmos. Ordinary matter and exotic, unknown particles together make up only about 30% of the stuff in the universe; the rest is this mysterious anti-gravity force known as dark energy. This means that figuring out what the universe is made of will require answers to three increasingly difficult sets of questions.
What is ordinary dark matter made of, and where does it reside? Astrophysical observations, such as those that measure the bending of light by massive objects in space, are already yielding the answer. What is exotic dark matter? Scientists have some ideas, and with luck, a dark-matter trap buried deep underground or a high-energy atom smasher will discover a new type of particle within the next decade. And finally, what is dark energy? This question, which wouldn't even have been asked a decade ago, seems to transcend known physics more than any other phenomenon yet observed. Ever-better measurements of supernovae and cosmic background radiation as well as planned observations of gravitational lensing will yield information about dark energy's "equation of state"--essentially a measure of how squishy the substance is. But at the moment, the nature of dark energy is arguably the murkiest question in physics--and the one that, when answered, may shed the most light. _________________________________________________________________ So Much More to Know ... From the nature of the cosmos to the nature of societies, the following 100 questions span the sciences. Some are pieces of questions discussed above; others are big questions in their own right. Some will drive scientific inquiry for the next century; others may soon be answered. Many will undoubtedly spawn new questions. Is ours the only universe? A number of quantum theorists and cosmologists are trying to figure out whether our universe is part of a bigger "multiverse." But others suspect that this hard-to-test idea may be a question for philosophers. What drove cosmic inflation? In the first moments after the big bang, the universe blew up at an incredible rate. But what did the blowing? Measurements of the cosmic microwave background and other astrophysical observations are narrowing the possibilities. When and how did the first stars and galaxies form? The broad brush strokes are visible, but the fine details aren't. Data from satellites and ground-based telescopes may soon help pinpoint, among other particulars, when the first generation of stars burned off the hydrogen "fog" that filled the universe. Where do ultrahigh-energy cosmic rays come from? Above a certain energy, cosmic rays don't travel very far before being destroyed. So why are cosmic-ray hunters spotting such rays with no obvious source within our galaxy? What powers quasars? The mightiest energy fountains in the universe probably get their power from matter plunging into whirling supermassive black holes. But the details of what drives their jets remain anybody's guess. What is the nature of black holes? Relativistic mass crammed into a quantum-sized object? It's a recipe for disaster--and scientists are still trying to figure out the ingredients. Why is there more matter than antimatter? To a particle physicist, matter and antimatter are almost the same. Some subtle difference must explain why matter is common and antimatter rare. Does the proton decay? In a theory of everything, quarks (which make up protons) should somehow be convertible to leptons (such as electrons)--so catching a proton decaying into something else might reveal new laws of particle physics. What is the nature of gravity? It clashes with quantum theory. It doesn't fit in the Standard Model. Nobody has spotted the particle that is responsible for it. Newton's apple contained a whole can of worms. Why is time different from other dimensions?
It took millennia for scientists to realize that time is a dimension, like the three spatial dimensions, and that time and space are inextricably linked. The equations make sense, but they don't satisfy those who ask why we perceive a "now" or why time seems to flow the way it does. Are there smaller building blocks than quarks? Atoms were "uncuttable." Then scientists discovered protons, neutrons, and other subatomic particles--which were, in turn, shown to be made up of quarks and gluons. Is there something more fundamental still? Are neutrinos their own antiparticles? Nobody knows this basic fact about neutrinos, although a number of underground experiments are under way. Answering this question may be a crucial step to understanding the origin of matter in the universe. Is there a unified theory explaining all correlated electron systems? High-temperature superconductors and materials with giant and colossal magnetoresistance are all governed by the collective rather than individual behavior of electrons. There is currently no common framework for understanding them. What is the most powerful laser researchers can build? Theorists say an intense enough laser field would rip photons into electron-positron pairs, dousing the beam. But no one knows whether it's possible to reach that point. Can researchers make a perfect optical lens? They've done it with microwaves but never with visible light. Is it possible to create magnetic semiconductors that work at room temperature? Such devices have been demonstrated at low temperatures but not yet in a range warm enough for spintronics applications. What is the pairing mechanism behind high-temperature superconductivity? Electrons in superconductors surf together in pairs. After 2 decades of intense study, no one knows what holds them together in the complex, high-temperature materials. Can we develop a general theory of the dynamics of turbulent flows and the motion of granular materials? So far, such "nonequilibrium systems" defy the tool kit of statistical mechanics, and the failure leaves a gaping hole in physics. Are there stable high-atomic-number elements? A superheavy element with 184 neutrons and 114 protons should be relatively stable, if physicists can create it. Is superfluidity possible in a solid? If so, how? Despite hints in solid helium, nobody is sure whether a crystalline material can flow without resistance. If new types of experiments show that such outlandish behavior is possible, theorists would have to explain how. What is the structure of water? Researchers continue to tussle over how many bonds each H2O molecule makes with its nearest neighbors. What is the nature of the glassy state? Molecules in a glass are arranged much like those in liquids but are more tightly packed. Where and why does liquid end and glass begin? Are there limits to rational chemical synthesis? The larger synthetic molecules get, the harder it is to control their shapes and make enough copies of them to be useful. Chemists will need new tools to keep their creations growing. What is the ultimate efficiency of photovoltaic cells? Conventional solar cells top out at converting 32% of the energy in sunlight to electricity. Can researchers break through the barrier? Will fusion always be the energy source of the future? It's been 35 years away for about 50 years, and unless the international community gets its act together, it'll be 35 years away for many decades to come. What drives the solar magnetic cycle?
Scientists believe differing rates of rotation from place to place on the sun underlie its 22-year sunspot cycle. They just can't make it work in their simulations. Either a detail is askew, or it's back to the drawing board. How do planets form? How bits of dust and ice and gobs of gas came together to form the planets without the sun devouring them all is still unclear. Planetary systems around other stars should provide clues. What causes ice ages? Something about the way the planet tilts, wobbles, and careens around the sun presumably brings on ice ages every 100,000 years or so, but reams of climate records haven't explained exactly how. What causes reversals in Earth's magnetic field? Computer models and laboratory experiments are generating new data on how Earth's magnetic poles might flip-flop. The trick will be matching simulations to enough aspects of the magnetic field beyond the inaccessible core to build a convincing case. Are there earthquake precursors that can lead to useful predictions? Prospects for finding signs of an imminent quake have been waning since the 1970s. Understanding faults will progress, but routine prediction would require an as-yet-unimagined breakthrough. Is there--or was there--life elsewhere in the solar system? The search for life--past or present--on other planetary bodies now drives NASA's planetary exploration program, which focuses on Mars, where water abounded when life might have first arisen. What is the origin of homochirality in nature? Most biomolecules can be synthesized in mirror-image shapes. Yet in organisms, amino acids are always left-handed, and sugars are always right-handed. The origins of this preference remain a mystery. Can we predict how proteins will fold? Out of a near infinitude of possible ways to fold, a protein picks one in just tens of microseconds. The same task takes 30 years of computer time. How many proteins are there in humans? It has been hard enough counting genes. Proteins can be spliced in different ways and decorated with numerous functional groups, all of which makes counting their numbers impossible for now. How do proteins find their partners? Protein-protein interactions are at the heart of life. To understand how partners come together in precise orientations in seconds, researchers need to know more about the cell's biochemistry and structural organization. How many forms of cell death are there? In the 1970s, apoptosis was finally recognized as distinct from necrosis. Some biologists now argue that the cell death story is even more complicated. Identifying new ways cells die could lead to better treatments for cancer and degenerative diseases. What keeps intracellular traffic running smoothly? Membranes inside cells transport key nutrients around, and through, various cell compartments without sticking to each other or losing their way. Insights into how membranes stay on track could help conquer diseases, such as cystic fibrosis. What enables cellular components to copy themselves independent of DNA? Centrosomes, which help pull apart paired chromosomes, and other organelles replicate on their own time, without DNA's guidance. This independence still defies explanation. What roles do different forms of RNA play in genome function? RNA is turning out to play a dizzying assortment of roles, from potentially passing genetic information to offspring to muting gene expression. Scientists are scrambling to decipher this versatile molecule. What role do telomeres and centromeres play in genome function? 
These chromosome features will remain mysteries until new technologies can sequence them. Why are some genomes really big and others quite compact? The puffer fish genome is 400 million bases; one lungfish's is 133 billion bases long. Repetitive and duplicated DNA don't explain why this and other size differences exist. What is all that "junk" doing in our genomes? DNA between genes is proving important for genome function and the evolution of new species. Comparative sequencing, microarray studies, and lab work are helping genomicists find a multitude of genetic gems amid the junk. How much will new technologies lower the cost of sequencing? New tools and conceptual breakthroughs are driving the cost of DNA sequencing down by orders of magnitude. The reductions are enabling research from personalized medicine to evolutionary biology to thrive. How do organs and whole organisms know when to stop growing? A person's right and left legs almost always end up the same length, and the hearts of mice and elephants each fit the proper rib cage. How genes set limits on cell size and number continues to mystify. How can genome changes other than mutations be inherited? Researchers are finding ever more examples of this process, called epigenetics, but they can't explain what causes and preserves the changes. How is asymmetry determined in the embryo? Whirling cilia help an embryo tell its left from its right, but scientists are still looking for the first factors that give a relatively uniform ball of cells a head, tail, front, and back. How do limbs, fins, and faces develop and evolve? The genes that determine the length of a nose or the breadth of a wing are subject to natural and sexual selection. Understanding how selection works could lead to new ideas about the mechanics of evolution with respect to development. What triggers puberty? Nutrition--including that received in utero--seems to help set this mysterious biological clock, but no one knows exactly what forces childhood to end. Are stem cells at the heart of all cancers? The most aggressive cancer cells look a lot like stem cells. If cancers are caused by stem cells gone awry, studies of a cell's "stemness" may lead to tools that could catch tumors sooner and destroy them more effectively. Is cancer susceptible to immune control? Although our immune responses can suppress tumor growth, tumor cells can combat those responses with counter-measures. This defense can stymie researchers hoping to develop immune therapies against cancer. Can cancers be controlled rather than cured? Drugs that cut off a tumor's fuel supplies--say, by stopping blood-vessel growth--can safely check or even reverse tumor growth. But how long the drugs remain effective is still unknown. Is inflammation a major factor in all chronic diseases? It's a driver of arthritis, but cancer and heart disease? More and more, the answer seems to be yes, and the question remains why and how. How do prion diseases work? Even if one accepts that prions are just misfolded proteins, many mysteries remain. How can they go from the gut to the brain, and how do they kill cells once there, for example. How much do vertebrates depend on the innate immune system to fight infection? This system predates the vertebrate adaptive immune response. Its relative importance is unclear, but immunologists are working to find out. Does immunologic memory require chronic exposure to antigens? Yes, say a few prominent thinkers, but experiments with mice now challenge the theory. 
Putting the debate to rest would require proving that something is not there, so the question likely will not go away. Why doesn't a pregnant woman reject her fetus? Recent evidence suggests that the mother's immune system doesn't "realize" that the fetus is foreign even though it gets half its genes from the father. Yet just as Nobelist Peter Medawar said when he first raised this question in 1952, "the verdict has yet to be returned." What synchronizes an organism's circadian clocks? Circadian clock genes have popped up in all types of creatures and in many parts of the body. Now the challenge is figuring out how all the gears fit together and what keeps the clocks set to the same time. How do migrating organisms find their way? Birds, butterflies, and whales make annual journeys of thousands of kilometers. They rely on cues such as stars and magnetic fields, but the details remain unclear. Why do we sleep? A sound slumber may refresh muscles and organs or keep animals safe from dangers lurking in the dark. But the real secret of sleep probably resides in the brain, which is anything but still while we're snoring away. Why do we dream? Freud thought dreaming provides an outlet for our unconscious desires. Now, neuroscientists suspect that brain activity during REM sleep--when dreams occur--is crucial for learning. Is the experience of dreaming just a side effect? Why are there critical periods for language learning? Monitoring brain activity in young children--including infants--may shed light on why children pick up languages with ease while adults often struggle to learn train station basics in a foreign tongue. Do pheromones influence human behavior? Many animals use airborne chemicals to communicate, particularly when mating. Controversial studies have hinted that humans too use pheromones. Identifying them will be key to assessing their sway on our social lives. How do general anesthetics work? Scientists are chipping away at the drugs' effects on individual neurons, but understanding how they render us unconscious will be a tougher nut to crack. What causes schizophrenia? Researchers are trying to track down genes involved in this disorder. Clues may also come from research on traits schizophrenics share with normal people. What causes autism? Many genes probably contribute to this baffling disorder, as well as unknown environmental factors. A biomarker for early diagnosis would help improve existing therapy, but a cure is a distant hope. To what extent can we stave off Alzheimer's? A 5- to 10-year delay in this late-onset disease would improve old age for millions. Researchers are determining whether treatments with hormones or antioxidants, or mental and physical exercise, will help. What is the biological basis of addiction? Addiction involves the disruption of the brain's reward circuitry. But personality traits such as impulsivity and sensation-seeking also play a part in this complex behavior. Is morality hardwired into the brain? That question has long puzzled philosophers; now some neuroscientists think brain imaging will reveal circuits involved in reasoning. What are the limits of learning by machines? Computers can already beat the world's best chess players, and they have a wealth of information on the Web to draw on. But abstract reasoning is still beyond any machine. How much of personality is genetic? Aspects of personality are influenced by genes; environment modifies the genetic effects. The relative contributions remain under debate. 
What is the biological root of sexual orientation? Much of the "environmental" contribution to homosexuality may occur before birth in the form of prenatal hormones, so answering this question will require more than just the hunt for "gay genes." Will there ever be a tree of life that systematists can agree on? Despite better morphological, molecular, and statistical methods, researchers' trees don't agree. Expect greater, but not complete, consensus. How many species are there on Earth? Count all the stars in the sky? Impossible. Count all the species on Earth? Ditto. But the biodiversity crisis demands that we try. What is a species? A "simple" concept that's been muddied by evolutionary data; a clear definition may be a long time in coming. Why does lateral transfer occur in so many species and how? Once considered rare, gene swapping, particularly among microbes, is proving quite common. But why and how genes are so mobile--and the effect on fitness--remains to be determined. Who was LUCA (the last universal common ancestor)? Ideas about the origin of the 1.5-billion-year-old "mother" of all complex organisms abound. The continued discovery of primitive microbes, along with comparative genomics, should help resolve life's deep past. How did flowers evolve? Darwin called this question an "abominable mystery." Flowers arose in the cycads and conifers, but the details of their evolution remain obscure. How do plants make cell walls? Cellulose and pectin walls surround cells, keeping water in and supporting tall trees. The biochemistry holds the secrets to turning its biomass into fuel. How is plant growth controlled? Redwoods grow to be hundreds of meters tall, Arctic willows barely 10 centimeters. Understanding the difference could lead to higher-yielding crops. Why aren't all plants immune to all diseases? Plants can mount a general immune response, but they also maintain molecular snipers that take out specific pathogens. Plant pathologists are asking why different species, even closely related ones, have different sets of defenders. The answer could result in hardier crops. What is the basis of variation in stress tolerance in plants? We need crops that better withstand drought, cold, and other stresses. But there are so many genes involved, in complex interactions, that no one has yet figured out which ones work how. What caused mass extinctions? A huge impact did in the dinosaurs, but the search for other catastrophic triggers of extinction has had no luck so far. If more subtle or stealthy culprits are to blame, they will take considerably longer to find. Can we prevent extinction? Finding cost-effective and politically feasible ways to save many endangered species requires creative thinking. Why were some dinosaurs so large? Dinosaurs reached almost unimaginable sizes, some in less than 20 years. But how did the long-necked sauropods, for instance, eat enough to pack on up to 100 tons without denuding their world? How will ecosystems respond to global warming? To anticipate the effects of the intensifying greenhouse, climate modelers will have to focus on regional changes and ecologists on the right combination of environmental changes. How many kinds of humans coexisted in the recent past, and how did they relate? The new dwarf human species fossil from Indonesia suggests that at least four kinds of humans thrived in the past 100,000 years. Better dates and additional material will help confirm or revise this picture. What gave rise to modern human behavior? 
Did Homo sapiens acquire abstract thought, language, and art gradually or in a cultural "big bang," which in Europe occurred about 40,000 years ago? Data from Africa, where our species arose, may hold the key to the answer. What are the roots of human culture? No animal comes close to having humans' ability to build on previous discoveries and pass the improvements on. What determines those differences could help us understand how human culture evolved. What are the evolutionary roots of language and music? Neuroscientists exploring how we speak and make music are just beginning to find clues as to how these prized abilities arose. What are human races, and how did they develop? Anthropologists have long argued that race lacks biological reality. But our genetic makeup does vary with geographic origin and as such raises political and ethical as well as scientific questions. Why do some countries grow and others stagnate? From Norway to Nigeria, living standards across countries vary enormously, and they're not becoming more equal. What impact do large government deficits have on a country's interest rates and economic growth rate? The United States could provide a test case. Are political and economic freedom closely tied? China may provide one answer. Why has poverty increased and life expectancy declined in sub-Saharan Africa? Almost all efforts to reduce poverty in sub-Saharan Africa have failed. Figuring out what will work is crucial to alleviating massive human suffering. The following six mathematics questions are drawn from a list of seven outstanding problems selected by the Clay Mathematics Institute. (The seventh problem is discussed on p. 96.) For more details, go to www.claymath.org/millennium. Is there a simple test for determining whether an elliptic curve has an infinite number of rational solutions? Equations of the form y^2 = x^3 + ax + b are powerful mathematical tools. The Birch and Swinnerton-Dyer conjecture tells how to determine how many solutions they have in the realm of rational numbers--information that could solve a host of problems, if the conjecture is true. Can a Hodge cycle be written as a sum of algebraic cycles? Two useful mathematical structures arose independently in geometry and in abstract algebra. The Hodge conjecture posits a surprising link between them, but the bridge remains to be built. Will mathematicians unleash the power of the Navier-Stokes equations? First written down in the 1840s, the equations hold the keys to understanding both smooth and turbulent flow. To harness them, though, theorists must find out exactly when they work and under what conditions they break down. Does Poincaré's test identify spheres in four-dimensional space? You can tie a string around a doughnut, but it will slide right off a sphere. The mathematical principle behind that observation can reliably spot every spherelike object in 3D space. Henri Poincaré conjectured that it should also work in the next dimension up, but no one has proved it yet. Do mathematically interesting zero-value solutions of the Riemann zeta function all have the form a + bi? Don't sweat the details. Since the mid-19th century, the "Riemann hypothesis" has been the monster catfish in mathematicians' pond. If true, it will give them a wealth of information about the distribution of prime numbers and other long-standing mysteries. Does the Standard Model of particle physics rest on solid mathematical foundations?
For almost 50 years, the model has rested on "quantum Yang-Mills theory," which links the behavior of particles to structures found in geometry. The theory is breathtakingly elegant and useful--but no one has proved that it's sound. _________________________________________________________________ What Is the Biological Basis of Consciousness? Greg Miller For centuries, debating the nature of consciousness was the exclusive purview of philosophers. But if the recent torrent of books on the topic is any indication, a shift has taken place: Scientists are getting into the game. Has the nature of consciousness finally shifted from a philosophical question to a scientific one that can be solved by doing experiments? The answer, as with any related to this topic, depends on whom you ask. But scientific interest in this slippery, age-old question seems to be gathering momentum. So far, however, although theories abound, hard data are sparse. The discourse on consciousness has been hugely influenced by René Descartes, the French philosopher who in the mid-17th century declared that body and mind are made of different stuff entirely. It must be so, Descartes concluded, because the body exists in both time and space, whereas the mind has no spatial dimension. Recent scientifically oriented accounts of consciousness generally reject Descartes's solution; most prefer to treat body and mind as different aspects of the same thing. In this view, consciousness emerges from the properties and organization of neurons in the brain. But how? And how can scientists, with their devotion to objective observation and measurement, gain access to the inherently private and subjective realm of consciousness? Some insights have come from examining neurological patients whose injuries have altered their consciousness. Damage to certain evolutionarily ancient structures in the brainstem robs people of consciousness entirely, leaving them in a coma or a persistent vegetative state. Although these regions may be a master switch for consciousness, they are unlikely to be its sole source. Different aspects of consciousness are probably generated in different brain regions. Damage to visual areas of the cerebral cortex, for example, can produce strange deficits limited to visual awareness. One extensively studied patient, known as D.F., is unable to identify shapes or determine the orientation of a thin slot in a vertical disk. Yet when asked to pick up a card and slide it through the slot, she does so easily. At some level, D.F. must know the orientation of the slot to be able to do this, but she seems not to know she knows.
At the same time, researchers hunt for neurons that track the monkey's perception, in hopes that these neurons will lead them to the neural systems involved in conscious visual awareness and ultimately to an explanation of how a particular pattern of photons hitting the retina produces the experience of seeing, say, a rose. Experiments under way at present generally address only pieces of the consciousness puzzle, and very few directly address the most enigmatic aspect of the conscious human mind: the sense of self. Yet the experimental work has begun, and if the results don't provide a blinding insight into how consciousness arises from tangles of neurons, they should at least refine the next round of questions. Ultimately, scientists would like to understand not just the biological basis of consciousness but also why it exists. What selection pressure led to its development, and how many of our fellow creatures share it? Some researchers suspect that consciousness is not unique to humans, but of course much depends on how the term is defined. Biological markers for consciousness might help settle the matter and shed light on how consciousness develops early in life. Such markers could also inform medical decisions about loved ones who are in an unresponsive state. Until fairly recently, tackling the subject of consciousness was a dubious career move for any scientist without tenure (and perhaps a Nobel Prize already in the bag). Fortunately, more young researchers are now joining the fray. The unanswered questions should keep them--and the printing presses--busy for many years to come. _________________________________________________________________ Why Do Humans Have So Few Genes? Elizabeth Pennisi When leading biologists were unraveling the sequence of the human genome in the late 1990s, they ran a pool on the number of genes contained in the 3 billion base pairs that make up our DNA. Few bets came close. The conventional wisdom a decade or so ago was that we need about 100,000 genes to carry out the myriad cellular processes that keep us functioning. But it turns out that we have only about 25,000 genes--about the same number as a tiny flowering plant called Arabidopsis and barely more than the worm Caenorhabditis elegans. That big surprise reinforced a growing realization among geneticists: Our genomes and those of other mammals are far more flexible and complicated than they once seemed. The old notion of one gene/one protein has gone by the board: It is now clear that many genes can make more than one protein. Regulatory proteins, RNA, noncoding bits of DNA, even chemical and structural alterations of the genome itself control how, where, and when genes are expressed. Figuring out how all these elements work together to choreograph gene expression is one of the central challenges facing biologists. In the past few years, it has become clear that a phenomenon called alternative splicing is one reason human genomes can produce such complexity with so few genes. Human genes contain both coding DNA--exons--and noncoding DNA. In some genes, different combinations of exons can become active at different times, and each combination yields a different protein. Alternative splicing was long considered a rare hiccup during transcription, but researchers have concluded that it may occur in half--some say close to all--of our genes. That finding goes a long way toward explaining how so few genes can produce hundreds of thousands of different proteins. 
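To make the combinatorics of alternative splicing concrete, here is a minimal Python sketch; the gene and exon names are invented for illustration and do not come from the article. If a transcript must pick one exon from each of a few mutually exclusive clusters, the number of possible mRNAs is the product of the cluster sizes. The fly gene Dscam, often cited as the extreme case, can in principle generate tens of thousands of isoforms this way.

from itertools import product

# Hypothetical exon clusters for an imaginary gene; each mature transcript
# chooses exactly one exon from each cluster.
clusters = {
    "cluster_A": ["A1", "A2", "A3"],
    "cluster_B": ["B1", "B2"],
    "cluster_C": ["C1", "C2", "C3", "C4"],
    "cluster_D": ["D1", "D2"],
}

isoforms = list(product(*clusters.values()))
print(len(isoforms))   # 3 * 2 * 4 * 2 = 48 distinct mRNAs from a single gene
print(isoforms[0])     # e.g. ('A1', 'B1', 'C1', 'D1')

Even this toy gene yields 48 transcripts; scale the clusters up modestly and a genome of roughly 25,000 genes can plausibly encode hundreds of thousands of proteins.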
But how the transcription machinery decides which parts of a gene to read at any particular time is still largely a mystery. The same could be said for the mechanisms that determine which genes or suites of genes are turned on or off at particular times and places. Researchers are discovering that each gene needs a supporting cast of hundreds to get its job done. They include proteins that shut down or activate a gene, for example by adding acetyl or methyl groups to the DNA. Other proteins, called transcription factors, interact with the genes more directly: They bind to landing sites situated near the gene under their control. As with alternative splicing, activation of different combinations of landing sites makes possible exquisite control of gene expression, but researchers have yet to figure out exactly how all these regulatory elements really work or how they fit in with alternative splicing. [Figure: Approximate number of genes.] In the past decade or so, researchers have also come to appreciate the key roles played by chromatin proteins and RNA in regulating gene expression. Chromatin proteins are essentially the packaging for DNA, holding chromosomes in well-defined spirals. By slightly changing shape, chromatin may expose different genes to the transcription machinery. Genes also dance to the tune of RNA. Small RNA molecules, many less than 30 bases, now share the limelight with other gene regulators. Many researchers who once focused on messenger RNA and other relatively large RNA molecules have in the past 5 years turned their attention to these smaller cousins, including microRNA and small nuclear RNA. Surprisingly, RNAs in these various guises shut down and otherwise alter gene expression. They also are key to cell differentiation in developing organisms, but the mechanisms are not fully understood. Researchers have made enormous strides in pinpointing these various mechanisms. By matching up genomes from organisms on different branches on the evolutionary tree, genomicists are locating regulatory regions and gaining insights into how mechanisms such as alternative splicing evolved. These studies, in turn, should shed light on how these regions work. Experiments in mice, such as adding or deleting regulatory regions and manipulating RNA, and computer models should also help. But the central question is likely to remain unsolved for a long time: How do all these features meld together to make us whole? _________________________________________________________________ To What Extent Are Genetic Variation and Personal Health Linked? Jennifer Couzin Forty years ago, doctors learned why some patients who received the anesthetic succinylcholine awoke normally but remained temporarily paralyzed and unable to breathe: They shared an inherited quirk that slowed their metabolism of the drug. Later, scientists traced sluggish succinylcholine metabolism to a particular gene variant. Roughly 1 in 3500 people carry two deleterious copies, putting them at high risk of this distressing side effect. The solution to the succinylcholine mystery was among the first links drawn between genetic variation and an individual's response to drugs. Since then, a small but growing number of differences in drug metabolism have been linked to genetics, helping explain why some patients benefit from a particular drug, some gain nothing, and others suffer toxic side effects. The same sort of variation, it is now clear, plays a key role in individual risks of coming down with a variety of diseases.
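A quick way to see what a figure like the "1 in 3500" in the succinylcholine example above implies is a back-of-the-envelope Hardy-Weinberg calculation. The sketch below assumes a single recessive variant and random mating, assumptions the article does not state; it is meant only to show that rare affected individuals can coexist with fairly common carriers.

import math

# Hardy-Weinberg sketch (illustrative assumptions, not from the article):
# q^2 = frequency of people with two deleterious copies,
# q   = frequency of the deleterious allele,
# 2pq = frequency of single-copy carriers.
affected = 1 / 3500
q = math.sqrt(affected)   # ~0.017
p = 1 - q
carriers = 2 * p * q      # ~0.033

print(f"allele frequency:  ~{q:.3f}")
print(f"carrier frequency: ~{carriers:.3f} (about 1 person in {1 / carriers:.0f})")

Under these assumptions, roughly 1 person in 30 carries a single copy of the variant, which is part of why such pharmacogenetic quirks keep turning up.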
Gene variants have been linked to elevated risks for disorders from Alzheimer's disease to breast cancer, and they may help explain why, for example, some smokers develop lung cancer whereas many others don't. These developments have led to hopes--and some hype--that we are on the verge of an era of personalized medicine, one in which genetic tests will determine disease risks and guide prevention strategies and therapies. But digging up the DNA responsible--if in fact DNA is responsible--and converting that knowledge into gene tests that doctors can use remains a formidable challenge. Many conditions, including various cancers, heart attacks, lupus, and depression, likely arise when a particular mix of genes collides with something in the environment, such as nicotine or a fatty diet. These multigene interactions are subtler and knottier than the single gene drivers of diseases such as hemophilia and cystic fibrosis; spotting them calls for statistical inspiration and rigorous experiments repeated again and again to guard against introducing unproven gene tests into the clinic. And determining treatment strategies will be no less complex: Last summer, for example, a team of scientists linked 124 different genes to resistance to four leukemia drugs. But identifying gene networks like these is only the beginning. One of the toughest tasks is replicating these studies--an especially difficult proposition in diseases that are not overwhelmingly heritable, such as asthma, or ones that affect fairly small patient cohorts, such as certain childhood cancers. Many clinical trials do not routinely collect DNA from volunteers, making it sometimes difficult for scientists to correlate disease or drug response with genes. Gene microarrays, which measure expression of dozens of genes at once, can be fickle and supply inconsistent results. Gene studies can also be prohibitively costly. Nonetheless, genetic dissection of some diseases--such as cancer, asthma, and heart disease--is galloping ahead. Progress in other areas, such as psychiatric disorders, is slower. Severely depressed or schizophrenic patients could benefit enormously from tests that reveal which drug and dose will help them the most, but unlike asthma, drug response can be difficult to quantify biologically, making gene-drug relations tougher to pin down. As DNA sequence becomes more available and technologies improve, the genetic patterns that govern health will likely come into sharper relief. Genetic tools still under construction, such as a haplotype map that will be used to discern genetic variation behind common diseases, could further accelerate the search for disease genes. The next step will be designing DNA tests to guide clinical decision-making--and using them. If history is any guide, integrating such tests into standard practice will take time. In emergencies--a heart attack, an acute cancer, or an asthma attack--such tests will be valuable only if they rapidly deliver results. Ultimately, comprehensive personalized medicine will come only if pharmaceutical companies want it to--and it will take enormous investments in research and development. Many companies worry that testing for genetic differences will narrow their market and squelch their profits. Still, researchers continue to identify new opportunities. 
In May, the Icelandic company deCODE Genetics reported that an experimental asthma drug that pharmaceutical giant Bayer had abandoned appeared to decrease the risk of heart attack in more than 170 patients who carried particular gene variants. The drug targets the protein produced by one of those genes. The finding is likely to be just a foretaste of the many surprises in store, as the braids binding DNA, drugs, and disease are slowly unwound. _________________________________________________________________ Can the Laws of Physics Be Unified? Charles Seife At its best, physics eliminates complexity by revealing underlying simplicity. Maxwell's equations, for example, describe all the confusing and diverse phenomena of classical electricity and magnetism by means of four simple rules. These equations are beautiful; they have an eerie symmetry, mirroring one another in an intricate dance of symbols. The four together feel as elegant, as whole, and as complete to a physicist as a Shakespearean sonnet does to a poet. The Standard Model of particle physics is an unfinished poem. Most of the pieces are there, and even unfinished, it is arguably the most brilliant opus in the literature of physics. With great precision, it describes all known matter--all the subatomic particles such as quarks and leptons--as well as the forces by which those particles interact with one another. These forces are electromagnetism, which describes how charged objects feel each other's influence; the weak force, which explains how particles can change their identities; and the strong force, which describes how quarks stick together to form protons and other composite particles. But as lovely as the Standard Model's description is, it is in pieces, and some of those pieces--those that describe gravity--are missing. It is a few shards of beauty that hint at something greater, like a few lines of Sappho on a fragment of papyrus. The beauty of the Standard Model is in its symmetry; mathematicians describe its symmetries with objects known as Lie groups. And a mere glimpse at the Standard Model's Lie group betrays its fragmented nature: SU(3) × SU(2) × U(1). Each of those pieces represents one type of symmetry, but the symmetry of the whole is broken. Each of the forces behaves in a slightly different way, so each is described with a slightly different symmetry. But those differences might be superficial. Electromagnetism and the weak force appear very dissimilar, but in the 1960s physicists showed that at high temperatures, the two forces "unify." It becomes apparent that electromagnetism and the weak force are really the same thing, just as it becomes obvious that ice and liquid water are the same substance if you warm them up together. This connection led physicists to hope that the strong force could also be unified with the other two forces, yielding one large theory described by a single symmetry such as SU(5). A unified theory should have observable consequences. For example, if the strong force truly is the same as the electroweak force, then protons might not be truly stable; once in a long while, they should decay spontaneously. Despite many searches, nobody has spotted a proton decay, nor has anyone sighted any particles predicted by some symmetry-enhancing modifications to the Standard Model, such as supersymmetry. Worse yet, even such a unified theory can't be complete--as long as it ignores gravity. [Figure: Fundamental forces. A theory that ties all four forces together is still lacking.]
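For readers who want the symmetry statement spelled out, the group structure mentioned above is conventionally written as follows; this is a standard textbook rendering rather than a quotation from the article, with SU(5) shown as one historically popular candidate for the unifying group.

\[
G_{\mathrm{SM}} = SU(3)_C \times SU(2)_L \times U(1)_Y
\]
\[
SU(3)_C \times SU(2)_L \times U(1)_Y \;\subset\; SU(5) \qquad \text{(Georgi-Glashow grand unification)}
\]
\[
SU(2)_L \times U(1)_Y \;\longrightarrow\; U(1)_{\mathrm{EM}} \qquad \text{(electroweak symmetry breaking at low energy)}
\]

The last line is the flip side of the "unification at high temperature" described above: cool the universe down and the single electroweak symmetry separates into the distinct electromagnetic and weak forces we observe.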
Gravity is a troublesome force. The theory that describes it, general relativity, assumes that space and time are smooth and continuous, whereas the underlying quantum physics that governs subatomic particles and forces is inherently discontinuous and jumpy. Gravity clashes with quantum theory so badly that nobody has come up with a convincing way to build a single theory that includes all the particles, the strong and electroweak forces, and gravity all in one big bundle. But physicists do have some leads. Perhaps the most promising is superstring theory. Superstring theory has a large following because it provides a way to unify everything into one large theory with a single symmetry--SO(32) for one branch of superstring theory, for example--but it requires a universe with 10 or 11 dimensions, scads of undetected particles, and a lot of intellectual baggage that might never be verifiable. It may be that there are dozens of unified theories, only one of which is correct, but scientists may never have the means to determine which. Or it may be that the struggle to unify all the forces and particles is a fool's quest. In the meantime, physicists will continue to look for proton decays, as well as search for supersymmetric particles in underground traps and in the Large Hadron Collider (LHC) in Geneva, Switzerland, when it comes online in 2007. Scientists believe that LHC will also reveal the existence of the Higgs boson, a particle intimately related to fundamental symmetries in the model of particle physics. And physicists hope that one day, they will be able to finish the unfinished poem and frame its fearful symmetry. _________________________________________________________________ How Much Can Human Life Span Be Extended? Jennifer Couzin When Jeanne Calment died in a nursing home in southern France in 1997, she was 122 years old, the longest-living human ever documented. But Calment's uncommon status will fade in subsequent decades if the predictions of some biologists and demographers come true. Life-span extension in species from yeast to mice and extrapolation from life expectancy trends in humans have convinced a swath of scientists that humans will routinely coast beyond 100 or 110 years of age. (Today, 1 in 10,000 people in industrialized countries hold centenarian status.) Others say human life span may be far more limited. The elasticity found in other species might not apply to us. Furthermore, testing life-extension treatments in humans may be nearly impossible for practical and ethical reasons. Just 2 or 3 decades ago, research on aging was a backwater. But when molecular biologists began hunting for ways to prolong life, they found that life span was remarkably pliable. Reducing the activity of an insulinlike receptor more than doubles the life span of worms to a startling--for them--6 weeks. Put certain strains of mice on near-starvation but nutrient-rich diets, and they live 50% longer than normal. Some of these effects may not occur in other species. A worm's ability to enter a "dauer" state, which resembles hibernation, may be critical, for example. And shorter-lived species such as worms and fruit flies, whose aging has been delayed the most, may be more susceptible to life-span manipulation. But successful approaches are converging on a few key areas: calorie restriction; reducing levels of insulinlike growth factor 1 (IGF-1), a protein; and preventing oxidative damage to the body's tissues. 
All three might be interconnected, but so far that hasn't been confirmed (although calorie-restricted animals have low levels of IGF-1). Can these strategies help humans live longer? And how do we determine whether they will? Unlike drugs for cancer or heart disease, the benefits of antiaging treatments are fuzzier, making studies difficult to set up and to interpret. Safety is uncertain; calorie restriction reduces fertility in animals, and lab flies bred to live long can't compete with their wild counterparts. Furthermore, garnering results--particularly from younger volunteers, who may be likeliest to benefit because they've aged the least--will take so long that by the time results are in, those who began the study will be dead. That hasn't stopped scientists, some of whom have founded companies, from searching for treatments to slow aging. One intriguing question is whether calorie restriction works in humans. It's being tested in primates, and the National Institute on Aging in Bethesda, Maryland, is funding short-term studies in people. Volunteers in those trials have been on a stringent diet for up to 1 year while researchers monitor their metabolism and other factors that could hint at how they're aging. Insights could also come from genetic studies of centenarians, who may have inherited long life from their parents. Many scientists believe that average human life span has an inherent upper limit, although they don't agree on whether it's 85 or 100 or 150. One abiding question in the antiaging world is what the goal of all this work ought to be. Overwhelmingly, scientists favor treatments that will slow aging and stave off age-related diseases rather than simply extending life at its most decrepit. But even so, slowing aging could have profound social effects, upsetting actuarial tables and retirement plans. Then there's the issue of fairness: If antiaging therapies become available, who will receive them? How much will they cost? Individuals may find they can stretch their life spans. But that may be tougher to achieve for whole populations, although many demographers believe that the average life span will continue to climb as it has consistently for decades. If that happens, much of the increase may come from less dramatic strategies, such as heart disease and cancer prevention, that could also make the end of a long life more bearable. _________________________________________________________________ What Controls Organ Regeneration? R. John Davenport* Unlike automobiles, humans get along pretty well for most of their lives with their original parts. But organs do sometimes fail, and we can't go to the mechanic for an engine rebuild or a new water pump--at least not yet. Medicine has battled back many of the acute threats, such as infection, that curtailed human life in past centuries. Now, chronic illnesses and deteriorating organs pose the biggest drain on human health in industrialized nations, and they will only increase in importance as the population ages. Regenerative medicine--rebuilding organs and tissues--could conceivably be the 21st century equivalent of antibiotics in the 20th. Before that can happen, researchers must understand the signals that control regeneration. Researchers have puzzled for centuries over how body parts replenish themselves. In the mid-1700s, for instance, Swiss researcher Abraham Trembley noted that when chopped into pieces, hydra--tubelike creatures with tentacles that live in fresh water--could grow back into complete, new organisms. 
Other scientists of the era examined the salamander's ability to replace a severed tail. And a century later, Thomas Hunt Morgan scrutinized planaria, flatworms that can regenerate even when whittled into 279 bits. But he decided that regeneration was an intractable problem and forsook planaria in favor of fruit flies. Mainstream biology has followed in Morgan's wake, focusing on animals suitable for studying genetic and embryonic development. But some researchers have pressed on with studies of regeneration superstars, and they've devised innovative strategies to tackle the genetics of these organisms. These efforts and investigations of new regeneration models--such as zebrafish and special mouse lines--are beginning to reveal the forces that guide regeneration and those that prevent it. Animals exploit three principal strategies to regenerate organs. First, working organ cells that normally don't divide can multiply and grow to replenish lost tissue, as occurs in injured salamander hearts. Second, specialized cells can undo their training--a process known as dedifferentiation--and assume a more pliable form that can replicate and later respecialize to reconstruct a missing part. Salamanders and newts take this approach to heal and rebuild a severed limb, as do zebrafish to mend clipped fins. Finally, pools of stem cells can step in to perform required renovations. Planaria tap into this resource when reconstructing themselves. [Figure: Self-repair. Newts reprogram their cells to reconstruct a severed limb.] Humans already plug into these mechanisms to some degree. For instance, after surgical removal of part of a liver, healing signals tell remaining liver cells to resume growth and division to expand the organ back to its original size. Researchers have found that when properly enticed, some types of specialized human cells can revert to a more nascent state (see p. 85). And stem cells help replenish our blood, skin, and bones. So why do our hearts fill with scar tissue, our lenses cloud, and our brain cells perish? Animals such as salamanders and planaria regenerate tissues by rekindling genetic mechanisms that guide the patterning of body structures during embryonic development. We employ similar pathways to shape our parts as embryos, but over the course of evolution, humans may have lost the ability to tap into them as adults, perhaps because the cell division required for regeneration elevated the likelihood of cancer. And we may have evolved the capacity to heal wounds rapidly to repel infection, even though speeding the pace means more scarring. Regeneration pros such as salamanders heal wounds methodically and produce pristine tissue. Avoiding fibrotic tissue could mean the difference between regenerating and not: Mouse nerves grow vigorously if experimentally severed in a way that prevents scarring, but if a scar forms, nerves wither. Unraveling the mysteries of regeneration will depend on understanding what separates our wound-healing process from that of animals that are able to regenerate. The difference might be subtle: Researchers have identified one strain of mice that seals up ear holes in weeks, whereas typical strains never do. A relatively modest number of genetic differences seems to underlie the effect. Perhaps altering a handful of genes would be enough to turn us into superhealers, too. But if scientists succeed in initiating the process in humans, new questions will emerge. What keeps regenerating cells from running amok?
And what ensures that regenerated parts are the right size and shape, and in the right place and orientation? If researchers can solve these riddles--and it's a big "if"--people might be able to order up replacement parts for themselves, not just their '67 Mustangs. R. John Davenport is an editor of Science's SAGE KE. _________________________________________________________________ How Can a Skin Cell Become a Nerve Cell? Gretchen Vogel Like medieval alchemists who searched for an elixir that could turn base metals into gold, biology's modern alchemists have learned how to use oocytes to turn normal skin cells into valuable stem cells, and even whole animals. Scientists, with practice, have now been able to make nuclear transfer nearly routine to produce cattle, cats, mice, sheep, goats, pigs, and--as a Korean team announced in May--even human embryonic stem (ES) cells. They hope to go still further and turn the stem cells into treatments for previously untreatable diseases. But like the medieval alchemists, today's cloning and stem cell biologists are working largely with processes they don't fully understand: What actually happens inside the oocyte to reprogram the nucleus is still a mystery, and scientists have a lot to learn before they can direct a cell's differentiation as smoothly as nature's program of development does every time a fertilized egg gives rise to the multiple cell types that make up a live baby. Scientists have been investigating the reprogramming powers of the oocyte for half a century. In 1957, developmental biologists first discovered that they could insert the nucleus of adult frog cells into frog eggs and create dozens of genetically identical tadpoles. But in 50 years, the oocyte has yet to give up its secrets. The answers lie deep in cell biology. Somehow, scientists know, the genes that control development--generally turned off in adult cells--get turned back on again by the oocyte, enabling the cell to take on the youthful potential of a newly fertilized egg. Scientists understand relatively little about these on-and-off switches in normal cells, however, let alone the unusual reversal that takes place during nuclear transfer. [Figure: Cellular alchemist. A human oocyte.] As cells differentiate, their DNA becomes more tightly packed, and genes that are no longer needed--or those which should not be expressed--are blocked. The DNA wraps tightly around proteins called histones, and genes are then tagged with methyl groups that prevent the protein-making machinery in the cell from reaching them. Several studies have shown that enzymes that remove those methyl groups are crucial for nuclear transfer to work. But they are far from the only things that are needed. If scientists could uncover the oocyte's secrets, it might be possible to replicate its tricks without using oocytes themselves, a resource that is fairly difficult to obtain and the use of which raises numerous ethical questions. If scientists could come up with a cell-free bath that turned the clock back on already-differentiated cells, the implications could be enormous. Labs could rejuvenate cells from patients and perhaps then grow them into new tissue that could repair parts worn out by old age or disease. But scientists are far from sure if such cell-free alchemy is possible. The egg's very structure, its scaffolding of proteins that guide the chromosomes during cell division, may also play a key role in turning on the necessary genes.
If so, developing an elixir of proteins that can turn back a cell's clock may remain elusive. To really make use of the oocyte's power, scientists still need to learn how to direct the development of the rejuvenated stem cells and guide them into forming specific tissues. Stem cells, especially those from embryos, spontaneously form dozens of cell types, but controlling that development to produce a single type of cell has proved more difficult. Although some teams have managed to produce nearly pure colonies of certain kinds of neural cells from ES cells, no one has managed to concoct a recipe that will direct the cells to become, say, a pure population of dopamine-producing neurons that could replace those missing in Parkinson's disease. Scientists are just beginning to understand how cues interact to guide a cell toward its final destiny. Decades of work in developmental biology have provided a start: Biologists have used mutant frogs, flies, mice, chicks, and fish to identify some of the main genes that control a developing cell's decision to become a bone cell or a muscle cell. But observing what goes wrong when a gene is missing is easier than learning to orchestrate differentiation in a culture dish. Understanding how the roughly 25,000 human genes work together to form tissues--and tweaking the right ones to guide an immature cell's development--will keep researchers occupied for decades. If they succeed, however, the result will be worth far more than its weight in gold. _________________________________________________________________ How Does a Single Somatic Cell Become a Whole Plant? Gretchen Vogel It takes a certain amount of flexibility for a plant to survive and reproduce. It can stretch its roots toward water and its leaves toward sunlight, but it has few options for escaping predators or finding mates. To compensate, many plants have evolved repair mechanisms and reproductive strategies that allow them to produce offspring even without the meeting of sperm and egg. Some can reproduce from outgrowths of stems, roots, and bulbs, but others are even more radical, able to create new embryos from single somatic cells. Most citrus trees, for example, can form embryos from the tissues surrounding the unfertilized gametes--a feat no animal can manage. The house-plant Bryophyllum can sprout embryos from the edges of its leaves, a bit like Athena springing from Zeus's head. Nearly 50 years ago, scientists learned that they could coax carrot cells to undergo such embryogenesis in the lab. Since then, people have used so-called somatic embryogenesis to propagate dozens of species, including coffee, magnolias, mangos, and roses. A Canadian company has planted entire forests of fir trees that started life in tissue culture. But like researchers who clone animals (see p. [37]85), plant scientists understand little about what actually controls the process. The search for answers might shed light on how cells' fates become fixed during development, and how plants manage to retain such flexibility. Scientists aren't even sure which cells are capable of embryogenesis. Although earlier work assumed that all plant cells were equally labile, recent evidence suggests that only a subset of cells can transform into embryos. But what those cells look like before their transformation is a mystery. 
Researchers have videotaped cultures in which embryos develop but found no visual pattern that hints at which cells are about to sprout, and staining for certain patterns of gene expression has been inconclusive. [Figure: Power of one. Orange tree embryos can sprout from a single somatic cell.] Researchers do have a few clues about the molecules that might be involved. In the lab, the herbicide 2,4-dichlorophenoxyacetic acid (sold as weed killer and called 2,4-D) can prompt cells in culture to elongate, build a new cell wall, and start dividing to form embryos. The herbicide is a synthetic analog of the plant hormones called auxins, which control everything from the plant's response to light and gravity to the ripening of fruit. Auxins might also be important in natural somatic embryogenesis: Embryos that sprout on top of veins near the leaf edge are exposed to relatively high levels of auxins. Recent work has also shown that over- or underexpression of certain genes in Arabidopsis plants can prompt embryogenesis in otherwise normal-looking leaf cells. Sorting out sex-free embryogenesis might help scientists understand the cellular switches that plants use to stay flexible while still keeping growth under control. Developmental biologists are keen to learn how those mechanisms compare in plants and animals. Indeed, some of the processes that control somatic embryogenesis may be similar to those that occur during animal cloning or limb regeneration (see p. 84). On a practical level, scientists would like to be able to use lab-propagation techniques on crop plants such as maize that still require normal pollination. That would speed up both breeding of new varieties and the production of hybrid seedlings--a flexibility that farmers and consumers could both appreciate. _________________________________________________________________ How Does Earth's Interior Work? Richard A. Kerr The plate tectonics revolution went only so deep. True, it made wonderful sense of most of the planet's geology. But that's something like understanding the face of Big Ben; there must be a lot more inside to understand about how and why it all works. In the case of Earth, there's another 6300 kilometers of rock and iron beneath the tectonic plates whose churnings constitute the inner workings of a planetary heat engine. Tectonic plates jostling about the surface are like the hands sweeping across the clock face: informative in many ways but largely mute as to what drives them. [Figure: A long way to go. Grasping the workings of plate tectonics will require deeper probing.] Earth scientists inherited a rather simple picture of Earth's interior from their pre-plate tectonics colleagues. Earth was like an onion. Seismic waves passing through the deep Earth suggested that beneath the broken skin of plates lies a 2800-kilometer layer of rocky mantle overlying 3470 kilometers of molten and--at the center--solid iron. The mantle was further subdivided at a depth of 670 kilometers into upper and lower layers, with a hint of a layer a couple of hundred kilometers thick at the bottom of the lower mantle. In the postrevolution era, the onion model continued to loom large. The dominant picture of Earth's inner workings divided the planet at the 670-kilometer depth, forming with the core a three-layer machine.
Above the 670, the mantle churned slowly like a very shallow pot of boiling water, delivering heat and rock at mid-ocean ridges to make new crust and cool the interior and accepting cold sinking slabs of old plate at deep-sea trenches. A plume of hot rock might rise from just above the 670 to form a volcanic hot spot like Hawaii. But no hot rock rose up through the 670 barrier, and no cold rock sank down through it. Alternatively, argued a smaller contingent, the mantle churned from bottom to top like a deep stockpot, with plumes rising all the way from the core-mantle boundary. Forty years of probing inner Earth with ever more sophisticated seismic imaging has boosted the view of the engine's complexity without much calming the debate about how it works. Imaging now clearly shows that the 670 is no absolute barrier. Slabs penetrate the boundary, although with difficulty. Layered-earth advocates have duly dropped their impenetrable boundary to 1000 kilometers or deeper. Or maybe there's a flexible, semipermeable boundary somewhere that limits mixing to only the most insistent slabs or plumes. Now seismic imaging is also outlining two great globs of mantle rock standing beneath Africa and the Pacific like pistons. Researchers disagree whether they are hotter than average and rising under their own buoyancy, denser and sinking, or merely passively being carried upward by adjacent currents. Thin lenses of partially melted rock dot the mantle bottom, perhaps marking the bottom of plumes, or perhaps not. Geochemists reading the entrails of elements and isotopes in mantle-derived rocks find signs of five long-lived "reservoirs" that must have resisted mixing in the mantle for billions of years. But they haven't a clue where in the depths of the mantle those reservoirs might be hiding. How can we disassemble the increasingly complex planetary machine and find what makes it tick? With more of the same, plus a large dose of patience. After all, plate tectonics was more than a half-century in the making, and those revolutionaries had to look little deeper than the sea floor. Seismic imaging will continue to improve as better seismometers are spread more evenly about the globe. Seismic data are already distinguishing between temperature and compositional effects, painting an even more complex picture of mantle structure. Mineral physicists working in the lab will tease out more properties of rock under deep mantle conditions to inform interpretation of the seismic data, although still handicapped by the uncertain details of mantle composition. And modelers will more faithfully simulate the whole machine, drawing on seismics, mineral physics, and subtle geophysical observations such as gravity variations. Another 40 years should do it. _________________________________________________________________ Are We Alone in the Universe? Richard A. Kerr Alone, in all that space? Not likely. Just do the numbers: Several hundred billion stars in our galaxy, hundreds of billions of galaxies in the observable universe, and 150 planets spied already in the immediate neighborhood of the sun. That should make for plenty of warm, scummy little ponds where life could come together to begin billions of years of evolution toward technology-wielding creatures like ourselves. No, the really big question is when, if ever, we'll have the technological wherewithal to reach out and touch such intelligence. With a bit of luck, it could be in the next 25 years. 
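The "just do the numbers" argument sketched above is usually formalized as the Drake equation. The Python snippet below is a minimal sketch; every parameter value is an illustrative guess, none comes from the article, and reasonable choices span many orders of magnitude.

# Drake-equation sketch; all parameter values are illustrative guesses.
R_star = 7        # new stars formed per year in the Milky Way
f_p    = 0.5      # fraction of stars with planetary systems
n_e    = 2        # potentially habitable planets per system
f_l    = 0.3      # fraction of those on which life arises
f_i    = 0.1      # fraction of those that evolve intelligence
f_c    = 0.1      # fraction that produce detectable technology
L      = 10_000   # years a civilization remains detectable

N = R_star * f_p * n_e * f_l * f_i * f_c * L
print(f"communicating civilizations in the galaxy: ~{N:.0f}")   # ~210 with these inputs

Swap in more optimistic inputs and the estimate climbs toward the 10,000 civilizations invoked later in this piece; swap in pessimistic ones and it drops below 1, which is the whole argument for building bigger searches rather than settling the question by arithmetic.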
Workers in the search for extraterrestrial intelligence (SETI) would have needed more than a little luck in the first 45 years of the modern hunt for like-minded colleagues out there. Radio astronomer Frank Drake's landmark Project Ozma was certainly a triumph of hope over daunting odds. In 1960, Drake pointed a 26-meter radio telescope dish in Green Bank, West Virginia, at two stars for a few days each. Given the vacuum-tube technology of the time, he could scan across 0.4 megahertz of the microwave spectrum one channel at a time. Almost 45 years later, the SETI Institute in Mountain View, California, completed its 10-year-long Project Phoenix. Often using the 305-meter antenna at Arecibo, Puerto Rico, Phoenix researchers searched 710 star systems at 28 million channels simultaneously across an 1800-megahertz range. All in all, the Phoenix search was 100 trillion times more effective than Ozma was. Besides stunning advances in search power, the first 45 years of modern SETI have also seen a diversification of search strategies. The Search for Extraterrestrial Radio Emissions from Nearby Developed Intelligent Populations (SERENDIP) has scanned billions of radio sources in the Milky Way by piggybacking receivers on antennas in use by observational astronomers, including Arecibo. And other groups are turning modest-sized optical telescopes to searching for nanosecond flashes from alien lasers. [Figure: Listening for E.T. The SETI Institute is deploying an array of antennas and tying them into a giant "virtual telescope."] Still, nothing has been heard. But then, Phoenix, for example, scanned just one or two nearby sunlike stars out of each 100 million stars out there. For such sparse sampling to work, advanced, broadcasting civilizations would have to be abundant, or searchers would have to get very lucky. To find the needle in a galaxy-size haystack, SETI workers are counting on the consistently exponential growth of computing power to continue for another couple of decades. In northern California, the SETI Institute has already begun constructing an array composed of individual 6-meter antennas. Ever-cheaper computer power will eventually tie 350 such antennas into "virtual telescopes," allowing scientists to search many targets at once. If Moore's law--that the cost of computation halves every 18 months--holds for another 15 years or so, SETI workers plan to use this antenna array approach to check out not a few thousand but perhaps a few million or even tens of millions of stars for alien signals. If there were just 10,000 advanced civilizations in the galaxy, they could well strike pay dirt before Science turns 150. The technology may well be available in coming decades, but SETI will also need money. That's no easy task in a field with as high a "giggle factor" as SETI has. The U.S. Congress forced NASA to wash its hands of SETI in 1993 after some congressmen mocked the whole idea of spending federal money to look for "little green men with misshapen heads," as one of them put it. Searching for another tippy-top branch of the evolutionary tree still isn't part of the NASA vision. For more than a decade, private funding alone has driven SETI. But the SETI Institute's planned $35 million array is only a prototype of the Square Kilometer Array that would put those tens of millions of stars within reach of SETI workers. For that, mainstream radio astronomers will have to be onboard--or we'll be feeling alone in the universe a long time indeed.
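As a sanity check on the Moore's-law reasoning above, the compounding works out as follows; the starting survey capacity is a made-up number used only for illustration.

# Moore's-law compounding sketch (illustrative starting point).
doubling_time_years = 1.5
years = 15
growth = 2 ** (years / doubling_time_years)   # 2**10 = 1024

stars_now = 10_000          # hypothetical number of stars searchable today
print(f"capacity multiplier after {years} years: ~{growth:.0f}x")
print(f"stars searchable then: ~{stars_now * growth:,.0f}")   # ~10 million

A roughly thousandfold gain in 15 years is what turns a survey of a few thousand stars into one of millions or tens of millions, as the text anticipates.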
_________________________________________________________________ How and Where Did Life on Earth Arise? Carl Zimmer* For the past 50 years, scientists have attacked the question of how life began in a pincer movement. Some approach it from the present, moving backward in time from life today to its simpler ancestors. Others march forward from the formation of Earth 4.55 billion years ago, exploring how lifeless chemicals might have become organized into living matter. Working backward, paleontologists have found fossils of microbes dating back at least 3.4 billion years. Chemical analysis of even older rocks suggests that photosynthetic organisms were already well established on Earth by 3.7 billion years ago. Researchers suspect that the organisms that left these traces shared the same basic traits found in all life today. All free-living organisms encode genetic information in DNA and catalyze chemical reactions using proteins. Because DNA and proteins depend so intimately on each other for their survival, it's hard to imagine one of them having evolved first. But it's just as implausible for them to have emerged simultaneously out of a prebiotic soup. Experiments now suggest that earlier forms of life could have been based on a third kind of molecule found in today's organisms: RNA. Once considered nothing more than a cellular courier, RNA turns out to be astonishingly versatile, not only encoding genetic information but also acting like a protein. Some RNA molecules switch genes on and off, for example, whereas others bind to proteins and other molecules. Laboratory experiments suggest that RNA could have replicated itself and carried out the other functions required to keep a primitive cell alive. Only after life passed through this "RNA world," many scientists now agree, did it take on a more familiar cast. Proteins are thousands of times more efficient as catalysts than RNA is, and so once they emerged they would have been favored by natural selection. Likewise, genetic information can be replicated from DNA with far fewer errors than it can from RNA. Other scientists have focused their efforts on figuring out how the lifeless chemistry of a prebiotic Earth could have given rise to an RNA world. In 1953, working at the University of Chicago, Stanley Miller and Harold Urey demonstrated that experiments could shed light on this question. They ran an electric current through a mix of ammonia, methane, and other gases believed at the time to have been present on early Earth. They found that they could produce amino acids and other important building blocks of life. [Figure: Cauldron of life? Deep-sea vents are one proposed site for life's start.] Today, many scientists argue that the early atmosphere was dominated by other gases, such as carbon dioxide. But experiments in recent years have shown that under these conditions, many building blocks of life can be formed. In addition, comets and meteorites may have delivered organic compounds from space. Just where on Earth these building blocks came together as primitive life forms is a subject of debate. Starting in the 1980s, many scientists argued that life got its start in the scalding, mineral-rich waters streaming out of deep-sea hydrothermal vents. Evidence for a hot start included studies on the tree of life, which suggested that the most primitive species of microbes alive today thrive in hot water. But the hot-start hypothesis has cooled off a bit. Recent studies suggest that heat-loving microbes are not living fossils.
Instead, they may have descended from less hardy species and evolved new defenses against heat. Some skeptics also wonder how delicate RNA molecules could have survived in boiling water. No single strong hypothesis has taken the hot start's place, however, although suggestions include tidal pools or oceans covered by glaciers. Research projects now under way may shed more light on how life began. Scientists are running experiments in which RNA-based cells may be able to reproduce and evolve. NASA and the European Space Agency have launched probes that will visit comets, narrowing down the possible ingredients that might have been showered on early Earth. Most exciting of all is the possibility of finding signs of life on Mars. Recent missions to Mars have provided strong evidence that shallow seas of liquid water once existed on the Red Planet--suggesting that Mars might once have been hospitable to life. Future Mars missions will look for signs of life hiding in underground refuges, or fossils of extinct creatures. If life does turn up, the discovery could mean that life arose independently on both planets--suggesting that it is common in the universe--or that it arose on one planet and spread to the other. Perhaps martian microbes were carried to Earth on a meteorite 4 billion years ago, infecting our sterile planet. __________________________________________ Carl Zimmer is the author of Soul Made Flesh: The Discovery of the Brain--and How It Changed the World. _________________________________________________________________ What Determines Species Diversity? Elizabeth Pennisi Countless species of plants, animals, and microbes fill every crack and crevice on land and in the sea. They make the world go 'round, converting sunlight to energy that fuels the rest of life, cycling carbon and nitrogen between inorganic and organic forms, and modifying the landscape. In some places and some groups, hundreds of species exist, whereas in others, very few have evolved; the tropics, for example, are a complex paradise compared to higher latitudes. Biologists are striving to understand why. The interplay between environment and living organisms and between the organisms themselves plays a key role in encouraging or discouraging diversity, as do human disturbances, predator-prey relationships, and other food web connections. But exactly how these and other forces work together to shape diversity is largely a mystery. The challenge is daunting. Baseline data are poor, for example: We don't yet know how many plant and animal species there are on Earth, and researchers can't even begin to predict the numbers and kinds of organisms that make up the microbial world. Researchers probing the evolution of, and limits to, diversity also lack a standardized time scale because evolution takes place over periods lasting from days to millions of years. Moreover, there can be almost as much variation within a species as between two closely related ones. Nor is it clear what genetic changes will result in a new species and what their true influence on speciation is. Understanding what shapes diversity will require a major interdisciplinary effort, involving paleontological interpretation, field studies, laboratory experimentation, genomic comparisons, and effective statistical analyses. A few exhaustive inventories, such as the United Nations' Millennium Project and an around-the-world assessment of genes from marine microbes, should improve baseline data, but they will barely scratch the surface.
Models that predict when one species will split into two will help. And an emerging discipline called evo-devo is probing how genes involved in development contribute to evolution. Together, these efforts will go a long way toward clarifying the history of life. Paleontologists have already made headway in tracking the expansion and contraction of the ranges of various organisms over the millennia. They are finding that geographic distribution plays a key role in speciation. Future studies should continue to reveal large-scale patterns of distribution and perhaps shed more light on the origins of mass extinctions and the effects of these catastrophes on the evolution of new species. From field studies of plants and animals, researchers have learned that habitat can influence morphology and behavior--particularly sexual selection--in ways that hasten or slow down speciation. Evolutionary biologists have also discovered that speciation can stall out, for example, as separated populations become reconnected, homogenizing genomes that would otherwise diverge. Molecular forces, such as low mutation rates or meiotic drive--in which certain alleles have an increased likelihood of being passed from one generation to the next--influence the rate of speciation. And in some cases, differences in diversity can vary within an ecosystem: Edges of ecosystems sometimes support fewer species than the interior. Evolutionary biologists are just beginning to sort out how all these factors are intertwined in different ways for different groups of organisms. The task is urgent: Figuring out what shapes diversity could be important for understanding the nature of the wave of extinctions the world is experiencing and for determining strategies to mitigate it. _________________________________________________________________ What Genetic Changes Made Us Uniquely Human? Elizabeth Culotta Every generation of anthropologists sets out to explore what it is that makes us human. Famed paleoanthropologist Louis Leakey thought tools made the man, and so when he uncovered hominid bones near stone tools in Tanzania in the 1960s, he labeled the putative toolmaker Homo habilis, the earliest member of the human genus. But then primatologist Jane Goodall demonstrated that chimps also use tools of a sort, and today researchers debate whether H. habilis truly belongs in Homo. Later studies have honed in on traits such as bipedality, culture, language, humor, and, of course, a big brain as the unique birthright of our species. Yet many of these traits can also be found, at least to some degree, in other creatures: Chimps have rudimentary culture, parrots speak, and some rats seem to giggle when tickled. What is beyond doubt is that humans, like every other species, have a unique genome shaped by our evolutionary history. Now, for the first time, scientists can address anthropology's fundamental question at a new level: What are the genetic changes that make us human? With the human genome in hand and primate genome data beginning to pour in, we are entering an era in which it may become possible to pinpoint the genetic changes that help separate us from our closest relatives. A rough draft of the chimp sequence has already been released, and a more detailed version is expected soon. The genome of the macaque is nearly complete, the orangutan is under way, and the marmoset was recently approved. All these will help reveal the ancestral genotype at key places on the primate tree. 
The genetic differences revealed between humans and chimps are likely to be profound, despite the oft-repeated statistic that only about 1.2% of our DNA differs from that of chimps. A change in every 100th base could affect thousands of genes, and the percentage difference becomes much larger if you count insertions and deletions. Even if we document all of the perhaps 40 million sequence differences between humans and chimps, what do they mean? Many are probably simply the consequence of 6 million years of genetic drift, with little effect on body or behavior, whereas other small changes--perhaps in regulatory, noncoding sequences--may have dramatic consequences. Half of the differences might define a chimp rather than a human. How can we sort them all out? One way is to zero in on the genes that have been favored by natural selection in humans. Studies seeking subtle signs of selection in the DNA of humans and other primates have identified dozens of genes, in particular those involved in host-pathogen interactions, reproduction, sensory systems such as olfaction and taste, and more. But not all of these genes helped set us apart from our ape cousins originally. Our genomes reveal that we have evolved in response to malaria, but malaria defense didn't make us human. So some researchers have started with clinical mutations that impair key traits, then traced the genes' evolution, an approach that has identified a handful of tantalizing genes. For example, MCPH1 and ASPM cause microcephaly when mutated, FOXP2 causes speech defects, and all three show signs of selection pressure during human, but not chimp, evolution. Thus they may have played roles in the evolution of humans' large brains and speech. But even with genes like these, it is often difficult to be completely sure of what they do. Knockout experiments, the classic way to reveal function, can't be done in humans and apes for ethical reasons. Much of the work will therefore demand comparative analyses of the genomes and phenotypes of large numbers of humans and apes. Already, some researchers are pushing for a "great ape 'phenome' project" to match the incoming tide of genomic data with more phenotypic information on apes. Other researchers argue that clues to function can best be gleaned by mining natural human variability, matching mutations in living people to subtle differences in biology and behavior. Both strategies face logistical and ethical problems, but some progress seems likely. A complete understanding of uniquely human traits will, however, include more than DNA. Scientists may eventually circle back to those long-debated traits of sophisticated language, culture, and technology, in which nurture as well as nature plays a leading role. We're in the age of the genome, but we can still recognize that it takes much more than genes to make the human. _________________________________________________________________ How Are Memories Stored and Retrieved? Greg Miller Packed into the kilogram or so of neural wetware between the ears is everything we know: a compendium of useful and trivial facts about the world, the history of our lives, plus every skill we've ever learned, from riding a bike to persuading a loved one to take out the trash. Memories make each of us unique, and they give continuity to our lives. Understanding how memories are stored in the brain is an essential step toward understanding ourselves. 
Neuroscientists have already made great strides, identifying key brain regions and potential molecular mechanisms. Still, many important questions remain unanswered, and a chasm gapes between the molecular and whole-brain research. The birth of the modern era of memory research is often pegged to the publication, in 1957, of an account of the neurological patient H.M. At age 27, H.M. had large chunks of the temporal lobes of his brain surgically removed in a last-ditch effort to relieve chronic epilepsy. The surgery worked, but it left H.M. unable to remember anything that happened--or anyone he met--after his surgery. The case showed that the medial temporal lobes (MTL), which include the hippocampus, are crucial for making new memories. H.M.'s case also revealed, on closer examination, that memory is not a monolith: Given a tricky mirror drawing task, H.M.'s performance improved steadily over 3 days even though he had no memory of his previous practice. Remembering how is not the same as remembering what, as far as the brain is concerned. Thanks to experiments on animals and the advent of human brain imaging, scientists now have a working knowledge of the various kinds of memory as well as which parts of the brain are involved in each. But persistent gaps remain. Although the MTL has indeed proved critical for declarative memory--the recollection of facts and events--the region remains something of a black box. How its various components interact during memory encoding and retrieval is unresolved. Moreover, the MTL is not the final repository of declarative memories. Such memories are apparently filed to the cerebral cortex for long-term storage, but how this happens, and how memories are represented in the cortex, remains unclear. More than a century ago, the great Spanish neuroanatomist Santiago Ramón y Cajal proposed that making memories must require neurons to strengthen their connections with one another. Dogma at the time held that no new neurons are born in the adult brain, so Ramón y Cajal made the reasonable assumption that the key changes must occur between existing neurons. Until recently, scientists had few clues about how this might happen. [Figure: Memorable diagram. Santiago Ramón y Cajal's drawing of the hippocampus. He proposed that memories involve strengthened neural connections.] Since the 1970s, however, work on isolated chunks of nervous-system tissue has identified a host of molecular players in memory formation. Many of the same molecules have been implicated in both declarative and nondeclarative memory and in species as varied as sea slugs, fruit flies, and rodents, suggesting that the molecular machinery for memory has been widely conserved. A key insight from this work has been that short-term memory (lasting minutes) involves chemical modifications that strengthen existing connections, called synapses, between neurons, whereas long-term memory (lasting days or weeks) requires protein synthesis and probably the construction of new synapses. Tying this work to the whole-brain research is a major challenge. A potential bridge is a process called long-term potentiation (LTP), a type of synaptic strengthening that has been scrutinized in slices of rodent hippocampus and is widely considered a likely physiological basis for memory. A conclusive demonstration that LTP really does underlie memory formation in vivo would be a big breakthrough. Meanwhile, more questions keep popping up.
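The idea that memories live in strengthened synapses is often illustrated with a Hebbian learning rule, loosely "cells that fire together wire together." The toy Python model below is a deliberately oversimplified sketch of that idea; the update rule and the numbers are invented for illustration and are not a model of LTP's actual molecular machinery.

import random

def hebbian_update(weight, pre_active, post_active, rate=0.1, decay=0.01):
    # Strengthen the synapse when pre- and postsynaptic activity coincide;
    # otherwise let it decay slowly. All values are illustrative.
    if pre_active and post_active:
        return weight + rate * (1.0 - weight)   # grow toward a ceiling of 1.0
    return weight * (1.0 - decay)

random.seed(0)
w = 0.2
for _ in range(200):
    pre = random.random() < 0.5
    # The postsynaptic cell usually follows the presynaptic one (correlated input).
    post = pre if random.random() < 0.8 else (random.random() < 0.5)
    w = hebbian_update(w, pre, post)

print(f"synaptic weight after correlated activity: {w:.2f}")   # drifts well above 0.2

Correlated activity drives the weight up and uncorrelated activity lets it sag, which is the intuition behind both LTP and its counterpart, long-term depression.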
Recent studies have found that patterns of neural activity seen when an animal is learning a new task are replayed later during sleep. Could this play a role in solidifying memories? Other work shows that our memories are not as trustworthy as we generally assume. Why is memory so labile? A hint may come from recent studies that revive the controversial notion that memories are briefly vulnerable to manipulation each time they're recalled. Finally, the no-new-neurons dogma went down in flames in the 1990s, with the demonstration that the hippocampus, of all places, is a virtual neuron nursery throughout life. The extent to which these newborn cells support learning and memory remains to be seen. _________________________________________________________________ How Did Cooperative Behavior Evolve? Elizabeth Pennisi When Charles Darwin was working out his grand theory on the origin of species, he was perplexed by the fact that animals from ants to people form social groups in which most individuals work for the common good. This seemed to run counter to his proposal that individual fitness was key to surviving over the long term. By the time he wrote The Descent of Man, however, he had come up with a few explanations. He suggested that natural selection could encourage altruistic behavior among kin so as to improve the reproductive potential of the "family." He also introduced the idea of reciprocity: that unrelated but familiar individuals would help each other out if both were altruistic. A century of work with dozens of social species has borne out his ideas to some degree, but the details of how and why cooperation evolved remain to be worked out. The answers could help explain human behaviors that seem to make little sense from a strict evolutionary perspective, such as risking one's life to save a drowning stranger. Animals help each other out in many ways. In social species from honeybees to naked mole rats, kinship fosters cooperation: Females forgo reproduction and instead help the dominant female with her young. And common agendas help unrelated individuals work together. Male chimpanzees, for example, gang up against predators, protecting each other at a potential cost to themselves. Generosity is pervasive among humans. Indeed, some anthropologists argue that the evolution of the tendency to trust one's relatives and neighbors helped humans become Earth's dominant vertebrate: The ability to work together provided our early ancestors more food, better protection, and better childcare, which in turn improved reproductive success. However, the degree of cooperation varies. "Cheaters" can gain a leg up on the rest of humankind, at least in the short term. But cooperation prevails among many species, suggesting that this behavior is a better survival strategy, over the long run, despite all the strife among ethnic, political, religious, even family groups now rampant within our species. Evolutionary biologists and animal behavior researchers are searching out the genetic basis and molecular drivers of cooperative behaviors, as well as the physiological, environmental, and behavioral impetus for sociality. Neuroscientists studying mammals from voles to hyenas are discovering key correlations between brain chemicals and social strategies. Others with a more mathematical bent are applying evolutionary game theory, a modeling approach developed for economics, to quantify cooperation and predict behavioral outcomes under different circumstances. 
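To give a feel for what applying evolutionary game theory to cooperation actually looks like, here is a minimal Python sketch of the iterated prisoner's dilemma, the workhorse model in this literature. The payoff values and the two strategies shown (tit-for-tat and always-defect) are standard textbook choices, not figures drawn from any study mentioned here.

    # Payoffs from my point of view: (my move, opponent's move) -> my score.
    # "C" = cooperate, "D" = defect.
    PAYOFFS = {
        ("C", "C"): 3, ("C", "D"): 0,
        ("D", "C"): 5, ("D", "D"): 1,
    }

    def tit_for_tat(history):
        # Cooperate on the first round, then copy the opponent's previous move.
        return "C" if not history else history[-1][1]

    def always_defect(history):
        return "D"

    def play(strategy_a, strategy_b, rounds=50):
        history_a, history_b = [], []  # each entry: (own move, opponent's move)
        score_a = score_b = 0
        for _ in range(rounds):
            move_a, move_b = strategy_a(history_a), strategy_b(history_b)
            score_a += PAYOFFS[(move_a, move_b)]
            score_b += PAYOFFS[(move_b, move_a)]
            history_a.append((move_a, move_b))
            history_b.append((move_b, move_a))
        return score_a, score_b

    print(play(tit_for_tat, tit_for_tat))     # (150, 150): sustained mutual cooperation
    print(play(tit_for_tat, always_defect))   # (49, 54): defection pays once, then both stagnate

Run over many rounds, two reciprocators end up far better off than either party in the reciprocator-versus-defector pairing, which is the basic logic behind the claim that reciprocity can pay when individuals keep meeting again and can remember past encounters.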
Game theory has helped reveal a seemingly innate desire for fairness: Game players will spend time and energy to punish unfair actions, even though there's nothing to be gained by these actions for themselves. Similar studies have shown that even when two people meet just once, they tend to be fair to each other. Those actions are hard to explain, as they don't seem to follow the basic tenet that cooperation is really based on self-interest. The models developed through these games are still imperfect. They do not adequately consider, for example, the effect of emotions on cooperation. Nonetheless, with game theory's increasing sophistication, researchers hope to gain a clearer sense of the rules that govern complex societies. Together, these efforts are helping social scientists and others build on Darwin's observations about cooperation. As Darwin predicted, reciprocity is a powerful fitness tactic. But it is not a pervasive one. Modern researchers have discovered that a good memory is a prerequisite: It seems reciprocity is practiced only by organisms that can keep track of those who are helpful and those who are not. Humans have a great memory for faces and thus can maintain lifelong good--or hard--feelings toward people they don't see for years. Most other species exhibit reciprocity only over very short time scales, if at all. Limited to his personal observations, Darwin was able to come up with only general rationales for cooperative behavior. Now, with new insights from game theory and other promising experimental approaches, biologists are refining Darwin's ideas and, bit by bit, hope that one day they will understand just what it takes to bring out our cooperative spirit. _________________________________________________________________ How Will Big Pictures Emerge From a Sea of Biological Data? Elizabeth Pennisi Biology is rich in descriptive data--and getting richer all the time. Large-scale methods of probing samples, such as DNA sequencing, microarrays, and automated gene-function studies, are filling new databases to the brim. Many subfields from biomechanics to ecology have gone digital, and as a result, observations are more precise and more plentiful. A central question now confronting virtually all fields of biology is whether scientists can deduce from this torrent of molecular data how systems and whole organisms work. All this information needs to be sifted, organized, compiled, and--most importantly--connected in a way that enables researchers to make predictions based on general principles. Enter systems biology. Loosely defined and still struggling to find its way, this newly emerging approach aims to connect the dots that have emerged from decades of molecular, cellular, organismal, and even environmental observations. Its proponents seek to make biology more quantitative by relying on mathematics, engineering, and computer science to build a more rigid framework for linking disparate findings. They argue that it is the only way the field can move forward. And they suggest that biomedicine, particularly deciphering risk factors for disease, will benefit greatly. The field got a big boost from the completion of the human genome sequence. The product of a massive, trip-to-the-moon logistical effort, the sequence is now a hard and fast fact. The biochemistry of human inheritance has been defined and measured. And that has inspired researchers to try to make other aspects of life equally knowable. 
Molecular geneticists dream of having a similarly comprehensive view of networks that control genes: For example, they would like to identify rules explaining how a single DNA sequence can express different proteins, or varying amounts of protein, in different circumstances (see p. 80). Cell biologists would like to reduce the complex communication patterns traced by molecules that regulate the health of the cell to a set of signaling rules. Developmental biologists would like a comprehensive picture of how the embryo manages to direct a handful of cells into a myriad of specialized functions in bone, blood, and skin tissue. These hard puzzles can only be solved by systems biology, proponents say. The same can be said for neuroscientists trying to work out the emergent properties--higher thought, for example--hidden in complex brain circuits. To understand ecosystem changes, including global warming, ecologists need ways to incorporate physical as well as biological data into their thinking. Figure 1 Systems approach. Circuit diagrams help clarify nerve cell functions. Today, systems biologists have only begun to tackle relatively simple networks. They have worked out the metabolic pathway in yeast for breaking down galactose, a carbohydrate. Others have tracked the first few hours of the embryonic development of sea urchins and other organisms with the goal of seeing how various transcription factors alter gene expression over time. Researchers are also developing rudimentary models of signaling networks in cells and simple brain circuits. Progress is limited by the difficulty of translating biological patterns into computer models. Network computer programs themselves are relatively simple, and the methods of portraying the results in ways that researchers can understand and interpret need improving. New institutions around the world are gathering interdisciplinary teams of biologists, mathematicians, and computer specialists to help promote systems biology approaches. But it is still in its early days. No one yet knows whether intensive interdisciplinary work and improved computational power will enable researchers to create a comprehensive, highly structured picture of how life works. _________________________________________________________________ How Far Can We Push Chemical Self-Assembly? Robert F. Service Most physical scientists nowadays focus on uncovering nature's mysteries; chemists build things. There is no synthetic astronomy or synthetic physics, at least for now. But chemists thrive on finding creative new ways to assemble molecules. For the last 100 years, they have done that mostly by making and breaking the strong covalent bonds that form when atoms share electrons. Using that trick, they have learned to combine as many as 1000 atoms into essentially any molecular configuration they please. Impressive as it is, this level of complexity pales in comparison to what nature flaunts all around us. Everything from cells to cedar trees is knit together using a myriad of weaker links between small molecules. These weak interactions, such as hydrogen bonds, van der Waals forces, and pi-pi interactions, govern the assembly of everything from DNA in its famous double helix to the bonding of H2O molecules in liquid water. More than just riding herd on molecules, such subtle forces make it possible for structures to assemble themselves into an ever more complex hierarchy. Lipids coalesce to form cell membranes. Cells organize to form tissues.
Tissues combine to create organisms. Today, chemists can't approach the complexity of what nature makes look routine. Will they ever learn to make complex structures that self-assemble? Well, they've made a start. Over the past 3 decades, chemists have made key strides in learning the fundamental rules of noncovalent bonding. Among these rules: Like prefers like. We see this in hydrophobic and hydrophilic interactions that propel lipid molecules in water to corral together to form the two-layer membranes that serve as the coatings surrounding cells. They bunch their oily tails together to avoid any interaction with water and leave their more polar head groups facing out into the liquid. Another rule: Self-assembly is governed by energetically favorable reactions. Leave the right component molecules alone, and they will assemble themselves into complex ordered structures. Chemists have learned to take advantage of these and other rules to design self-assembling systems with a modest degree of complexity. Drug-carrying liposomes, made with lipid bilayers resembling those in cells, are used commercially to ferry drugs to cancerous tissues in patients. And self-assembled molecules called rotaxanes, which can act as molecular switches that oscillate back and forth between two stable states, hold promise as switches in future molecular-based computers. But the need for increased complexity is growing, driven by the miniaturization of computer circuitry and the rise of nanotechnology. As features on computer chips continue to shrink, the cost of manufacturing these ever-smaller components is skyrocketing. Right now, companies make them by whittling materials down to the desired size. At some point, however, it will become cheaper to design and build them chemically from the bottom up. Self-assembly is also the only practical approach for building a wide variety of nanostructures. Making sure the components assemble themselves correctly, however, is not an easy task. Because the forces at work are so small, self-assembling molecules can get trapped in undesirable conformations, making defects all but impossible to avoid. Any new system that relies on self-assembly must be able either to tolerate those defects or repair them. Again, biology offers an example in DNA. When enzymes copy DNA strands during cell division, they invariably make mistakes--occasionally inserting an A when they should have inserted a T, for example. Some of those mistakes get by, but most are caught by DNA-repair enzymes that scan the newly synthesized strands and correct copying errors. Strategies like that won't be easy for chemists to emulate. But if they want to make complex, ordered structures from the ground up, they'll have to get used to thinking a bit more like nature. _________________________________________________________________ What Are the Limits of Conventional Computing? Charles Seife At first glance, the ultimate limit of computation seems to be an engineering issue. How much energy can you put in a chip without melting it? How fast can you flip a bit in your silicon memory? How big can you make your computer and still fit it in a room? These questions don't seem terribly profound. In fact, computation is more abstract and fundamental than figuring out the best way to build a computer.
This realization came in the mid-1930s, when Princeton mathematicians Alonzo Church and Alan Turing showed--roughly speaking--that any calculation involving bits and bytes can be done on an idealized computer known as a Turing machine. By showing that all classical computers are essentially alike, this discovery enabled scientists and mathematicians to ask fundamental questions about computation without getting bogged down in the minutiae of computer architecture. For example, theorists can now classify computational problems into broad categories. P problems are those, broadly speaking, that can be solved quickly, such as alphabetizing a list of names. NP problems are much tougher to solve but relatively easy to check once you've reached an answer. An example is the traveling salesman problem, finding the shortest possible route through a series of locations. All known algorithms for getting an answer take lots of computing power, and even relatively small versions might be out of reach of any classical computer. Mathematicians have shown that if you could come up with a quick and easy shortcut to solving any one of the hardest type of NP problems, you'd be able to crack them all. In effect, the NP problems would turn into P problems. But it's uncertain whether such a shortcut exists--whether P = NP. Scientists think not, but proving this is one of the great unanswered questions in mathematics. In the 1940s, Bell Labs scientist Claude Shannon showed that bits are not just for computers; they are the fundamental units of describing the information that flows from one object to another. There are physical laws that govern how fast a bit can move from place to place, how much information can be transferred back and forth over a given communications channel, and how much energy it takes to erase a bit from memory. All classical information-processing machines are subject to these laws--and because information seems to be rattling back and forth in our brains, do the laws of information mean that our thoughts are reducible to bits and bytes? Are we merely computers? It's an unsettling thought. But there is a realm beyond the classical computer: the quantum. The probabilistic nature of quantum theory allows atoms and other quantum objects to store information that's not restricted to only the binary 0 or 1 of information theory, but can also be 0 and 1 at the same time. Physicists around the world are building rudimentary quantum computers that exploit this and other quantum effects to do things that are provably impossible for ordinary computers, such as finding a target record in a database with too few queries. But scientists are still trying to figure out what quantum-mechanical properties make quantum computers so powerful and to engineer quantum computers big enough to do something useful. By learning the strange logic of the quantum world and using it to do computing, scientists are delving deep into the laws of the subatomic world. Perhaps something as seemingly mundane as the quest for computing power might lead to a newfound understanding of the quantum realm. _________________________________________________________________ Can We Selectively Shut Off Immune Responses? Jon Cohen In the past few decades, organ transplantation has gone from experimental to routine. In the United States alone, more than 20,000 heart, liver, and kidney transplants are performed every year. 
But for transplant recipients, one prospect has remained unchanged: a lifetime of taking powerful drugs to suppress the immune system, a treatment that can have serious side effects. Researchers have long sought ways to induce the immune system to tolerate a transplant without blunting the body's entire defenses, but so far, they have had limited success. They face formidable challenges. Although immune tolerance can occur--in rare cases, transplant recipients who stop taking immunosuppressants have not rejected their foreign organs--researchers don't have a clear picture of what is happening at the molecular and cellular levels to allow this to happen. Tinkering with the immune system is also a bit like tinkering with a mechanical watch: Fiddle with one part, and you may disrupt the whole mechanism. And there is a big roadblock to testing drugs designed to induce tolerance: It is hard to know if they work unless immunosuppressant drugs are withdrawn, and that would risk rejection of the transplant. But if researchers can figure out how to train the immune system to tolerate transplants, the knowledge could have implications for the treatment of autoimmune diseases, which also result from unwanted immune attack--in these cases on some of the body's own tissues. A report in Science 60 years ago fired the starting gun in the race to induce transplant tolerance--a race that has turned into a marathon. Ray Owen of the University of Wisconsin, Madison, reported that fraternal twin cattle sometimes share a placenta and are born with each other's red blood cells, a state referred to as mixed chimerism. The cattle tolerated the foreign cells with no apparent problems. A few years later, Peter Medawar and his team at the University of Birmingham, U.K., showed that fraternal twin cattle with mixed chimerism readily accept skin grafts from each other. Medawar did not immediately appreciate the link to Owen's work, but when he saw the connection, he decided to inject fetal mice in utero with tissue from mice of a different strain. In a publication in Nature in 1953, the researchers showed that, after birth, some of these mice tolerated skin grafts from different strains. This influential experiment led many to devote their careers to transplantation and also raised hopes that the work would lead to cures for autoimmune diseases. Immunologists, many of them working with mice, have since spelled out several detailed mechanisms behind tolerance. The immune system can, for example, dispatch "regulatory" cells that suppress immune attacks against self. Or the system can force harmful immune cells to commit suicide or to go into a dysfunctional stupor called anergy. Researchers indeed now know fine details about the genes, receptors, and cell-to-cell communications that drive these processes. Yet it's one matter to unravel how the immune system works and another to figure out safe ways to manipulate it. Transplant researchers are pursuing three main strategies to induce tolerance. One builds on Medawar's experiments by trying to exploit chimerism. Researchers infuse the patient with the organ donor's bone marrow in hopes that the donor's immune cells will teach the host to tolerate the transplant; donor immune cells that come along with the transplanted organ also, some contend, can teach tolerance. A second strategy uses drugs to train T cells to become anergic or commit suicide when they see the foreign antigens on the transplanted tissue. 
The third approach turns up production of T regulatory cells, which prevent specific immune cells from copying themselves and can also suppress rejection by secreting biochemicals called cytokines that direct the immune orchestra to change its tune. All these strategies face a common problem: It is maddeningly difficult to judge whether the approach has failed or succeeded because there are no reliable "biomarkers" that indicate whether a person has become tolerant to a transplant. So the only way to assess tolerance is to stop drug treatment, which puts the patient at risk of rejecting the organ. Similarly, ethical concerns often require researchers to test drugs aimed at inducing tolerance in concert with immunosuppressive therapy. This, in turn, can undermine the drugs' effectiveness because they need a fully functioning immune system to do their job. If researchers can complete their 50-year quest to induce immune tolerance safely and selectively, the prospects for hundreds of thousands of transplant recipients would be greatly improved, and so, too, might the prospects for controlling autoimmune diseases. _________________________________________________________________ Do Deeper Principles Underlie Quantum Uncertainty and Nonlocality? Charles Seife "Quantum mechanics is very impressive," Albert Einstein wrote in 1926. "But an inner voice tells me that it is not yet the real thing." As quantum theory matured over the years, that voice has gotten quieter--but it has not been silenced. There is a relentless murmur of confusion underneath the chorus of praise for quantum theory. Quantum theory was born at the very end of the 19th century and soon became one of the pillars of modern physics. It describes, with incredible precision, the bizarre and counterintuitive behavior of the very small: atoms and electrons and other wee beasties of the submicroscopic world. But that success came with the price of discomfort. The equations of quantum mechanics work very well; they just don't seem to make sense. No matter how you look at the equations of quantum theory, they allow a tiny object to behave in ways that defy intuition. For example, such an object can be in "superposition": It can have two mutually exclusive properties at the same time. The mathematics of quantum theory says that an atom, for example, can be on the left side of a box and the right side of the box at the very same instant, as long as the atom is undisturbed and unobserved. But as soon as an observer opens the box and tries to spot where the atom is, the superposition collapses and the atom instantly "chooses" whether to be on the right or the left. This idea is almost as unsettling today as it was 80 years ago, when Erwin Schrödinger ridiculed superposition by describing a half-living, half-dead cat. That is because quantum theory changes what the meaning of "is" is. In the classical world, an object has a solid reality: Even a cloud of gas is well described by hard little billiard ball-like pieces, each of which has a well-defined position and velocity. Quantum theory seems to undermine that solid reality. Indeed, the famous Uncertainty Principle, which arises directly from the mathematics of quantum theory, says that objects' positions and momenta are smeary and ill defined, and gaining knowledge about one implies losing knowledge about the other.
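For readers who want the standard notation behind those two statements, the box superposition and the position-momentum uncertainty relation are usually written as follows; these are textbook formulas, not equations taken from the article itself.

    % An atom in an equal superposition of the "left" and "right" sides of the box:
    |\psi\rangle = \frac{1}{\sqrt{2}} \left( |\mathrm{left}\rangle + |\mathrm{right}\rangle \right)

    % Heisenberg's uncertainty relation: sharpening position (small \Delta x)
    % necessarily blurs momentum (large \Delta p), and vice versa:
    \Delta x \, \Delta p \geq \frac{\hbar}{2}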
The early quantum physicists dealt with this unreality by saying that the "is"--the fundamental objects handled by the equations of quantum theory--were not actually particles that had an extrinsic reality but "probability waves" that merely had the capability of becoming "real" when an observer makes a measurement. This so-called Copenhagen Interpretation makes sense, if you're willing to accept that reality is probability waves and not solid objects. Even so, it still doesn't sufficiently explain another weirdness of quantum theory: nonlocality. In 1935, Einstein came up with a scenario that still defies common sense. In his thought experiment, two particles fly away from each other and wind up at opposite ends of the galaxy. But the two particles happen to be "entangled"--linked in a quantum-mechanical sense--so that one particle instantly "feels" what happens to its twin. Measure one, and the other is instantly transformed by that measurement as well; it's as if the twins mystically communicate, instantly, over vast regions of space. This "nonlocality" is a mathematical consequence of quantum theory and has been measured in the lab. The spooky action apparently ignores distance and the flow of time; in theory, particles can be entangled after their entanglement has already been measured. On one level, the weirdness of quantum theory isn't a problem at all. The mathematical framework is sound and describes all these bizarre phenomena well. If we humans can't imagine a physical reality that corresponds to our equations, so what? That attitude has been called the "shut up and calculate" interpretation of quantum mechanics. But to others, our difficulties in wrapping our heads around quantum theory hint at greater truths yet to be understood. Some physicists in the second group are busy trying to design experiments that can get to the heart of the weirdness of quantum theory. They are slowly testing what causes quantum superpositions to "collapse"--research that may gain insight into the role of measurement in quantum theory as well as into why big objects behave so differently from small ones. Others are looking for ways to test various explanations for the weirdnesses of quantum theory, such as the "many worlds" interpretation, which explains superposition, entanglement, and other quantum phenomena by positing the existence of parallel universes. Through such efforts, scientists might hope to get beyond the discomfort that led Einstein to declare that "[God] does not play dice." _________________________________________________________________ Is an Effective HIV Vaccine Feasible? Jon Cohen In the 2 decades since researchers identified HIV as the cause of AIDS, more money has been spent on the search for a vaccine against the virus than on any vaccine effort in history. The U.S. National Institutes of Health alone invests nearly $500 million each year, and more than 50 different preparations have entered clinical trials. Yet an effective AIDS vaccine, which potentially could thwart millions of new HIV infections each year, remains a distant dream. Although AIDS researchers have turned the virus inside-out and carefully detailed how it destroys the immune system, they have yet to unravel which immune responses can fend off an infection. That means, as one AIDS vaccine researcher famously put it more than a decade ago, the field is "flying without a compass." Some skeptics contend that no vaccine will ever stop HIV. 
They argue that the virus replicates so quickly and makes so many mistakes during the process that vaccines can't possibly fend off all the types of HIV that exist. HIV also has developed sophisticated mechanisms to dodge immune attack, shrouding its surface protein in sugars to hide vulnerable sites from antibodies and producing proteins that thwart production of other immune warriors. And the skeptics point out that vaccine developers have had little success against pathogens like HIV that routinely outwit the immune system--the malaria parasite, hepatitis C virus, and the tuberculosis bacillus are prime examples. Yet AIDS vaccine researchers have solid reasons to believe they can succeed. Monkey experiments have shown that vaccines can protect animals from SIV, a simian relative of HIV. Several studies have identified people who repeatedly expose themselves to HIV but remain uninfected, suggesting that something is stopping the virus. A small percentage of people who do become infected never seem to suffer any harm, and others hold the virus at bay for a decade or more before showing damage to their immune systems. Scientists also have found that some rare antibodies do work powerfully against the virus in test tube experiments. At the start, researchers pinned their hopes on vaccines designed to trigger production of antibodies against HIV's surface protein. The approach seemed promising because HIV uses the surface protein to latch onto white blood cells and establish an infection. But vaccines that only contained HIV's surface protein looked lackluster in animal and test tube studies, and then proved worthless in large-scale clinical trials. Now, researchers are intensely investigating other approaches. When HIV manages to thwart antibodies and establish an infection, a second line of defense, cellular immunity, specifically targets and eliminates HIV-infected cells. Several vaccines which are now being tested aim to stimulate production of killer cells, the storm troopers of the cellular immune system. But cellular immunity involves other players--such as macrophages, the network of chemical messengers called cytokines, and so-called natural killer cells--that have received scant attention. The hunt for an antibody-based vaccine also is going through something of a renaissance, although it's requiring researchers to think backward. Vaccine researchers typically start with antigens--in this case, pieces of HIV--and then evaluate the antibodies they elicit. But now researchers have isolated more than a dozen antibodies from infected people that have blocked HIV infection in test tube experiments. The trick will be to figure out which specific antigens triggered their production. It could well be that a successful AIDS vaccine will need to stimulate both the production of antibodies and cellular immunity, a strategy many are attempting to exploit. Perhaps the key will be stimulating immunity at mucosal surfaces, where HIV typically enters. It's even possible that researchers will discover an immune response that no one knows about today. Or perhaps the answer lies in the interplay between the immune system and human genetic variability: Studies have highlighted genes that strongly influence who is most susceptible--and who is most resistant--to HIV infection and disease. Wherever the answer lies, the insights could help in the development of vaccines against other diseases that, like HIV, don't easily succumb to immune attack and that kill millions of people. 
Vaccine developers for these diseases will probably also have to look in unusual places for answers. The maps created by AIDS vaccine researchers currently exploring uncharted immunologic terrain could prove invaluable. From shovland at mindspring.com Sun Jul 3 15:36:59 2005 From: shovland at mindspring.com (Steve Hovland) Date: Sun, 3 Jul 2005 08:36:59 -0700 Subject: [Paleopsych] when the body runs riot Message-ID: <01C57FAA.624341F0.shovland@mindspring.com> The body has mechanisms for maintaining homeostasis, but those can be overwhelmed, resulting in disease. The genes are blueprints for making healthy cells, but they don't work unless they get the correct inputs. Steve Hovland www.stevehovland.net -----Original Message----- From: Christian Rauh [SMTP:christian.rauh at uconn.edu] Sent: Sunday, July 03, 2005 7:40 AM To: The new improved paleopsych list Subject: Re: [Paleopsych] when the body runs riot << File: ATT00000.txt; charset = UTF-8 >> From shovland at mindspring.com Sun Jul 3 17:54:31 2005 From: shovland at mindspring.com (Steve Hovland) Date: Sun, 3 Jul 2005 10:54:31 -0700 Subject: [Paleopsych] Bio-Entanglement Message-ID: <01C57FBD.98559820.shovland@mindspring.com> From Entangled Minds, by Dean Radin, PDF available online at the url below. "In physics, the idea of entanglement -- the quantum theory prediction that under certain circumstances particles that appear to be isolated are actually instantaneously connected through space and time -- is not only known to be demonstrably real, but is far more pervasive and robust than anyone had imagined even a few years ago. Devising new forms of entanglement has become a central focus in the accelerating race towards developing practical quantum computers. The growing pressure to develop workable quantum computers is rapidly expanding our ability to create ever more robust forms of entanglement in increasingly complex systems, for longer lifetimes, and at room temperature. Researchers will discover that under certain conditions, living cells also exhibit properties associated with quantum entanglement. Then the idea of bioentanglement will emerge, a concept that is more general than today's special cases of entanglement involving inanimate particles and photons. Someone will get a bright idea and ask, "I wonder what would happen if two brains were entangled? Would they show correlated behavior at a distance, just like other forms of bioentanglement?" Then someone will ask, "I wonder what it would feel like when my brain is entangled with the outside world? Are mind fields bioentangled with the rest of the universe?" http://www.noetic.org/publications/shift/issue5/s5_radin.pdf From checker at panix.com Mon Jul 4 01:27:11 2005 From: checker at panix.com (Premise Checker) Date: Sun, 3 Jul 2005 21:27:11 -0400 (EDT) Subject: [Paleopsych] NYTBR: 'Radical Evolution' and 'More Than Human': The Incredibles Message-ID: 'Radical Evolution' and 'More Than Human': The Incredibles New York Times Book Review, 5.7.3 http://www.nytimes.com/2005/07/03/books/review/03PAULL.html [First chapter of More than Human appended.] RADICAL EVOLUTION The Promise and Peril of Enhancing Our Minds, Our Bodies -- and What It Means to Be Human. By Joel Garreau. 384 pp. Doubleday. $26. MORE THAN HUMAN Embracing the Promise of Biological Enhancement. By Ramez Naam. 276 pp. Broadway Books. $24.95.
By ANNIE MURPHY PAUL ''This book can't begin with the tale of the telekinetic monkey.'' So opens Joel Garreau's captivating, occasionally brilliant and often exasperating ''Radical Evolution.'' Garreau, a reporter and editor at The Washington Post and the author of the influential work of social demography ''Edge City,'' acknowledges his authorial choice is a sacrifice. After all, ''how often does someone writing nonfiction get to lead with a monkey who can move objects with her thoughts?'' But to begin his book about the technological enhancement of the human mind and body with this kind of gee-whiz gimmick would send a misleading signal. Garreau makes it clear he's more interested in people than in machines. Readers will be grateful, since an airless sterility often creeps into books like ''Radical Evolution,'' which is focused on the near future. In the next generation or two, Garreau writes, advances in genetics, robotics, information technology and nanotechnology (the science that permits the construction of infinitesimally tiny devices) may allow us to raise our intelligence, refine our bodies and even become immortal -- or they could lead to a ruinous disruption of our individual identities and shared institutions, and if things go really wrong, to the total destruction of humanity. Unless you've cultivated a taste for the hypothetical, the situations mapped out here, in which computers take over, can become so much numbing science fiction. Wisely, Garreau devotes himself to embedding these unfamiliar technologies in a human context. We meet researchers from the federal government's mysterious Defense Advanced Research Projects Agency, now engineering soldiers who don't need sleep and who can stop a wound from bleeding just by thinking about it. We spend time with scientists at a biotechnology firm called Functional Genetics, engaged in research on ''anti-infectives'' that could one day make humans invulnerable to AIDS, Alzheimer's and cancer. Garreau focuses on three camps of thinkers who have paused to contemplate the future. The first espouse what Garreau terms the ''Heaven Scenario.'' They believe enhancement technology will allow us to live forever in perfect happiness without pain, more or less. The most vigorous advocate of what one skeptic calls ''techno-exuberance'' is Ray Kurzweil, an inventor and futurist. ''I'm not planning to die,'' Kurzweil announces. Instead, he speculates that humans will one day upload the contents of their brains to a computer and shed their physical bodies altogether. Set opposite Kurzweil and his buoyancy is Bill Joy, a founder of Sun Microsystems, whose musings tend toward the apocalyptic. Well known for his dire warnings about the growing power of technology, the misnamed Joy represents what Garreau calls the ''Hell Scenario.'' Joy speculates that we may meet an undignified end in ''gray goo,'' a scenario in which self-replicating devices designed to improve our bodies and minds instead take on a life of their own, becoming ''too tough, too small and too rapidly spreading to stop.'' They may, Joy continues, eventually ''suck everything vital out of all living things, reducing their husks to ashy mud in a matter of days.'' Things really get interesting when Garreau meets up with Jaron Lanier, a computer scientist and originator of the concept of virtual reality. Lanier foresees neither nirvana nor apocalypse, but a future in which every technological crisis is met and matched by our own ingenuity and resilience. 
Garreau christens this the ''Prevail Scenario,'' and confesses his personal preference for this vision animated by what he calls his ''faith in human cussedness.'' Heaven and hell share the same story line, he writes: ''We are in for revolutionary change; there's not much we can do about it; hang on tight; the end. The Prevail Scenario, if nothing else, has better literary qualities.'' Garreau's style often takes the form of a notebook dump, in which he deposits his assorted jottings directly onto the page. Sometimes the results are stultifying, but when the subject has a mind as original as Lanier's, they're enthralling. Lanier's reflections are at once whimsical and serious: What if we could project our thoughts and feelings so that they were instantly visible to others? What if our superintelligent machines are felled by a Windows crash, just as they're about to take over? To read Garreau's dazzling, disorderly book is to be thrust into a bewildering new world, where ambiguity rules and familiar signposts are few. As he observes, ''by the time the future has all its wires carefully tucked away in a nice metal box where you can no longer see the gaffer tape, it is no longer the future.'' Whereas Garreau's portraits make it clear that ideas about the future are always idiosyncratic and subjective, rooted as much in emotional need as in rational analysis, there's no such nuance in Ramez Naam's ''More Than Human.'' Naam, a professional technologist who helped develop Microsoft Outlook and Microsoft Internet Explorer, is a relentlessly positive pitchman, unburdened by doubt or complexity. But his conception of our enhanced future looks less like Kurzweil's sunny utopia and more like a fluorescent-lighted superstore, in which we roam the aisles selecting from displays of brain implants and anti-aging pills. To Naam, the technological augmentation of our minds and bodies is not an ethical or philosophical question but just one more consumer choice. Accordingly, his main concern is with governmental interference in the free market for such devices. People should be allowed to make up their own minds about enhancements, Naam argues, since ''millions of individuals weighing costs and benefits have a greater collective intelligence, better collective judgment, than a small number of centralized regulators and controllers.'' Never mind that we don't allow citizens' ''collective judgment'' to decide which drugs are safe; that's why we have the F.D.A. Expert guidance, based on long-term, large-scale research, would seem even more essential in the case of activities like germline genetic engineering, which permanently changes the genetic code of an individual and all his or her descendants. Naam's other targets are those who seek to slow or even arrest research on biotechnology. Though these objectors span the ideological spectrum -- from Bill McKibben, the liberal author of ''Enough,'' to Leon R. Kass, the conservative chairman of the President's Council on Bioethics -- Naam lumps them all together as curmudgeonly sticks in the mud, ''advocates of the biological status quo.'' Yet just one page earlier Naam talks up the wonders of ''keeping people young longer'' through science. He seems not to notice that eternal youth -- along with faultless functioning, perpetual fertility and unfailingly pleasant mood -- is its own kind of frozen status quo. 
In fact, there's something peculiarly adolescent about the blend of narcissism, self-indulgence and lust for control that appears to motivate this quest to become ''more than human.'' Naam's book fails to grapple adequately with the consequences that may follow if, through technology, some of these limits are lifted. In hailing a drug that makes long-married couples feel like newlyweds again, or a neural prosthesis that allows you to ''turn down the volume'' on your brain's ''empathy centers,'' or gene therapy that bulks up your muscles ''while you're watching television,'' Naam and his fellow enhancement boosters seem unwilling to reckon with the fact that the same limits that make life difficult also give it meaning. Annie Murphy Paul is the author of ''The Cult of Personality: How Personality Tests Are Leading Us to Miseducate Our Children, Mismanage Our Companies, and Misunderstand Ourselves.'' --------------- First chapter of 'More Than Human' http://www.nytimes.com/2005/07/03/books/chapters/0703-1st-naam.html By RAMEZ NAAM In 1989, Raj and Van DeSilva were desperate. Their daughter Ashanti, just four, was dying. She was born with a crippled immune system, a consequence of a problem in her genes. Every human being has around thirty thousand genes. In fact, we have two copies of each of those genes--one inherited from our mother, the other from our father. Our genes tell our cells what proteins to make, and when. Each protein is a tiny molecular machine. Every cell in your body is built out of millions of these little machines, working together in precise ways. Proteins break down food, ferry energy to the right places, and form scaffoldings that maintain cell health and structure. Some proteins synthesize messenger molecules to pass signals in the brain, and other proteins form receptors to receive those signals. Even the machines inside each of your cells that build new proteins--called ribosomes--are themselves made up of other proteins. Ashanti DeSilva inherited two broken copies of the gene that contains the instructions for manufacturing a protein called adenosine deaminase (ADA). If she had had just one broken copy, she would have been fine. The other copy of the gene would have made up the difference. With two broken copies, her body didn't have the right instructions to manufacture ADA at all. ADA plays a crucial role in our resistance to disease. Without it, special white blood cells called T cells die off. Without T cells, ADA-deficient children are wide open to the attacks of viruses and bacteria. These children have what's called severe combined immune deficiency (SCID) disorder, more commonly known as bubble boy disease. To a person with a weak immune system, the outside world is threatening. Everyone you touch, share a glass with, or share the same air with is a potential source of dangerous pathogens. Lacking the ability to defend herself, Ashanti was largely confined to her home. The standard treatment for ADA deficiency is frequent injections of PEG-ADA, a synthetic form of the ADA enzyme. PEG-ADA can mean the difference between life and death for an ADA-deficient child. Unfortunately, although it usually produces a rapid improvement when first used, children tend to respond less and less to the drug each time they receive a dose. Ashanti DeSilva started receiving PEG-ADA injections at the age of two, and initially she responded well. Her T-cell count rose sharply and she developed some resistance to disease.
But by the age of four, she was slipping away, no longer responding strongly to her injections. If she was to live, she'd need something more than PEG-ADA. The only other option at the time, a bone-marrow transplant, was ruled out by the lack of matching donors. In early 1990, while Ashanti's parents were searching frantically for help, French Anderson, a geneticist at the National Institutes of Health, was seeking permission to perform the first gene-therapy trials on humans. Anderson, an intense fifth-degree blackbelt in tae kwon do and respected researcher in the field of genetics, wanted to show that he could treat genetic diseases caused by faulty copies of genes by inserting new, working copies of the same gene. Scientists had already shown that it was possible to insert new genes into plants and animals. Genetic engineering got its start in 1972, when geneticists Stanley Cohen and Herbert Boyer first met at a scientific conference in Hawaii on plasmids, small circular loops of extra chromosomal DNA in which bacteria carry their genes. Cohen, then a professor at Stanford, had been working on ways to insert new plasmids into bacteria. Researchers in Boyer's lab at the University of California in San Francisco had recently discovered restriction enzymes, molecular tools that could be used to slice and dice DNA at specific points. Over hot pastrami and corned-beef sandwiches, the two Californian researchers concluded that their technologies complemented one another. Boyer's restriction enzymes could isolate specific genes, and Cohen's techniques could then deliver them to bacteria. Using both techniques researchers could alter the genes of bacteria. In 1973, just four months after meeting each other, Cohen and Boyer inserted a new gene into the Escherichia coli bacterium (a regular resident of the human intestine). For the first time, humans were tinkering directly with the genes of another species. The field of genetic engineering was born. Boyer would go on to found Genentech, the world's first biotechnology company. Cohen would go on to win the Nobel Prize in 1986 for his work on cell growth factors. Building on Cohen and Boyer's work with bacteria, hundreds of scientists went on to find ways to insert new genes into plants and animals. The hard work of genetically engineering these higher organisms lies in getting the new gene into the cells. To do this, one needs a gene vector-a way to get the gene to the right place. Most researchers use gene vectors provided by nature: viruses. In some ways, viruses are an ideal tool for ferrying genes into a cell, because penetrating cell walls is already one of their main abilities. Viruses are cellular parasites. Unlike plant or animal cells, or even bacteria, viruses can't reproduce themselves. Instead, they penetrate cells and implant their viral genes; these genes then instruct the cell to make more of the virus, one protein at a time. Early genetic engineers realized that they could use viruses to deliver whatever genes they wanted. Instead of delivering the genes to create more virus, a virus could be modified to deliver a different gene chosen by a scientist. Modified viruses were pressed into service as genetic "trucks," carrying a payload of genes loaded onto them by researchers; these viruses don't spread from cell to cell, because they don't carry the genes necessary for the cell to make new copies of the virus. 
By the late 1980s, researchers had used this technique to alter the genes of dozens of species of plants and animals-tobacco plants that glow, tomatoes that could survive freezing, corn resistant to pesticides. French Anderson and his colleagues reasoned that one could do the same in a human being. Given a patient who lacked a gene crucial to health, one ought to be able to give that person copies of the missing gene. This is what Anderson proposed to do for Ashanti. Starting in June of 1988, Anderson's proposed clinical protocols, or treatment plans, went through intense scrutiny and generated more than a little hostility. His first protocol was reviewed by both the National Institutes of Health (NIH) and the Food and Drug Administration (FDA). Over a period of seven months, seven regulatory committees conducted fifteen meetings and twenty hours of public hearings to assess the proposal. In early 1990, Anderson and his collaborators received the final approval from the NIH's Recombinant DNA Advisory Committee and had cleared all legal hurdles. By spring, they had identified Ashanti as a potential patient. Would her parents consent to an experimental treatment? Of course there were risks to the therapy, yet without it Ashanti would face a life of seclusion and probably death in the next few years. Given these odds, her parents opted to try the therapy. As Raj DeSilva told the Houston Chronicle, "What choice did we have?" Ashanti and her parents flew to the NIH Clinical Center at Bethesda, Maryland. There, over the course of twelve days, Anderson and his colleagues Michael Blaese and Kenneth Culver slowly extracted some of Ashanti's blood cells. Safely outside the body, the cells had new, working copies of the ADA gene inserted into them by a hollowed-out virus. Finally, starting on the afternoon of September 14, Culver injected the cells back into Ashanti's body. The gene therapy had roughly the same goal as a bone-marrow transplant-to give Ashanti a supply of her own cells that could produce ADA. Unlike a bone-marrow transplant, gene therapy carries no risk of rejection. The cells Culver injected back into Ashanti's bloodstream were her own, so her body recognized them as such. The impact of the gene therapy on Ashanti was striking. Within six months, her T-cell count rose to normal levels. Over the next two years, her health continued to improve, allowing her to enroll in school, venture out of the house, and lead a fairly normal childhood. Ashanti is not completely cured-she still takes a low dose of PEG-ADA. Normally the dose size would increase with the patient's age, but her doses have remained fixed at her four-year-old level. It's possible that she could be taken off the PEG-ADA therapy entirely, but her doctors don't think it's yet worth the risk. The fact that she's alive today-let alone healthy and active-is due to her gene therapy, and also helps prove a crucial point: genes can be inserted into humans to cure genetic diseases. From Healing to Enhancing After Ashanti's treatment, the field of gene therapy blossomed. Since 1990, hundreds of labs have begun experimenting with gene therapy as a technique to cure disease, and more than five hundred human trials involving over four thousand patients have been launched. Researchers have shown that it may be possible to use gene therapy to cure diabetes, sickle-cell anemia, several kinds of cancer, Huntington's disease and even to open blocked arteries. 
While the goal of gene therapy researchers is to cure disease, gene therapy could also be used to boost human athletic performance. In many cases, the same research that is focused on saving lives has also shown that it can enhance the abilities of animals, with the suggestion that it could enhance men and women as well. Consider the use of gene therapy to combat anemia. Circulating through your veins are trillions of red blood cells. Pumped by your heart, they serve to deliver oxygen from the lungs to the rest of your tissues, and carry carbon dioxide from the tissues back out to the lungs and out of the body. Without enough red blood cells, you can't function. Your muscles can't get enough oxygen to produce force, and your brain can't get enough oxygen to think clearly. Anemia is the name of the condition of insufficient red blood cells. Hundreds of thousands of people worldwide live with anemia, and with the lethargy and weakness that are its symptoms. In the United States, at least eighty-five thousand patients are severely anemic as a result of kidney failure. Another fifty thousand AIDS patients are anemic due to side effects of the HIV drug AZT. In 1985, researchers at Amgen, a biotech company based in Thousand Oaks, California, looking for a way to treat anemia isolated the gene responsible for producing the growth hormone erythropoietin (EPO). Your kidneys produce EPO in response to low levels of oxygen in the blood. EPO in turn causes your body to produce more red blood cells. For a patient whose kidneys have failed, injections of Amgen's synthetic EPO can take up some of the slack. The drug is a lifesaver, so popular that the worldwide market for it is as high as $5 billion per year, and therein lies the problem: the cost of therapy is prohibitive. Three injections of EPO a week is a standard treatment, and patients who need this kind of therapy end up paying $7,000 to $9,000 a year. In poor countries struggling even to pay for HIV drugs like AZT, the added burden of paying for EPO to offset the side effects just isn't feasible. What if there was another way? What if the body could be instructed to produce more EPO on its own, to make up for that lost to kidney failure or AZT? That's the question University of Chicago professor Jeffrey Leiden asked himself in the mid-1990s. In 1997, Leiden and his colleagues performed the first animal study of EPO gene therapy, injecting lab monkeys and mice with a virus carrying an extra copy of the EPO gene. The virus penetrated a tiny proportion of the cells in the mice and monkeys and unloaded the gene copies in them. The cells began to produce extra EPO, causing the animals' bodies to create more red blood cells. In principle, this was no different from injecting extra copies of the ADA gene into Ashanti, except in this case the animals already had two working copies of the EPO gene. The one being inserted into some of their cells was a third copy; if the experiment worked, the animals' levels of EPO production would be boosted beyond the norm for their species. That's just what happened. After just a single injection, the animals began producing more EPO, and their red-blood-cell counts soared. The mice went from a hematocrit of 49 percent (meaning that 49 percent of their blood volume was red blood cells) to 81 percent. The monkeys went from 40 percent to 70 percent. At least two other biotech companies, Chiron and Ariad Gene Therapies, have produced similar results in baboons and monkeys, respectively. 
The increase in red-blood-cell count is impressive, but the real advantage of gene therapy is in the long-lasting effects. Doctors can produce an increase in red-blood-cell production in patients with injections of EPO itself--but the EPO injections have to be repeated three times a week. EPO gene therapy, on the other hand, could be administered just every few months, or even just once for the patient's entire lifetime. The research bears this out. In Leiden's original experiment, the mice each received just one shot, but showed higher red-blood-cell counts for a year. In the monkeys, the effects lasted for twelve weeks. The monkeys in the Ariad trial, which went through gene therapy more than four years ago, still show higher red-blood-cell counts today. This is a key difference between drug therapy and gene therapy. Drugs sent into the body have an effect for a while, but eventually are broken up or passed out. Gene therapy, on the other hand, gives the body the ability to manufacture the needed protein or enzyme or other chemical itself. The new genes can last for a few weeks or can become a permanent part of the patient's genome. The duration of the effect depends on the kind of gene vector used and where it delivers its payload of DNA. Almost all of the DNA you carry is located on twenty-three pairs of chromosomes that are inside the nuclei of your cells. The nucleus forms a protective barrier that shields your chromosomes from damage. It also contains sophisticated DNA repair mechanisms that patch up most of the damage that does occur. Insertional gene vectors penetrate all the way into the nucleus of the cell and splice the genes they carry into the chromosomes. From that point on, the new genes get all the benefits your other genes enjoy. The new genes are shielded from most of the damage that can happen inside your cells. If the cell divides, the new genes get copied to the daughter cells, just like the rest of your DNA. Insertional vectors make more or less permanent changes to your genome. . . . From checker at panix.com Mon Jul 4 01:27:17 2005 From: checker at panix.com (Premise Checker) Date: Sun, 3 Jul 2005 21:27:17 -0400 (EDT) Subject: [Paleopsych] Inner Worlds: Brain science and romantic love Message-ID: Brain science and romantic love [This is a dubious site. Links to other articles there below.] THE SENSED PRESENCE AND ROMANTIC LOVE Love seems to be an experience of the 'other.' Even though it's really about ourselves, we experience it as being to do with another person. To see more deeply into it, we need to look at the experience of 'the other' more deeply. There hasn't been a lot of research on the subject. There are many studies that have yielded interesting statistics about how being in love affects academic performance, how it affects the immune system, how it influences the perceived quality of life, and a range of other findings. But the experience itself remains elusive, especially in terms of neurology. There is one line of research that suggests something about the nature of love, and it seems that love is only one instance of a larger group of experiences: relating to the 'other'. After looking at the evidence, it seems to me that the 'other' is one's self. I'm thinking of some research into an experience called "The Sensed Presence." It's that feeling people get, usually at night, where they feel that there is someone or something in the room with them, an 'energy' or a 'presence' perhaps.
They might feel simply that they are 'not alone' or that they're 'being watched.' There is an indefinable feeling that there is an 'other' of some kind in the room with them. To understand this, we need to look at the self, and not the other. To start, we need to see that the self is more than our ordinary experience shows us. Even in our most quiet moments, when we are still, it can be very hard to see our 'self'. Buddhism teaches that there is no such thing. If they're right, then we're on a wild goose chase in looking for it. Other religions say that the self is God. If these teachings are right, then our self is so elevated that we may have no hope of ever understanding it. Fortunately, brain science is a bit more down-to-earth than religion. There, we have a chance of understanding what the self is, even though the information won't tell us the whole story. The latest 'teachings' from neuroscience tell us that we actually have two selves, one on each side of the brain. And they're specialized. Like anything else in the brain, they each have specific jobs to do. One of them, on the left, is the one that experiences things through language. It's very socialized. Language is mostly a tool for relating to other people, so the 'linguistic' self is very conscious of where it stands with others. Its very sensitive to its social rank, as its reflected in the words of others. A simple string of words from another, like: 'you're fired' or 'I love you', can have an amazing impact on the person hearing them. They'll feel that 'they' are affected by these words. And they, as social beings, have been. When we lose a job, our social rank is reduced. When we start a new relationship, or we can feel that we are secure in a present one, our social rank is raised a bit. The other side of the brain, on the right, also has a self. It experiences the world in non-verbal ways. Its more introspective. Its silent. Its affected by music, art, pictures, and our perceptions of how others feel, rather than what others say. Its more likely to manifest in situations where we aren't able to take the need of others into account. Its usually subordinate; operating underneath the left hemispheric one. It takes this role because the linguistic self is actively interpreting the world and our experiences with words all the time. For most people, this keeps the silent self hidden, so that it operates without out knowing about it consciously. In many ways, the 'conscious' self is the one on the left, with only intermittent input from the one on the right. The Sensed Presence experience occurs when our two senses of self fall out of phase with one another. The subordinate sense of self is experienced directly by the dominant, linguistic one. Because we can't have two senses of self, the intruding silent sense of self is experienced as an external presence, and 'felt' to be happening outside one's self. I believe that the sensed presence also happens when we relate to other people. Some presences mean threats, while others can mean support, comfort or safety. We use our experiences of past states as a repertoire from which we select the state best suited to arising situations, and presences known from the past are projected onto presences encountered in the present. I want to suggest that we are projecting a part of ourselves onto real people whenever we're relating to them. The presences we experience in other people are the creations of our own minds, externalized and projected onto others. 
Each separate presence will call up a separate state of consciousness, although the differences between many of these states might be slight. From infancy onwards, we have acquaintances; people who are too socially distant to be called friends, but close enough that we must pay some attention to them. The default settings for relating to acquaintances are derived from our own sense of self while we are with such people in the past. Other people are absolutely unique. Some people catch our attention very sharply. We fall in love with them, or we come to hate their guts. These people are not mere aquaintances. Their presence cuts closer to home. Their words, for whatever reason, affects our self-esteem. Because we are such an intensely social species, our self esteem is largely a function of what we think our value is in the eyes of others. Most of the time, people speak to each other in ways that reflect their respect, or lack of it. Respect has a lot to do with social standing and rank. For the most part, we respect ourselves when we feel respected by those around us, even though it doesn't necessarily have to be that way. Our self-esteem changes almost constantly. Most of these changes are experienced through our emotions, although it also has a serious impact on the way a person thinks. Our moods can be elevated and depressed through the words of others. Words like you're hired' and you're fired.' Or, I love you' and leave me alone.' Our moods are directly connected to our level of self-esteem in each moment. In normal conditions, our experience of our selves is sensitive to how we are treated and spoken to by others. Each and every state of consciousness carries its own level of self-esteem. Whether or not one is in a subordinate position in any given situation initiates an appropriate state. The state enables a set of responses that minimize the situation's stress level by fulfilling the expectations of the dominant person in the situation. Each state has its own ways of thinking, feeling, speaking, and acting. Even for the most aware people, its hard to see all these thing happening at once. We live on autopilot, so to speak. If we were to try to make a conscious decision about each way we 'act out' our state of consciousness, we'd crash the system. We have to be on automatic, for the most part, because there are so many controls to adjust for each state. We want positive states to repeat, and to avoid the negative, unpleasant ones. This creates a tendency to bond with people that feel good to be around. Simple, eh? Not really. We 'decide' who feels good according to what we choose to project. And we make these choices largely out of habit. It begins in infancy, when we first begin to experience ourselves as individuals. There has been some research in pre natal psychology that suggests that the fetus experiences its mother's states of consciousness as though they are it's own. Sometime around birth, the newborn begins to experience its own states for the first time. Before birth, the mother gets angry, the fetus experiences the same state, although certainly it will now have very different phenomenological correlates. After birth, when the mother gets angry, the infant no longer experiences it with the same intimacy. The boundaries of the infants new self must be found. In the womb, the fetus probably didn't distinguish between itself and its mother. She must now be experienced as an external presence. 
For the first time, the ambient chemical environment in the womb is experienced as its mother's smell. Its mother, now experienced as separate from itself, becomes the source from which all its physical and emotional needs are met, almost without exception. Many commentators on the experience of romantic love have argued that the experience of early childhood comfort and nurturing provides a template from which later expectations about relationships are drawn. We begin to feel that our lover ought to treat us much as our mothers did. Women, of course, have the additional process of mapping their senses of comforting, loving presences onto men. In looking for romantic fulfillment, we are looking to find an experience that will change our experience of ourselves. Not by looking for love within ourselves, as so many spiritual teachers suggest, but by allowing a part of our 'self' to manifest through another. When I stop and remember that we're a social species, I cannot help but see it differently. For some people, or at some times in a person's life, 'true happiness' might be found only outside one's self. Our brains and minds are configured for relating to others in so many ways. Humans have a long childhood compared to other primate species, and most of it is spent relying on others to meet their most basic needs. Children are so engaged with the presence of others that they can usually play with anything and imagine it's alive. Children imagine their toys have a presence to them, so that a crayon becomes Mr. Crayon'. The Buddhist faithful imagine that a Buddha statue has the presence of the Enlightened One. The disciple sees God in his Guru. And these are projections. In the same way, lovers project their own loving presence onto their romantic partners. I want to suggest that falling in love is the process of projecting one's right-sided sense of self onto one's beloved. Because the same pathways that are involved in the maintenance of the right-sided, silent sense of self are also specialized for negative feelings, the maintenance of the romantic illusion is delicate at best. It's easily broken, and rarely lasts for more than a few weeks in most cases and a few years in cases where people feel strongly enough to marry. People often want to feel really passionate love before starting a relationship. But that kind usually doesn't last. When it fades, very few people escape disappointment of one kind or another. People are angered when their lover turns out to be who they are instead of who they were supposed to be. Sustaining relationships past this point calls for either denial or relationship and communication skills. I can use a fancy neuro scientific phrase to describe the nature of love (a sustained interhemispheric intrusion), but even I don't enjoy seeing my romantic side reduced to so sterile a set of words. Like the sensed presence experience, being in love happens when the silent sense of self comes out where linguistic sense of self can see it, except that instead of being sensed as a feeling that one is being watched, its projected directly onto the beloved. In the process, the normal division of other and self is blurred. Lovers speak of losing themselves in the other, or that they can't tell where they end and their lover begins. So long as one is able to sustain the illusion that one's partner will be the source of fulfillment, the projection continues undisturbed. 
It's been said that, when it comes to relationships, everybody is looking for a tailor-made fit, even though its an off-the-rack world. Inevitably, something happens to disturb the illusion. The 'interhemispheric intrusion' ends. The honeymoon is over. 'Hemispheric intrusions' are often very brief events. A vision of an angel might last just a few seconds. The first flush of true love' might continue for only weeks or months. There was a study of the relationship between hemisphericity and self-esteem, and it found that the higher a person's level of right hemispheric 'dominance', the lower their self-esteem. Right hemisphericity means that a person's experience of their self is dominated by their right side. This is the side of the brain that is specialized for both negative feelings and non-verbal ways of processing our experiences. All other conditions being equal, the more intuitive and spontaneous a person is, the lower their self esteem will be. Of course, people compensate in various ways, so that 'all other conditions' usually aren't equal. When a person is in love, their right hemispheric self' has access to the positive emotions on the left. Love feels good. However, after the experience is over, the person finds themselves more vulnerable to fear and sadness in response to things that threaten their sense of self. Such threats occur almost every moment in our lives. Those whose sense of self is mostly derived from the left side are much less vulnerable. They are better able to feel good about themselves even in the face of verbal assaults, but they are also less likely to fall so deeply in love in the first place. The typical aftermath of a mystical experience finds the person feeling somewhat shaky. They will avoid those whose energy' tends to bring them down.' In other words, they won't be able to cope well in many social interactions. They may even retreat into solitude, and avoid relating to others as much as they can. They tend to reject the mind-set that supports the opinions of those whose company they don't enjoy. At the same time, there can be an almost obsessive desire to share' their experience with anyone willing to listen. They seek out validation in the eyes of those around them; 'shouting it from the rooftops', making up for the fragmentation their sense of self sustained in their epiphany. They may cling to those whose company they find supportive. Left hemispheric personalities are judged and labeled using such phrases as there are none so blind as those who will not see.' Ideas about karma are invoked to explain how some just aren't ready to hear the truth.' Someone who is in unrequited love, or is losing a lover they still want to be with, finds themselves in much the same position. They, too, are vulnerable. They also feel that others just don't understand.' Their self esteem falls. They may cling to those who are willing to support them, just like those processing' in the wake of spiritual experiences and awakenings. They may also feel that they are not the same person they were before they experienced their romantic disappointment, just as the religious experient is also a changed' man or woman. In the Sufi tradition, God is referred to as the beloved, and it preserves many metaphors that convey the idea that separation from God is as painful as separation from whoever one is in love with. Union with God is seen as similar to romantic fulfillment. 
I suggest that romantic love is underpinned by the same brain mechanisms that are involved in the experience of God. While a mystic experience is often short and intense, romantic episodes may last a long time. Both of them involve the silent, right-sided self coming out where the left-sided self can see it, along with intense positive feelings. The after-effects are based on similar neural and psychological mechanisms. The dark night of the soul and the despair of unrequited love are made of the same 'stuff'. There is some truth in the sayings that the beloved is God, and that when we love God we are loving ourselves. I and thou are one. The other is the self.
From checker at panix.com Mon Jul 4 01:27:25 2005
From: checker at panix.com (Premise Checker)
Date: Sun, 3 Jul 2005 21:27:25 -0400 (EDT)
Subject: [Paleopsych] Inner Worlds: Neurobiology of Religious Terrorism
Message-ID:

Neurobiology of Religious Terrorism
http://www.innerworlds.50megs.com/terrorism.htm

[This is a dubious site.]

Todd Murphy, Researching Behavioral Neuroscientist

Understanding the mind of a suicidal terrorist is a special challenge in psychology. Not only do their actions show a highly aggressive personality, but their motivations seem to outweigh even the imperative for self-preservation. The profile of the suicide bomber is not at all simple. We think of them as maniacs, madmen driven by national and religious hatred, or a simplistic 'will to power'. Yet, should soldiers do the same thing fighting in a cause we support, we're quick to quote our own culture's holy books: "There can be no greater love than to lay down one's life..."

The suicide himself is usually not forced into his actions. He is not held prisoner, nor forced at gunpoint to complete his mission. The kamikaze pilots of the Second World War were motivated differently. They were told that they were defending their homes and families from imminent destruction by US forces, and that they had a chance to stop the American fleet from arriving in Japan. If they succeeded, their families would escape danger. For the contemporary Islamic terrorist, no such threat exists. Their failure will not mean the end of their way of life, nor the deaths of their families. So, how to account for their unprecedented dedication? Or their 'greatest love'? The answer lies in the unique psychology of the religious killer. Fortunately, a recent study addresses the issue:

"I Would Kill in God's Name": Role of Sex, Weekly Church Attendance, Report of Religious Experience, and Limbic Lability. M.A. Persinger, Perceptual and Motor Skills, 1997, 85, 128-130.

The study is part of a larger research effort into the neurological bases of religious experience, including religious personalities, religious conversions, and now, extreme religious views. It was done by administering a set of questionnaires to 1480 university students that asked about a wide range of religious beliefs, habits and behaviors. It also asked about how often the subjects had more common, 'altered state' experiences, like deja vu, the sense of a presence, electric-like sensations, and many others. Taken together, these latter experiences (complex partial epileptic signs) give a measure of a person's "limbic lability".

The statistical analysis involved taking each questionnaire that included a 'yes' response to an item that asked if the subject would be willing to kill for God. All the questionnaires that included a 'yes' to this were examined to see what other items emerged in association with a willingness to kill in 'His' name. Four factors emerged:

1) Having had a religious experience.
2) Weekly church attendance (religious orthodoxy).
3) Being male.
4) Limbic lability (which will be explained below).

The next step was to look at all the questionnaires that showed all four traits, creating a second group. 44% of this second group stated that they would kill another person if God told them to. The study was based on university students; if its results are generalizable, then one out of 20 Canadian university students would be willing to kill another person if they were to attribute the instruction to God.
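For readers who want to see the shape of that analysis, here is a minimal sketch in Python of the kind of subgroup tabulation described above. It is illustrative only: the column names, the file "survey.csv", and the 0/1 coding are assumptions, since the article does not give the actual questionnaire items or data.

    import pandas as pd

    # Hypothetical data: one row per respondent, 0/1 columns for each item.
    df = pd.read_csv("survey.csv")

    # The four factors reported to co-occur with willingness to kill for God.
    in_subgroup = (
        (df["religious_experience"] == 1)
        & (df["weekly_attendance"] == 1)
        & (df["sex"] == "male")
        & (df["limbic_lability"] == 1)
    )

    # Proportion answering 'yes' to the kill-for-God item, in the subgroup
    # and in the whole sample (the study reports roughly 44% and 1 in 20).
    subgroup_rate = df.loc[in_subgroup, "would_kill_for_god"].mean()
    overall_rate = df["would_kill_for_god"].mean()
    print(f"four-factor subgroup: {subgroup_rate:.0%}; whole sample: {overall_rate:.0%}")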
Let's examine these factors, one by one.

RELIGIOUS EXPERIENCE

In dismissing religious experience, both modern scientists and politicians miss a factor that offers a more powerful motivation than patriotism, national defense, greed, or military pride: the absolute conviction that firsthand religious experience creates (1). Religious experience can take many forms. In its most intense manifestations, it can involve seeing God or hearing his voice. It can also take the form of out-of-body experiences, a lucid or exceptionally powerful dream, sensing the presence of an angel, or even moments of creative inspiration. It can also appear as an emotional peak that might happen during a religious meeting, or a political meeting with religious overtones.

In the aftermath of a religious experience, the individual will almost inevitably interpret it in terms of their religious or spiritual views (2). Whatever idea the person reaches for most readily will lend a context to their epiphany. From then on, it acquires an ethos of absolute truth. It 'feels' like the truth.

At present, many Islamic religious leaders are being quite vocal that violence and terrorism have nothing to do with Islam, and wonder openly at how terrorists reconcile their actions with the teachings of Islam, with its constant injunctions to compassion and mercy. The answer probably lies in the sense of certainty that a religious experience can create. They do not need to consider the teachings of the Koran. Instead, they 'feel' the world in terms of their religious 'awakening', a direct message from God, one that supersedes scripture. All Islamic teachings are seen through its lens. When the idea that dominates their world is that of Jihad, then war and destruction are seen as a 'higher' form of compassion.

Before we jump to the conclusion that we could never see any wisdom or compassion in being aggressive, we might recall how we support, in our minds, Jesus as he drove the money-changers from the temple of Jerusalem. That was an act of violence, guided by a higher wisdom. So, in Truman's mind, was the bombing of Hiroshima. The step from higher wisdom to violence is all too common. Anyone who has ever gone too far in punishing a child has made it.

For the Jihadi, like anyone else, the full range of religious experiences is quite large. A person might hear a religious statement, and feel a paresthesia, a tingling or electric-like feeling, or more commonly, an intense burst of emotion. The statement, whatever it is, suddenly acquires tremendous force. The person having the experience is easily convinced that the statement is true. The powerful emotions experienced during a political rally, given a religious interpretation, make its goal or theme into a religious truth, at least insofar as the person will then accept the message as part of their own faith. If it glorifies the individual, if it 'saves' them or makes them feel that they are one of the "chosen few" or the "anointed" or that their service to God is special, the person can begin to identify themselves more as the one 'touched by God' than simply as a person. Once a person identifies with God, their own life can actually be seen as being in the way, a hindrance to unity with "Him". Suicide is sanctioned when it is believed to be a part of God's plan, and the person who commits it in his name may seem to be spiritually elevated: a high and holy being.
"A leading Islamist authority, Sheikh Yusuf al-Qaradawi, recently explained the distinction this way: attacks on enemies are not suicide operations but "heroic martyrdom operations". (3) In all probability, Jihad suicides are not motivated by a desire to elicit fear. Their military superiors certainly are, but the individual is much more likely to be motivated by the desire for an intangible, personal, reward. The desire to experience, once again, the peak moments of their lives. People who have had religious experiences will often do whatever it takes to re-capture those moments. Indeed, suicide in this context is not without precedent. One type of religious experience (far more intense than the type that can occur in moments of extreme religious fervor) is the now-well-known Near-Death Experience (NDE). There have been reports of suicides by people who have had deeply spiritual experiences while clinically dead, but were later revived. While the overwhelming majority of NDE experiences say that they would never consider suicide, they also often have a deep longing to return to the blissful state they had while they were having their NDEs. History records many examples of suicide for religious reasons. The Jim Jones cult, the mass-suicide of Jewish Zealots under siege by the Roman General Vespasian at Masada, the Vietnamese Patriarch who immolated himself in protest during the Vietnam War, Joan of Arc's refusal to recant her preaching, based on her visions, even though it meant being burned at the stake. The point is that religious experience can introduce priorities for a person that are quite 'larger than life.' So much so that death can lose it's repellent quality. It can even become quite exalted. Both the killer and the killed can seem trivial compared to God's larger plan. Revulsion at suicide in "His" service can seem almost self-indulgent. Islamic tradition records that having visions can help a Jihadi to fulfill his mission. The very word assassin derives from the Arabic word Hashishim, a medieval word that referred to a group of assassins who were offered hashish-induced visions as a preview of the heavenly rewards that awaited them after giving their lives in Jihad. In fact, the relationship between the leaders and the individual terrorists attackers is probably better compared to a Guru and his disciples than a general and their troops. The FBI has considerable valid material on the psychology of the 'surrendered disciple', gathered during their investigation of Bhagwan Shree Rajneesh prior to his departure from the US in 1985. Many trained psychotherapists were involved in his organization, and later repudiated him as their Guru. Important former disciples from this group may have useful insights to offer. Today's terrorists have probably replaced hashish with emotionally intense moments during rallies and prayer meetings. Both can have profound effects on the limbic system, and both can cement beliefs solidly into place. The suicidal Jihadi perceives themselves as making the supreme sacrifice for God. They believe they will enjoy special rewards in the hereafter, and, because of the architecture all human brains share, they have a sense that the fulfillment for the longing their peak experience created will happen at death. Having the sense of being on a mission from God in dying, they simply cannot imagine that they will not attain a "peace which passeth understanding". 
Further, terrorist orthodoxy would interpret the suicidal Jihadi as a traitor against God should he change his mind once he had volunteered. Social pressures to follow through are enormous. The Jihadi who, in a moment of fervor or deep reflection, decides to give his life for the holy war may have little choice but to follow through. Like the Japanese kamikaze who chose to pilot his plane to its destination, the Jihadi remains on the course he is given. Such a Jihadi will either not have been exposed to alternative interpretations for their behavior, or will reject them out of hand due to their unorthodox origin. When confronted with vital decisions, the orthodox ally with orthodoxy.

Because people who have had religious experiences use them as the benchmark for their spirituality, rather than the scriptures or teachings, they will often be very reticent to share their experiences and missions with others. Most people who have had deep religious experiences feel that others just 'do not understand'. With phrases like 'there are none so deaf as those who will not hear' and "you shall not cast your pearls before swine", people tend to keep very quiet about the sense of destiny that their religious experience gives them. In more practical terms, they very likely prefer to avoid the challenges to their self-identity that challenges to their ideology would create, a bit like a child who avoids showing off their writing in class for fear of ridicule. The need for security among religious terrorists can acquire a sacred connotation in this way. In relating to their fellow Moslems, they may see themselves as consecrated holy men, an 'inner circle', moving among a spiritless herd of sheep, if they relate to them at all.

WEEKLY CHURCH ATTENDANCE

The item in the study that asked about weekly church attendance is worth looking at. The study was done in Canada, in a predominantly Christian community (Sudbury, Ontario). Weekly attendance would seem to evidence traditional, orthodox, or institutionalized beliefs, and going to church every week is, in this paper, interpreted as evidence of religious orthodoxy. Like the churchgoer, the Jihadi embraces a set of beliefs they share in common with a community, and they participate in the life of that community regularly. This now refers to their own 'inner' circle, and not that of other Moslems. Orthodoxy in belief is reinforced by subtle social rewards from the community almost constantly. Leaders within the community are also the ones who demonstrate the deepest understanding of its beliefs. Its heroes are the ones who do the most for the cause. Missionary work, charity, and religious practice are good sacrifices, but nothing, absolutely nothing, compares with martyrdom.

This is where some beliefs peculiar to Islam come into play. Not only does it sanction war in the name of Allah, but it promises acceptance into heaven to those who die in these wars, called Jihads, and considers them to be martyrs. This allows suicides to see themselves as martyrs as well. In fact, they probably consider themselves the highest form of martyr. Although the term "Mujahideen" is usually used for fighters in Jihads, we will use the term "Jihadi" here, so as to avoid accidental pejorative reference to legitimate Islamic freedom fighters, who appear regularly in recent history.

GENDER

One of the features of the religious killer is that they are usually, but not always, male.
One exception to this has been the female suicide bomber in Sri Lanka who attempted to assassinate its president, Chandrika Kumaratunga, in 1999. The male brain differs from the female brain in that it seems to be less "multitasking" (4). When a male is engaged in a task, fewer brain structures are activated, in most cases, than in a female brain engaged in the same task. Males, therefore, are better able to maintain the orthodoxy required, and to exclude items from their thoughts (denial), like, for example, the long-term consequences of their actions. The dominant point of reference will always be the religious experience, and the framework of beliefs used to interpret it. The male brain seems better adapted to handle the 'single-pointedness' religious mania requires.

TEMPORAL LOBE LABILITY

Temporal lobe lability refers to a person's sensitivity to altered states of consciousness. I don't mean the dramatic ones, such as religious visions. I mean the more subtle ones, with phenomena like deja vu, 'sensing a presence', pins-and-needles sensations, fleeting visions during twilight sleep, and other common episodes (5). These occur in a continuum across the human population (5, 27). Some people are very sensitive, having these experiences very often, and others never have them at all. Not all people who have these experiences also have religious experiences, but almost everyone who has had a religious experience has (6, 26).

In the body of research to date (7, 28), these experiences appear when two structures within the temporal lobes have their normal communications with one another disturbed. This can happen between the two hemispheres, between two structures within the limbic system (deep in the temporal lobes), or between a deep structure and the surface of the temporal lobes. In this model (vectorial hemisphericity and interhemispheric intrusions) (8, 9), the event of the religious experience is likened to an extremely small epileptic event that stays in the temporal lobes of the brain. These are called "microseizures" (10). Like larger epileptic seizures, these experiences make lasting changes in the brain (11). The personality of the person, their 'sense of self', is forever changed as their limbic system now has a few pathways (matrices of neurons) 'burned in' in a process known as 'kindling': pathways that relate to the human sense of self.

The limbic system plays a crucial role in the production of thought and emotion. It also deals in two rather subtle phenomena: meaningfulness and contextualization (12). These direct our thoughts and feelings into recurring patterns unique to each individual, and with them, unique behavior patterns. The connotations of words are processed through their meanings and in the context in which we hear them. The sense that an experience "means something", and the sense that words 'mean something more' than they say, are two examples of somewhat raw experiences of meaningfulness. If a thought feels meaningful, then it will try to find a context for itself. The more meaningful, the larger the context must be. God's will becomes larger, or more important, than life itself. A more salient point at this juncture is that the religious experience begs the largest possible context, even if only in one's thoughts. The largest context, for Peoples of the Book (to coin a phrase), has always been God, and "His" whole world. In order to encompass their experience, the person must make use of some rather exotic ideas.
Ideas which are as far from their ordinary experience as their peak moments. For the Islamic terrorist, these are well known. Jihad, doing God's will, martyrdom, etc. Death. In 'His' name. GEOLOGY In recent years, it has emerged that the human psyche can be affected by seismic activity, or rather, seismic (or tectonic) strain (13, 14). The more earthquakes and tremors in a given area, the more often the earth's magnetic field changes, and that can have an impact on how our brains function. Our brains are sensitive to changes in the earth's magnetic field because they contain large numbers of organically-grown magnetite crystals. By coincidence or design, patterns seem to appear in the geomagnetic field, and these seem to have some overlap with the magnetic signals that are created when our brains are engaged in normal electrical firing. Specifically, in the limbic system. Exposure to the earth's magnetic field is 'chronic', meaning constant and long-term. Chronic exposure to changing magnetic fields could make the populations of seismically-active areas demonstrate a higher-than-average limbic lability. In other words, areas with a lot of earthquakes and tremors are more likely to produce a population willing to kill in God's name, all other conditions being equal. According to the Israeli Seismic Network's "Galilee" data set, Israel had 28 earthquakes in one three-year period. (1987 to 1991). That's about one every five weeks. (15) One possibility that cannot be discounted presently is that the Israeli / Palestinian territories might produce populations with either higher than normal limbic lability, or that their limbic phenomena might show specific patterns whose behavioral correlates with aggressive thoughts (ideation) and behavior. Although so far, there have been no statistical studies of temporal-lobe-based behaviors in seismically active areas, the notion has been considered with California's southern lake county designated as a good area to carry out such a study. Eventual studies in this field may allow a meaningful measure of the aggressiveness of in seismically-active areas, and with that, an estimate of the size of the population within the Palestinian minority willing to go to the furthest extremes, "in God's name". There are three cultures best known for the glorification of suicide, the Arabic-speaking terrorist sub-culture, the Aztecs, who exalted the act of volunteering as a human sacrifice, and medieval Japan, where suicide reached the status of a cult behavior within the Shinto religion, complete with it's own ceremony. All of these areas are subject to frequent seismic activity. FURTHER CONSIDERATIONS. 1) Given a sufficiently large population, large numbers of individuals who fit the criteria (about 1 in 20, based on the Canadian data) should be readily available. When the data is corrected for local seismically, we should find that the number increases noticeably. Any estimates based on this data should be reduced substantially in recognition that suicide is a less-probable behavior, all other conditions being equal, than homicide. However, the relative incidence of each behavior in normal populations may not necessarily provide meaningful estimates. Suicide is prohibited in normal religious belief, but sanctioned, even suggested, for the Jihadi. 2) Jihadi who have had a religious experience and fit the other criteria should be much more willing to volunteer than others, due to their personal sense of destiny, which martyrdom will appear to fulfill. 
3) The more successful attacks this group performs, the more willing their volunteers will be, as they see their predecessors attaining the highest spiritual promotion, while they themselves only stand and wait.

In short, the territorial and ideological conditions in the Middle East may favor the production of populations willing to kill for God. These considerations may help us to see that terrorists are not entirely the products of hate-mongers, and that they may not be beyond help and reform. End

_________________________________________________________________

REFERENCES

(1) Persinger, Michael A., "Neuropsychological Bases of God Beliefs," Praeger, 1987.
(2) Persinger, M.A., "Vectorial Cerebral Hemisphericity as Differential Sources for the Sensed Presence, Mystical Experiences and Religious Conversions," Perceptual and Motor Skills, 1993, 76, 915-930.
(3) The Jerusalem Post, July 27, 2001. http://www.danielpipes.org/articles/20010727.shtml
(4) Moir, Anne, & Jessel, David, "Brain Sex: The Real Difference Between Men and Women," Laurel Publications, 1991.
(5) Persinger, Michael A., & Makarec, Katherine, "Complex Partial Signs as a Continuum from Normals to Epileptics: Normative Data and Clinical Populations," Journal of Clinical Psychology, 1993, 49(1), 33-37.
(6) Persinger, M.A., "People Who Report Religious Experiences May Also Display Enhanced Temporal-Lobe Signs," Perceptual and Motor Skills, 1984, 58, 963-975.
(7) Persinger, M.A., "Religious and Mystical Experiences as Artifacts of Temporal Lobe Function: A General Hypothesis," Perceptual and Motor Skills, 1983, 57, 1255-1262.
(8) Persinger, M.A., "Enhanced Incidence of the 'Sensed Presence' in People Who Have Learned to Meditate: Support for the Right Hemispheric Intrusion Hypothesis," Perceptual and Motor Skills, 1992, 75, 1308-1310.
(9) Persinger, Michael A., Bureau, Yves R.J., Peredery, Oksana P., & Richards, Pauline M., "The Sensed Presence as Right Hemispheric Intrusions into the Left Hemispheric Awareness of Self: An Illustrative Case Study," Perceptual and Motor Skills, 1994, 78, 999-1009.
(10) Persinger, Michael A., "Striking EEG Profiles from Single Episodes of Glossolalia and Transcendental Meditation," Perceptual and Motor Skills, 1984, 58, 127-133.
(11) Persinger, Michael A., "Near-Death Experiences: Determining the Neuroanatomical Pathways by Experiential Patterns, and Simulation in Experimental Settings," in "Healing: Beyond Suffering or Death," Ministry of Mental Health Publications, Quebec, Canada, 1994.
(12) Miller, Robert, "Cortico-Hippocampal Interplay and the Representation of Contexts in the Human Brain," Springer-Verlag, 1991.
(13) Persinger, M.A., "Out-of-Body-Like Experiences Are More Probable in People with Elevated Complex Partial Epileptic-Like Signs During Periods of Enhanced Geomagnetic Activity: A Nonlinear Effect," Perceptual and Motor Skills, 1995, 80, 563-569.
(14) Conesa, Jorge, "Isolated Sleep Paralysis, Vivid Dreams, and Geomagnetic Influence: II," Perceptual and Motor Skills, 1997, 85, 579-584.
(15) http://es1.multimax.com/~gtdb/galilee/eq.html

From checker at panix.com Mon Jul 4 01:27:33 2005
From: checker at panix.com (Premise Checker)
Date: Sun, 3 Jul 2005 21:27:33 -0400 (EDT)
Subject: [Paleopsych] Inner Worlds: Origins of spirituality in human evolution: what happened when our species learned of our own mortality.
Message-ID:

Origins of spirituality in human evolution: what happened when our species learned of our own mortality.
http://www.innerworlds.50megs.com/deathanxiety.htm [Again, this is a dubious site.] The Beginnings of Spirituality and Death Anxiety in Human Evolution. [2]brainsci at jps.net Mommy, mommy, I feel sick. Run for the doctor, quick, quick, quick. Doctor, doctor, will I die? Yes my dear, and so shall I. (Whitley Strieber) Human spirituality had an origin in our history. It began soon after we acquired our language skills, and is related to the linguistic aspects of our sense of self. If we didn't have language, it would have been very easy to go into a total denial of the fact of personal death(1). Nobody has ever experienced their own death. You have to figure it out while you're still alive. How do you know you'll die? Unless you have some fairly intense psychic powers ( and you believe in reincarnation), you won't remember dying, and even then your memories will bear other interpretations. Most people, most of the time, only know that they will die because they've learned it, usually during childhood. "Does everybody have to die, Daddy?" "Do dogs go to Heaven?" "Can people in Heaven see us?" "Is it a long time?" As children grow up, they experience the deaths of those around them, and learn that people actually die. Their religions tell them about life after death, making sure that kids think about it in their own terms as soon as they learn to think about it at all. Tales are told. Death is heaven and hell. Death is rebirth. Death is where the ancestors are. Death is a lush spirit world. Death is being in the arms of God. Cultures and religions have co-opted death, turning it into a story written by living cultures, for living people. Near-Death studies have found that experiences very much like traditional afterlife stories (or something like them) can actually be found in near-death accounts. This can explain the source of these stories, but this chapter is more about why humans have the need for these stories in the first place. When we first appeared as a species, our brains expanded in two important areas. The frontal lobes, which have to do with planning, anticipating things, and projecting into the future, and the temporal lobes, which have to do with memory. Both of these large areas have many other functions, but these two stand out when we are talking about understanding death. The temporal lobes expanded, and now included language comprehension areas, and the frontal lobes grew to include language production areas. The human sense of self changed to include a component that dealt in language, so that we began to take words personally, and to feel our selves' affected by what others say to us. Our minds were re-shaped with a new 'top priority': talking to others. Each person had to fit the way they related to others into a vocabulary they shared with others. The process of actually identifying with others was probably enhanced as well. We were more able to assume that our experiences were like those of others, and that their experiences were also like ours, because they used the same words and gestures we do. This must have enhanced our capacity for bonding, but it also introduced a defense mechanism that helps people to feel that anyone who seems to experience the world differently than themselves is somehow less than fully human. Other nations were thought of as though they were other species. We began to judge others. Not just dislike them, but actually entertain thoughts that they shouldn't be the way they are. 
At this point in our evolutionary history, a fundamentally new experience became possible. A person could look at a dead body, remember the experience, think about it, personalize the whole thing, and conclude that the same thing is going to happen to them. Language skills are utilized, and the sentence appears in the mind: "I will die." The conclusion is reached without the person having any first hand experience at all. The concept is very threatening. Our new cognitive skills would allow a lot more imagination than before, and it would have been very adaptive for us to use this skill to imagine as many way of dying as possible. The more ways of dying we can imagine, the more ways we can avoid. But death anxiety is very stressful. If we were aware of our death at all times, we would be at risk for several psychoses, like the ones that follow the development of the normally fear-laden temporal lobe seizures. (2, 3). Persinger (4) has theorized that we developed a mechanism that shuts death anxiety off. Spiritual experience. You have to know something about how the brain creates emotion before you can understand how this works. It starts with a structure called the amygdala. Actually, there's two of them, one on each side of the brain. The one on the right is specialized for negative feelings, especially fear and sadness. The one on the left manages positive feelings. There's an idea that keeps re-appearing in my work. That when a negative emotion becomes intense enough, it can actually create bliss. Here's how it works: As a negative emotion, especially fear, deepens, it involves more and more of the right amygdala. The source of the emotion stimulates it from within. When a certain point is reached, it 'overloads', and the activity spills into the amygdala on the left. All of a sudden, the left amygdala, which has been operating at a low level, is filled with activity, and the person is filled with bliss, joy, ecstasy, and a sense of meaningfulness. The point where this happens is very deep in the experience of fear or sadness. My interpretation of these events is that they're a rare example of a state of consciousness that's usually a part of the death process (5). Because these states are ordinarily reserved for the end of life, they might manifest only when a person only feels that their lives, their self' is threatened with extinction. When that threshold is crossed, a spiritual experience can occur, one that takes a part of the death process, and uses it to end a painful episode. Many near-death phenomena have appeared at times that a person only thought they were about to die, even when they weren't in any danger at all, as though the belief that one is about to die is as much of a trigger as death itself. There are many recorded accounts of near-death-like experiences happening because of threats to the sense of self without any threats to the person's life. Here's one such case (4) : "When Fred died, the world collapsed around me. I could not eat or sleep, everything seemed to lose its color-food was tasteless, I couldn't swallow because of this lump in my throat; it would not go away no matter how much I cried. My mental pain would come and go like chill waves. Sometimes I would forget for a few minutes and think it was all a bad dream. Other times, the reality of it would hit me like a cold shower. The fourth night after he died, I lay in bed, trying to piece my life together. I lay there for hours. Suddenly, I felt Fred's presence beside me in the bed. 
I looked over and saw him standing beside me. He was dressed in his old work clothes and had a big smile on his face. He said "Don't worry Maud. I'm in heaven now, God has let me come to you. All our friends are here too. Its all true, what we believed about God ...this is only a temporary separation." I went to sleep and didn't wake for hours. The next day I felt good, the sun was shining again; there was meaning to my life." Maud probably identified herself as Fred's wife. When he died, she died. Her sense of herself, that is. Her brains activity can be guessed at: when her grief passed a certain point, her left amygdala was triggered, and its positive contribution to her sense of self was restored. When objects identified with the self are lost, so is the self. In fact, one study found that the most prominent predisposing factor in sensing the presence of a deceased spouse was that their wife or husband had died unexpectedly (8). Without time to prepare themselves mentally, they weren't able to resist their own grief, and the threshold was passed. The human sense of self is partly a social thing. If a person experiences too much rejection at the hands of others, as in child abuse, their self-esteem can be lowered below a certain point, also triggering this process. There are several studies on child abuse that support this idea (6). As the cycle of abuse proceeds, dissociative states that first appear as ways to escape from the abuse can become permanent options, traits.' (7) The following case (author's collection-paraphrased) illustrates the point: "As a child, I severely abused in every way a child can be. I grew up never having even one toy. I would be locked in a closet for days at a time. I spent my whole childhood wanting to die. He (her father) wouldn't give me any food or water. I lost all sense of time in there. I felt myself falling into a space I came to think of as the pit of despair'. Eventually, I came to the bottom. There, I found angels waiting for me. They held me and comforted me and told me how my I was being prepared for something important that would come later on in my life. They promised me that they would never leave me and that they would always protect me. Now, when I do massage, these same angels appear and give me spirit guidance. They helped me to become a healer, and I can't imagine anything I'd rather be. I can't say that I'm glad I was abused, but having been abused is a part of my life, and I like my life. Now." There are several points that both these stories have in common with near-death experiences, such as the angels and meeting a dead person. It seems as though they both felt they were dying, and used the mechanism for healing these feelings that appears in the death process. A lot of my work is devote to exploring the idea that mystic experiences are instances of the death process occurring outside of their normal context. I don't see these as pathological instances of the experiences. It seems more likely that our continually evolving minds found additional applications of the new neural mechanisms associated with the death process, and that is the source of human spirituality. In another article ([3]This one), we have looked at the similarities between romantic love and religious devotion. Love, however its defined, has a powerful ability to lessen (attenuate) death anxiety. 
The death-process, as revealed in near-death experiences, seems to return, over and over, to the experience of love and being loved, of being reunited with loved ones, and of looking at life in terms of how much love we create for ourselves while we live. Both love and the experience of religious bliss lessen the anxiety that threats to the sense of self create. Love from others heals wounds to the self, which is partly a social thing. Religious bliss and ecstasy heal threats to the more privately felt self. The death process begins with the fear and resistance that help us try to survive, but once death begins, and survival becomes impossible, the fear that expecting death creates is replaced by a feeling that's every bit as good as the fear of death was bad.

Our species is the only one that can hold the thought in our minds: "I will die." Ours is the only one that needs a way to cope with it, and its long-term effects may be among the most important factors that shaped our cultures. As children, we might run to our mommies when we hear things that hurt our feelings. As adults, we run to God when our feelings are hurt. The fact of death is understood as an idea first, so its natural salve is more ideas: ideas like the ones that religion uses to assure people that death, somehow, does not really exist. Any idea will do, so long as it makes it possible to face death without anxiety.

A story comes to mind. When I was in India, I was walking in a main street in Jaipur. I came on a small crowd gathered on one side of the street, and went up to see what it was about. When I got in, I saw that they were staring at a very pale man lying on the ground, wearing only a loincloth. He was covered with a loosely woven cheesecloth. Next to him was a battered aluminum bowl with some money in it. I noticed a cheerful-looking man standing next to him wearing the khaki shirt and brass insignia of a government worker. I went up to him and asked: "What are you doing?" You can ask that sort of thing in India. He said: "I am collecting money for this poor fellow. I work in municipalities office." I asked: "What's the matter with him?" He answered: "Nothing is matter with him. He is just dead. We need money to burn his body. You put money. Very good for you." This man wore the Hindu tilak that advertised his beliefs about death and dying. I looked at the man, now knowing that it was a corpse. As if he could read my mind, the agent said: "Very soon he is child again. Nothing worry." I put down 20 rupees. The man's comments illustrate the ease with which the mind can remove the threat of death, and turn it into something trivial. "He is just dead. ... Nothing worry."

Just a few hours ago, I was in the grocery store. While I waited in line, an old man got in the line behind me, and quite out of the blue, he said, "I'm 83 years old. Last week, I knew I was going to die soon, but I don't care. I tried to tell my son, but he didn't wanna listen." I told him, thinking of near-death experiences, that I'd heard that the afterlife was usually a pretty good deal. He said, "Oh... I don't believe in any of that horseshit." The man seemed very happy. I asked him about it, and he said, "Oh... I smile all the time now."

Feeling the approach of death to be certain, whether real or imagined, natural or not, expected or not, can initiate experiences that commonly occur at death.
Threats to the more subtle sense of self are handled differently, in more social ways, like receiving comfort from others, but the brain structures involved seem to be much the same. We use language to amplify the fear of death, and that creates a deeper need to avoid it than any other species has. Our thoughts of death gave us a reason to want to be immortal. Our death-process, which continues our consciousness after death for a time, makes it possible for us to feel that it's true while we're still alive, and to feel that we are safe from dying. References (1) Persinger, M.A., "Death Anxiety as a Semantic Conditioned Suppression Paradigm," Perceptual and Motor Skills, 1985, 60, 827-830. (2) Slater, E., & Beard, E.W., "Schizophrenia-like Psychoses of Epilepsy," British Journal of Psychiatry, 1963, 109, 95-150. (3) Umbricht, Daniel, et al., "Postictal and Chronic Psychoses in Patients with Temporal Lobe Epilepsy," American Journal of Psychiatry, 1995, 152:2, 224-231. (4) Persinger, Michael A., "Neuropsychological Bases of God-Beliefs," Praeger, 1987. (5) Murphy, Todd, "The Structure and Function of Near-Death Experiences: An Algorithmic Reincarnation Hypothesis," Journal of Near-Death Studies (in press). (6) Hunt, Harry, et al., "Transpersonal Effects in Childhood: An Exploratory Empirical Study of Selected Adult Groups," Perceptual and Motor Skills, 1992, 75, 1135-1153. (7) Perry, Bruce, et al., "Childhood Trauma, the Neurobiology of Adaptation, and 'Use-dependent' Development of the Brain: How 'States' Become 'Traits'," Infant Mental Health Journal, Vol. 16, No. 4, Winter 1995, 271-291. (8) Simon-Buller, Sherry, "Correlates of Sensing the Presence of a Deceased Spouse," Omega, Vol. 19(1), 1988-89.
From checker at panix.com Mon Jul 4 01:28:26 2005 From: checker at panix.com (Premise Checker) Date: Sun, 3 Jul 2005 21:28:26 -0400 (EDT) Subject: [Paleopsych] NS: Entering a dark age of innovation Message-ID: Entering a dark age of innovation http://www.newscientist.com/article.ns?id=dn7616&print=true * 14:00 02 July 2005 * Robert Adler SURFING the web and making free internet phone calls on your Wi-Fi laptop, listening to your iPod on the way home, it often seems that, technologically speaking, we are enjoying a golden age. Human inventiveness is so finely honed, and the globalised technology industries so productive, that there appears to be an invention to cater for every modern whim. But according to a new analysis, this view couldn't be more wrong: far from being in technological nirvana, we are fast approaching a new dark age. That, at least, is the conclusion of Jonathan Huebner, a physicist working at the Pentagon's Naval Air Warfare Center in China Lake, California. He says the rate of technological innovation reached a peak a century ago and has been declining ever since. And like the lookout on the Titanic who spotted the fateful iceberg, Huebner sees the end of innovation looming dead ahead. His study will be published in Technological Forecasting and Social Change. It's an unfashionable view. Most futurologists say technology is developing at exponential rates. Moore's law, for example, foresaw chip densities (for which read speed and memory capacity) doubling every 18 months. And the chip makers have lived up to its predictions. Building on this, the less well-known Kurzweil's law says that these faster, smarter chips are leading to even faster growth in the power of computers. Developments in genome sequencing and nanoscale machinery are racing ahead too, and internet connectivity and telecommunications bandwidth are growing even faster than computer power, catalysing still further waves of innovation. But Huebner is confident of his facts. He has long been struck by the fact that promised advances were not appearing as quickly as predicted. "I wondered if there was a reason for this," he says. "Perhaps there is a limit to what technology can achieve." 
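A quick way to see what these exponential claims amount to, before Huebner's own method is described: the sketch below, in Python, works through the "doubling every 18 months" figure cited for Moore's law above. The helper name and the sample time spans are chosen only for illustration; the arithmetic is just compound doubling.

def doubling_growth(years, doubling_time=1.5):
    # Growth factor if capacity doubles every `doubling_time` years
    # (the 18-month doubling cited for Moore's law).
    return 2 ** (years / doubling_time)

print(doubling_growth(1.5))   # 2.0 - one doubling period
print(doubling_growth(10))    # ~101.6 - roughly a hundredfold per decade
print(doubling_growth(30))    # 1048576.0 - 2**20 over thirty years

Huebner's point, developed next, is that counts of innovations per person do not follow this kind of curve.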
In an effort to find out, he plotted major innovations and scientific advances over time compared to world population, using the 7200 key innovations listed in a recently published book, The History of Science and Technology (Houghton Mifflin, 2004). The results surprised him. Rather than growing exponentially, or even keeping pace with population growth, they peaked in 1873 and have been declining ever since (see Graphs). Next, he examined the number of patents granted in the US from 1790 to the present. When he plotted the number of US patents granted per decade divided by the country's population, he found the graph peaked in 1915. The period between 1873 and 1915 was certainly an innovative one. For instance, it included the major patent-producing years of America's greatest inventor, Thomas Edison (1847-1931). Edison patented more than 1000 inventions, including the incandescent bulb, electricity generation and distribution grids, movie cameras and the phonograph. Medieval future Huebner draws some stark lessons from his analysis. The global rate of innovation today, which is running at seven "important technological developments" per billion people per year, matches the rate in 1600. Despite far higher standards of education and massive R&D funding "it is more difficult now for people to develop new technology", Huebner says. Extrapolating Huebner's global innovation curve just two decades into the future, the innovation rate plummets to medieval levels. "We are approaching the 'dark ages point', when the rate of innovation is the same as it was during the Dark Ages," Huebner says. "We'll reach that in 2024." But today's much larger population means that the number of innovations per year will still be far higher than in medieval times. "I'm certainly not predicting that the dark ages will reoccur in 2024, if at all," he says. Nevertheless, the point at which an extrapolation of his global innovation curve hits zero suggests we have already made 85 per cent of the technologies that are economically feasible. But why does he think this has happened? He likens the way technologies develop to a tree. "You have the trunk and major branches, covering major fields like transportation or the generation of energy," he says. "Right now we are filling out the minor branches and twigs and leaves. The major question is, are there any major branches left to discover? My feeling is we've discovered most of the major branches on the tree of technology." But artificial intelligence expert Ray Kurzweil - who formulated the aforementioned law - thinks Huebner has got it all wrong. "He uses an arbitrary list of about 7000 events that have no basis as a measure of innovation. If one uses arbitrary measures, the results will not be meaningful." Eric Drexler, who dreamed up some of the key ideas underlying nanotechnology, agrees. "A more direct and detailed way to quantify technology history is to track various capabilities, such as speed of transport, data-channel bandwidth, cost of computation," he says. "Some have followed exponential trends, some have not." Drexler says nanotechnology alone will smash the barriers Huebner foresees, never mind other branches of technology. It's only a matter of time, he says, before nanoengineers will surpass what cells do, making possible atom-by-atom desktop manufacturing. "Although this result will require many years of research and development, no physical or economic obstacle blocks its achievement," he says. 
"The resulting advances seem well above the curve that Dr Huebner projects." At the Acceleration Studies Foundation, a non-profit think tank in San Pedro, California, John Smart examines why technological change is progressing so fast. Looking at the growth of nanotechnology and artificial intelligence, Smart agrees with Kurzweil that we are rocketing toward a technological "singularity" - a point sometime between 2040 and 2080 where change is so blindingly fast that we just can't predict where it will go. Smart also accepts Huebner's findings, but with a reservation. Innovation may seem to be slowing even as its real pace accelerates, he says, because it's slipping from human hands and so fading from human view. More and more, he says, progress takes place "under the hood" in the form of abstract computing processes. Huebner's analysis misses this entirely. Take a modern car. "Think of the amount of computation - design, supply chain and process automation - that went into building it," Smart says. "Computations have become so incremental and abstract that we no longer see them as innovations. People are heading for a comfortable cocoon where the machines are doing the work and the innovating," he says. "But we're not measuring that very well." Huebner disagrees. "It doesn't matter if it is humans or machines that are the source of innovation. If it isn't noticeable to the people who chronicle technological history then it is probably a minor event." A middle path between Huebner's warning of an imminent end to tech progress, and Kurzweil and Smart's equally imminent encounter with a silicon singularity, has been staked out by Ted Modis, a Swiss physicist and futurologist. Modis agrees with Huebner that an exponential rate of change cannot be sustained and his findings, like Huebner's, suggest that technological change will not increase forever. But rather than expecting innovation to plummet, Modis foresees a long, slow decline that mirrors technology's climb. At the peak "I see the world being presently at the peak of its rate of change and that there is ahead of us as much change as there is behind us," Modis says. "I don't subscribe to the continually exponential rate of growth, nor to an imminent drying up of innovation." So who is right? The high-tech gurus who predict exponentially increasing change up to and through a blinding event horizon? Huebner, who foresees a looming collision with technology's limits? Or Modis, who expects a long, slow decline? The impasse has parallels with cosmology during much of the 20th century, when theorists debated endlessly whether the universe would keep expanding, creep toward a steady state, or collapse. It took new and better measurements to break the log jam, leading to the surprising discovery that the rate of expansion is actually accelerating. Perhaps it is significant that all the mutually exclusive techno-projections focus on exponential technological growth. Innovation theorist Ilkka Tuomi at the Institute for Prospective Technological Studies in Seville, Spain, says: "Exponential growth is very uncommon in the real world. It usually ends when it starts to matter." And it looks like it is starting to matter. Related Articles Taking a trip down memory-chip lane http://www.newscientist.com/article.ns?id=dn7536 19 June 2005 Whatever happened to machines that think? 
http://www.newscientist.com/article.ns?id=mg18624961.700 23 April 2005 Developing countries work around the 'technology divide' http://www.newscientist.com/article.ns?id=mg18524826.300 15 January 2005 Weblinks Naval Air Warfare Centre http://www.nawcad.navy.mil/ Technological Forecasting and Social Change http://www.sciencedirect.com/science/journal/00401625 Ray Kurzweil http://www.kurzweilai.net/ Eric Drexler http://www.foresight.org/FI/Drexler.html Acceleration Studies Foundation http://www.accelerating.org/ Institute for Prospective Technological Studies http://www.jrc.es/home/index.htm E-mail me if you have problems getting the referenced articles. From shovland at mindspring.com Mon Jul 4 21:35:11 2005 From: shovland at mindspring.com (Steve Hovland) Date: Mon, 4 Jul 2005 14:35:11 -0700 Subject: [Paleopsych] Global Brain on Overdrive Message-ID: <01C580A5.967004D0.shovland@mindspring.com> These days when the President speaks we start talking back almost as soon as he starts. Emails, phone calls, faxes, and blogs begin to communicate our response to those who are listening on the other end, and these days they are listening. Then they adjust, if they can. Steve Hovland www.stevehovland.net From checker at panix.com Wed Jul 6 00:28:49 2005 From: checker at panix.com (Premise Checker) Date: Tue, 5 Jul 2005 20:28:49 -0400 (EDT) Subject: [Paleopsych] Guardian: Where belief is born Message-ID: Where belief is born http://www.guardian.co.uk/print/0,3858,5226946-111414,00.html Scientists have begun to look in a different way at how the brain creates the convictions that mould our relationships and inform our behaviour. Alok Jha reports Thursday June 30, 2005 Belief can make people do the strangest things. At one level, it provides a moral framework, sets preferences and steers relationships. On another, it can be devastating. Belief can manifest itself as prejudice or persuade someone to blow up themselves and others in the name of a political cause. "Belief has been a most powerful component of human nature that has somewhat been neglected," says Peter Halligan, a psychologist at Cardiff University. "But it has been capitalised on by marketing agents, politics and religion for the best part of two millennia." That is changing. Once the preserve of philosophers alone, belief is quickly becoming the subject of choice for many psychologists and neuroscientists. Their goal is to create a neurological model of how beliefs are formed, how they affect people and what can manipulate them. And the latest steps in the research might just help to understand a little more about why the world is so fraught with political and social tension. Matthew Lieberman, a psychologist at the University of California, recently showed how beliefs help people's brains categorise others and view objects as good or bad, largely unconsciously. He demonstrated that beliefs (in this case prejudice or fear) are most likely to be learned from the prevailing culture. When Lieberman showed a group of people photographs of expressionless black faces, he was surprised to find that the amygdala - the brain's panic button - was triggered in almost two-thirds of cases. There was no difference in the response between black and white people. The amygdala is responsible for the body's fight or flight response, setting off a chain of biological changes that prepare the body to respond to danger well before the brain is conscious of any threat. 
Lieberman suggests that people are likely to pick up on stereotypes, regardless of whether their family or community agrees with them. The work, published last month in Nature Neuroscience, is the latest in a rapidly growing field of research called "social neuroscience", a wide arena which draws together psychologists, neuroscientists and anthropologists all studying the neural basis for the social interaction between humans. Traditionally, cognitive neuroscientists focused on scanning the brains of people doing specific tasks such as eating or listening to music, while social psychologists and social scientists concentrated on groups of people and the interactions between them. To understand how the brain makes sense of the world, it was inevitable that these two groups would have to get together. "In the West, most of our physical needs are provided for. We have a level of luxury and civilisation that is pretty much unparalleled," says Kathleen Taylor, a neuroscientist at Oxford University. "That leaves us with a lot more leisure and more space in our heads for thinking." Beliefs and ideas therefore become our currency, says Taylor. Society is no longer a question of simple survival; it is about choice of companions and views, pressures, ideas, options and preferences. "It is quite an exciting development but for people outside the field, a very obvious one," says Halligan. Understanding belief is not a trivial task, even for the seemingly simplest of human interactions. Take a conversation between two people. When one talks, the other's brain is processing information through their auditory system at a phenomenal rate. That person's beliefs act as filters for the deluge of sensory information and guide the brain's response. Lieberman's recent work echoed parts of earlier research by Joel Winston of the University of London's Wellcome Department of Imaging Neuroscience. Winston found that when he presented people with pictures of faces and asked them to rate the trustworthiness of each, the amygdalas showed a greater response to pictures of people who were specifically chosen to represent untrustworthiness. And it did not matter what each person actually said about the pictured faces. "Even people who believe to their core that they do not have prejudices may still have negative associations that are not conscious," says Lieberman. Beliefs also provide stability. When a new piece of sensory information comes in, it is assessed against these knowledge units before the brain works out whether or not it should be incorporated. People do it when they test the credibility of a politician or hear about a paranormal event. Physically speaking, then, how does a belief exist in the brain? "My own position is to think of beliefs and memories as very similar," says Taylor. Memories are formed in the brain as networks of neurons that fire when stimulated by an event. The more times the network is employed, the more it fires and the stronger the memory becomes. Halligan says that belief takes the concept of memory a step further. "A belief is a mental architecture of how we interpret the world," he says. "We have lots of fluid things moving by - perceptions and so forth - but at the level of who our friends are and so on, those things are consolidated in crystallised knowledge units. If we did not have those, every time we woke up, how would we know who we are?" These knowledge units help to assess threats - via the amygdala - based on experience. 
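The picture sketched above - memories and beliefs as networks of neurons whose connections strengthen each time they fire together - is, in spirit, Hebbian learning. The following is a minimal, purely illustrative Python sketch; the toy network, the update rule's constants and all the names are textbook idealizations, not anything taken from the studies described here.

# Toy Hebbian update: units that fire together strengthen their connection,
# so a frequently rehearsed association (or belief) becomes easier to trigger.
import numpy as np

n = 4                              # a tiny "network" of 4 units
w = np.zeros((n, n))               # connection strengths, initially zero
pattern = np.array([1, 1, 0, 0])   # an event that co-activates units 0 and 1

learning_rate = 0.1
for _ in range(20):                # the event is rehearsed 20 times
    w += learning_rate * np.outer(pattern, pattern)
np.fill_diagonal(w, 0)             # no self-connections

# Stimulate unit 0 alone; the learned link pulls unit 1 along.
cue = np.array([1, 0, 0, 0])
response = w @ cue
print(response)                    # [0. 2. 0. 0.] - unit 1 responds, units 2 and 3 do not

After repeated co-activation, stimulating one part of the pattern activates the rest, which is the sense in which a well-rehearsed association becomes easier to trigger than a rarely used one.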
Ralph Adolphs, a neurologist at the University of Iowa, found that if the amygdala was damaged, the ability of a person to recognise expressions of fear was impaired. A separate study by Adolphs with Simon Baron-Cohen at Cambridge University showed that amygdala damage had a bigger negative impact on the brain's ability to recognise social emotions, while more basic emotions seemed unaffected. This work on the amygdala shows it is a key part of the threat-assessment response and, in no small part, in the formation of beliefs. Damage to this alarm bell - and subsequent inability to judge when a situation might be dangerous - can be life-threatening. In hunter-gatherer days, beliefs may have been fundamental to human survival. Neuroscientists have long looked at brains that do not function properly to understand how healthy ones work. Researchers of belief formation do the same thing, albeit with a twist. "You look at people who have delusions," says Halligan. "The assumption is that a delusion is a false belief. That is saying that the content of it is wrong, but it still has the construct of a belief." In people suffering from prosopagnosia, for example, parts of the brain are damaged so that the person can no longer recognise faces. In the Cotard delusion, people believe they are dead. Fregoli delusion is the belief that the sufferer is constantly being followed around by people in disguise. Capgras' delusion, named after its discoverer, the French psychiatrist Jean Marie Joseph Capgras, is a belief that someone emotionally close has been replaced by an identical impostor. Until recently, these conditions were regarded as psychiatric problems. But closer study reveals that, in the case of Capgras' delusion for example, a significant proportion of sufferers had lesions in their brain, typically in the right hemisphere. "There are studies indicating that some people who have suffered brain damage retain some of their religious or political beliefs," says Halligan. "That's interesting because whatever beliefs are, they must be held in memory." Another route to understanding how beliefs form is to look at how they can be manipulated. In her book on the history of brainwashing, Taylor describes how everyone from the Chinese thought reform camps of the last century to religious cults have used systematic methods to persuade people to change their ideas, sometimes radically. The first step is to isolate a person and control what information they receive. Their former beliefs need to be challenged by creating uncertainty. New messages need to be repeated endlessly. And the whole thing needs to be done in a pressured, emotional environment. "Beliefs are mental objects in the sense that they are embedded in the brain," says Taylor. "If you challenge them by contradiction, or just by cutting them off from the stimuli that make you think about them, then they are going to weaken slightly. If that is combined with very strong reinforcement of new beliefs, then you're going to get a shift in emphasis from one to the other." The mechanism Taylor describes is similar to the way the brain learns normally. In brainwashing though, the new beliefs are inserted through a much more intensified version of that process. This manipulation of belief happens every day. Politics is a fertile arena, especially in times of anxiety. "Stress affects the brain such that it makes people more likely to fall back on things they know well - stereotypes and simple ways of thinking," says Taylor. 
"It is very easy to want to do that when everything you hold dear is being challenged. In a sense, it was after 9/11." The stress of the terror attacks on the US in 2001 changed the way many Americans viewed the world, and Taylor argues that it left the population open to tricks of belief manipulation. A recent survey, for example, found that more than half of Americans thought Iraqis were involved in the attacks, despite the fact that nobody had come out and said it. This method of association uses the brain against itself. If an event stimulates two sets of neurons, then the links between them get stronger. If one of them activates, it is more likely that the second set will also fire. In the real world, those two memories may have little to do with each other, but in the brain, they get associated. Taylor cites an example from a recent manifesto by the British National Party, which argues that asylum seekers have been dumped on Britain and that they should be made to clear up rubbish from the streets. "What they are trying to do is to link the notion of asylum seekers with all the negative emotions you get from reading about garbage, [but] they are not actually coming out and saying asylum seekers are garbage," she says. The 9/11 attacks highlight another extreme in the power of beliefs. "Belief could drive people to agree to premeditate something like that in the full knowledge that they would all die," says Halligan of the hijacker pilots. It is unlikely that beliefs as wide-ranging as justice, religion, prejudice or politics are simply waiting to be found in the brain as discrete networks of neurons, each encoding for something different. "There's probably a whole combination of things that go together," says Halligan. And depending on the level of significance of a belief, there could be several networks at play. Someone with strong religious beliefs, for example, might find that they are more emotionally drawn into certain discussions because they have a large number of neural networks feeding into that belief. "If you happen to have a predisposition, racism for example, then it may be that you see things in a certain way and you will explain it in a certain way," says Halligan. He argues that the reductionist approach of social neuroscience will alter the way people study society. "If you are brain scanning, what are the implications for privacy in terms of knowing another's thoughts? And being able to use those, as some governments are implying, in terms of being able to detect terrorists and things like that," he says. "If you move down the line in terms of potential uses for these things, you have potential uses for education and for treatments being used as cognitive enhancers." So far, social neuroscience has provided more questions than answers. Ralph Adolphs of the University of Iowa looked to the future in a review paper for Nature. "How can causal networks explain the many correlations between brain and behaviour that we are discovering? Can large-scale social behaviour, as studied by political science and economics, be understood by studying social cognition in individual subjects? Finally, what power will insights from cognitive neuroscience give us to influence social behaviour, and hence society? And to what extent would such pursuit be morally defensible?" The answers to those questions may well shape people's understanding of what it really means to believe. 
From checker at panix.com Wed Jul 6 00:29:03 2005 From: checker at panix.com (Premise Checker) Date: Tue, 5 Jul 2005 20:29:03 -0400 (EDT) Subject: [Paleopsych] NYT Op-Ed: The Big Bang Message-ID: The Big Bang New York Times Op-Ed, 5.7.3 http://www.nytimes.com/2005/07/03/opinion/03grinspoon.html By DAVID GRINSPOON Boulder, Colo. THE future wasn't supposed to be like this. Not for space-age kids like me, growing up enchanted by the Apollo Moon landings and Arthur C. Clarke's "2001: A Space Odyssey." By now we should be living on the Moon and departing in marvelous ships for the outer solar system, while new technologies gradually make life back on Earth more bountiful and harmonious. Instead 2001 and the years since have been marked by terrorism and conflict. Starvation and environmental destruction have not been eradicated or even stemmed. We have, for now, lost the ability to send people to the Moon, let alone Jupiter and beyond, and, for many of us, the future is not as hopeful a place as it once seemed. Yet tomorrow, we should see one tiny part of Mr. Clarke's grand vision realized - through NASA's Deep Impact mission (an unfortunate echo of a less visionary film). The part of Mr. Clarke's vision I refer to is a small scene in "2001," halfway through the book, that didn't even make the movie. The Saturn-bound scientists, having rounded Mars and now approaching Jupiter, pass close by a small asteroid. They greet this rocky celestial nomad by shooting it with a slug of metal that explodes into the asteroid, leaving a new crater and a brief puff of vapor that soon vanishes into the void. This Sunday night, if all goes as planned, NASA will finally pull off this same stunt, firing a three-foot-wide 820-pound copper barrel directly into the path of a nine-mile-long, potato-shaped comet by the name of Tempel 1. The two will collide at 23,000 miles an hour while a mother craft photographs the action from what one hopes will be a safe distance, and sends the pictures home to us at the speed of light. Why? So we can watch what happens. We stand to learn a lot about impact cratering - one of the major forces that has shaped all the worlds of our solar system. We will also have the chance to peer into the newly formed crater and observe the ice and vapor blasted back into space, thereby learning what lies within this frigid little world. When I describe this mission to people outside the community of space scientists and enthusiasts, it receives mixed reactions. Some feel that this is a fine hello to a new world, blasting away at it just to see what happens, like greeting a stranger by shooting first and asking questions later. A lawsuit has even been filed in a Russian court by a 45-year-old mother of two in Moscow, demanding that the mission be called off on the basis of its environmental and spiritual, well, impact. This legal action seems even more certainly doomed than the spacecraft itself (which may miss its target). Yet perhaps it does epitomize the concerns of many who wonder why we would do such a thing. Aren't we going too far to satisfy our curiosity here, acting like cruel, senseless boys blowing up frogs for the fun of it? Um, no. This explosion is not going to hurt anyone or anything. Here's an analogy. You would be justifiably concerned if, in order to learn about shorelines, some scientist decided to dig up your favorite beach. But you wouldn't object if she took a few grains of sand to study. 
There are something like one trillion comets larger than one mile in diameter, several hundred for each human on Earth, in this solar system alone, and countless more in the wider universe. So even if we destroyed Tempel 1 entirely, we would not be making a dent in the cometary sandbox. What's more, this mission will not demolish the comet, alter its course, or otherwise affect the cosmic scheme. Comets collide with other celestial objects all the time. The only thing extraordinary about this particular impact is that we engineered it. Deep Impact will simply make one more small hole in an object that, like all planets large and small, has been repeatedly dinged by colliding space debris since our solar system's origin 4.6 billion years ago. It is those dusky beginnings that this experiment can illuminate. Beneath the dirty ice crust of a comet like Tempel 1 is material that has been in deep freeze since the birth of our solar system. Mixed into this timeless frozen treat are organic molecules like those that seeded the young Earth with raw materials for making life. This ice may hold some buried chapters of the story of our origin. As H. G. Wells, the Arthur C. Clarke of the paleoindustrial age, once wrote: "There is no way back into the past. The choice is the Universe - or nothing." It has been said that the dinosaurs ultimately got snuffed because they lacked a space program. Sooner or later a killer comet will again cross Earth's path, threatening all life. Only next time, armed with knowledge about comets and space engineering, life on Earth will have a fighting chance. Someday, some of our descendants may decide to declare independence from this planet, seeking a more perfect union with the cosmos from which we spring. If so, then our current, tentative efforts in space may carry evolutionary significance equal to life's first forays from the oceans onto land. Given the recent reckless talk from the Department of Defense about introducing offensive weapons into space, Deep Impact will probably be seen in some quarters as more evidence of American aggression. In reality, it is the opposite - a peaceful gift from our nation to the world. Deep Impact is pure exploration. In this sense, we have evolved. Unlike Apollo, which was meant in part as a cold war threat to the Russians, Deep Impact really is for all humankind: it could further our understanding of where we all came from. Of course, explosions are cool (when they aren't hurting anyone). They're also often quite beautiful. Why, after all, do we love to watch fireworks? The flash of Deep Impact exploding into Tempel 1 may be visible from Earth through telescopes (and even, just possibly, to the naked eye, but not from the Eastern United States) at 1:52 a.m. Eastern time on July 4, above the bright star Spica, and to the left of Jupiter. Public events, showing live images from the world's best telescopes and, 10 minutes later, the first pictures from Deep Impact itself, are planned at many science museums. If successful, first-ever images of the approaching comet, the brilliant impact, the new crater and the receding icy nucleus will be seen soon thereafter. The scientific analysis that reveals the true meaning will be slower in coming, but once it arrives, the knowledge will be here as long as we are. David Grinspoon, a planetary scientist at the Southwest Research Institute, is the author of "Lonely Planets: The Natural Philosophy of Alien Life." 
From checker at panix.com Wed Jul 6 00:29:09 2005 From: checker at panix.com (Premise Checker) Date: Tue, 5 Jul 2005 20:29:09 -0400 (EDT) Subject: [Paleopsych] NYT: O.K., Japan Isn't Taking Over the World. But China... Message-ID: O.K., Japan Isn't Taking Over the World. But China... New York Times, 5.7.3 http://www.nytimes.com/2005/07/03/weekinreview/03port.html By [3]EDUARDO PORTER NOT even 20 years have passed since the apparently unstoppable Japanese economic juggernaut struck fear in the hearts of Americans, and now China has emerged to be seen as the new economic menace threatening the nation's vital strategic interests. America's boom in the 1990's, coupled with Japan's decline into an economic quagmire through much of the decade, quelled most fears that the Japanese were going to eat our economic lunch. But now China has set out to snap up everything from Unocal to Maytag, not to mention a steady diet of United States Treasury bonds. And many of the leading voices who worried about Japan in the 1980's are warning that China presents a much bigger and more complex conundrum. "In retrospect I probably did overstate the nature of the Japanese challenge," said Chalmers Johnson, a prominent expert on Asia who in the early 1990's argued that Japan was "the only nation with real leverage over the United States." But, he added, "China is several orders of magnitude different from Japan." China is not only much bigger and more populous. Its economy is likely to become the largest in the world at some point in the next 50 years. As China keeps growing at a rubber-burning pace, competition with the United States over energy resources alone could cause substantial tension. Americans' fear of Japan's ascendancy in the 1980's was inspired by economics and pride. The growing bilateral trade deficit, as Japanese companies acquired leadership in industries that were once dominated by American businesses, cast a pall on America's self-confidence. The Japanese purchase of high-profile American assets, whether Columbia Pictures or the Pebble Beach golf course, just rubbed it in. Relations with China have a more complex geopolitical dimension. Unlike Japan, China is likely to become a military power. And it is not an unconditional ally. From Taiwan to the Middle East, the strategic interests of China diverge from America's. As it throws its weight around, to secure supplies of energy, say, or to avail itself of strategic technology, China can cause American policy makers no end of discomfort. For instance, if the United States government were to block the China National Offshore Oil Corporation from acquiring Unocal, it might just push China into cutting energy deals that the United States government would rather it did not make with countries like Russia or Iran. "Clearly the relationship between the United States and China is much more ambivalent than that between the United States and Japan," said Clyde V. Prestowitz, a trade negotiator during the Reagan administration who in the 1980's warned that Japan's ascent could eclipse the United States' power and compromise its prosperity. Those differences might warrant a more careful review of deals like the attempted purchase of Unocal by China's state run oil company. Robert B. 
Reich, the former secretary of labor in the Clinton administration who as a Harvard professor in the early 1990's argued that big Japanese investments in the United States were not threatening, says today that a Chinese acquisition of a potentially strategic asset like Unocal could be problematic. "In economic terms there is no reason to block Chinese ownership of U.S. assets just as there was no reason to block Japan from buying U.S. assets in the 1980's," Mr. Reich said. "But in political terms in 2005 there may be a reason to take seriously the downside of China owning Unocal." But even those most concerned about China's rise up the economic, political and military ladder recognize another, perhaps more important difference between China and 1980's Japan. The United States has a vested interest in China's success. For starters, whereas Japan's success at the time was inevitably seen as America's failure, today American businesses are all rooting for China to succeed. Because they own a lot of it. "It was virtually impossible for a foreigner to make an acquisition in Japan," Mr. Prestowitz said. "In China its 'y'all come.' American business is part of the China lobby, not the anti-China lobby." Some of Japan's old foes view America's growing interdependence with China with suspicion. "Interdependence means dependence," said Susan J. Tolchin, a professor of public policy at George Mason University who in 1993 was the co-author of "Selling Our Security," which argued against allowing foreign investment in American technology companies. "If we lose our economic independence we are going to lose our independence of movement on foreign policy." Yet most acknowledge that China's transformation from a struggling Communist state to a prosperous nation with a growing stake in the global system of market economies is in America's best interest. For example, if the Chinese central bank owns oodles of United States Treasury bonds, it has a reason not to want to destabilize the bond market. "I cannot see how a rich bourgeois China could be not in our interest," Mr. Johnson said. "If we're interested in our security we should establish collaborative ties with China right now." The difference in emphasis appears in a shift in Mr. Prestowitz's writing about Asia. In 1988, he published "Trading Places: How We Are Giving Our Future to Japan and How to Reclaim It." This year, he published "Three Billion New Capitalists: The Great Shift of Wealth and Power to the East." Today, Mr. Prestowitz said, the biggest risk is not that China will succeed in rising to become an economic superpower. The biggest risk is that it will fail. From checker at panix.com Wed Jul 6 00:29:18 2005 From: checker at panix.com (Premise Checker) Date: Tue, 5 Jul 2005 20:29:18 -0400 (EDT) Subject: [Paleopsych] NYT: Blockbuster Drugs Are So Last Century Message-ID: Blockbuster Drugs Are So Last Century New York Times, 5.7.3 http://www.nytimes.com/2005/07/03/business/yourmoney/03drug.html By [3]ALEX BERENSON INDIANAPOLIS DRUG companies do an awful job of finding new medicines. They rely too much on billion-dollar blockbuster drugs that are both overmarketed and overprescribed. And they have been too slow to disclose side effects of popular medicines. Typical complaints from drug industry critics, right? Well, yes. Only this time they come from executives at Eli Lilly, the sixth-largest American drug maker and the company that invented Prozac. 
From this placid Midwestern city, well removed from the Boston-to-Washington corridor that is the core of the pharmaceutical industry, Lilly is ambitiously rethinking the way drugs are discovered and sold. In a speech to shareholders in April, Sidney Taurel, Lilly's chief executive, presented the company's new strategy in a pithy phrase: "the right dose of the right drug to the right patient at the right time." In other words, Lilly sees its future not in blockbuster medicines like Prozac that are meant for tens of millions of patients, but rather in drugs that are aimed at smaller groups and can be developed more quickly and cheaply, possibly with fewer side effects. There is no guarantee, of course, that Lilly will succeed. And some Wall Street analysts complain about the recent track record of the company, saying that it has habitually overpromised the potential of its drugs and taken one-time charges that distort its reported profits. In the last year, Lilly's stock has fallen 21 percent, while shares in the average big drug maker have been flat. Still, since late 2001, Lilly's labs have produced five truly new drugs, including treatments for osteoporosis, depression and lung cancer. The total exceeds that of many of its much-larger competitors. And at a time when the drug industry seems adrift, that Lilly has any vision at all for the future is striking. "The challenge for us as an industry, as a company, is to move more from a blockbuster model to a targeted model," Mr. Taurel said at Lilly's headquarters here recently. "We need a better value proposition than today." For five years, drug companies have struggled to bring new medicines to market. But Lilly executives say they believe that the drought is not permanent. Advances in understanding the ways that cells and genes work will soon lead to important new drugs, said Peter Johnson, executive director of corporate strategy. Moreover, Lilly expects that drug makers without breakthrough medicines that are either the first or the best in their categories will face increasing pressure from insurers to cut prices or lose coverage. If that vision is correct, the industry's winners will be companies that invest heavily in research and differentiate themselves by focusing on a few diseases instead of on building size and cutting costs through mergers, as [4]Pfizer has done. Lilly, which spends nearly 20 percent of its sales on research, compared with about 16 percent for the average drug company, may be well positioned for the future. "We do not believe that size pays off for anybody, especially size acquired in an acquisition," Mr. Taurel said. But if Lilly is wrong about the industry's direction, or if its research efforts fail, it could wind up like [5]Merck, the third-biggest American drug company, which has also adamantly opposed mergers and bet instead on its labs. After its own eight-year drought of major new drugs, Merck has had a 65 percent decline in its stock price since 2000, and its chief executive was forced out in May. Mr. Johnson acknowledges that Lilly's strategy is risky. "You can't make a discovery operation invent what you want them to invent," he said. So Lilly is seeking to improve its odds and to cut research costs by changing the way it develops drugs, said Dr. Steven M. Paul, president of the company's laboratories. Bringing a drug to market cost more than $900 million on average in 2003, compared with $230 million in 1987, according to estimates from Lilly and industry groups. 
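A rough way to see how per-drug figures like these arise: the average cost per approved drug is the money sunk into all candidates, including the failures, divided by the number that reach the market. In the Python sketch below, the decomposition and the per-candidate cost are hypothetical, chosen only to illustrate; the 8 percent success rate is the figure cited later in the article.

# Hypothetical illustration: failures dominate the cost per approved drug.
cost_per_candidate = 75e6   # assumed average spend per candidate entering human trials
p_success = 0.08            # roughly the 8 percent success rate cited in the article
cost_per_approved = cost_per_candidate / p_success
print(f"${cost_per_approved / 1e6:.0f} million per approved drug")  # -> $938 million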
But the public's willingness to accept side effects is shrinking, and some drug-safety experts and lawmakers want even larger and longer clinical trials for new drugs, increasing development costs. If nothing changes, Lilly expects that the cost of finding a single new drug may reach $2 billion by 2010, an unsustainable amount, Dr. Paul said. "We've got to do something to reduce the costs," he added. The biggest expense in drug development comes not from early-stage research, he said, but from the failure of drugs after they have left the labs and been tested in humans. A drug that has moved into first-stage human clinical trials now has only about an 8 percent chance of reaching the market. Even in late-stage trials, about half of all drugs fail, often because they do not prove better than existing treatments. To change that, Lilly is focusing its research efforts on finding biomarkers - genes or other cellular signals that will indicate which patients are most likely to respond to a given drug. Other drug makers are also searching for biomarkers, but Lilly executives are the most vocal in expressing their belief that this area of research will fundamentally change the way drugs are developed. Using biomarkers should make drugs more effective and reduce side effects, Dr. Paul said. If all goes as planned, the company will know sooner whether its drugs are working, and will develop fewer drugs that fail in clinical trials. The company may even be able to use shorter, smaller clinical trials because its drugs will demonstrate their effectiveness more quickly. To improve its chances further, Lilly has focused its research efforts on four types of diseases: diabetes, cancer, mental illness and some heart ailments. In each category, it has had a history of successful drugs. The company hopes to reduce the cost of new development to about $700 million a drug by 2010. Because Lilly now spends about $2.7 billion annually on research, that figure would imply that the company could develop as many as four new drugs a year, compared with just one a year if current trends do not change. Among the company's most promising drugs in development are ruboxistaurin, for diabetes complications; arzoxifene, for the prevention of osteoporosis and breast cancer; and enzastaurin, for brain tumors and other cancers. The flip side of Lilly's plan is that drugs it develops may be used more narrowly than current treatments. For example, the company may find that a diabetes drug works best in patients under 40 with a specific genetic marker, and enroll only those patients in its clinical trials. While doctors can legally prescribe any medicine for any reason once it is on the market, insurers would probably balk at covering the drug for diabetics over 40 or for patients without the genetic marker. "The old model was, one size fits a whole lot of people," said Mr. Johnson, Lilly's strategist. Last month, Lilly's vision of targeted therapies gained some ground - albeit at another company. The Food and Drug Administration approved BiDil, a heart drug from [6]NitroMed that is intended for use by African-Americans. The approval, based on a clinical trial that enrolled only black patients, was the first ever for a drug meant for one racial group. While race can be a crude characterization of groups, it can serve as an effective biomarker, scientists said. Lilly's road map may look appealing. But some analysts question whether the company is as different from the rest of the industry as it would like to believe. 
While it professes to see a future of narrowly marketed medicines, Lilly is more dependent than any other major drug maker on a single blockbuster drug: Zyprexa, its treatment for schizophrenia and manic depression. Zyprexa accounted for about $4.4 billion in sales last year, 30 percent of the company's total sales. And while Lilly executives say they want to avoid marketing its drugs too heavily or in anything less than a forthright way, federal prosecutors in Philadelphia are investigating its marketing practices for Zyprexa and Prozac. Last month, Lilly said it would pay $690 million to settle 8,000 lawsuits that contended that Zyprexa could cause obesity and diabetes and that the company had not properly disclosed that risk. Lilly says that it acted properly in marketing Zyprexa and that it is cooperating with the federal investigation. Still, the controversy has hurt Zyprexa sales, which fell 8 percent in the United States last year. Some of Lilly's newest drugs have been commercial disappointments. The company and analysts hoped that annual sales of Xigris, a treatment introduced in late 2001 for a blood infection called sepsis, could reach $1 billion; Xigris's sales were $200 million last year. Sales of Strattera, for attention deficit disorder, slowed after a report in December that the drug can cause a rare but serious form of liver damage. Michael Krensavage, an analyst at Raymond James & Associates who rates Lilly shares as underperform, said that Lilly's emphasis on targeted therapies might be a defensive response to the industry's recent inability to produce blockbusters. Rather than targeted treatments, "drug companies would hope to produce a medicine that works for everybody," Mr. Krensavage said. "That's certainly the goal." Mr. Krensavage also criticized Lilly's accounting, noting that the company has taken one-time charges in each of the last three years that have muddied its financial results. Lilly said its accounting complied with all federal rules. Despite the company's recent stumbles with Zyprexa, other analysts say Lilly is well positioned, and they praise Mr. Taurel for looking for innovative ways to lower the cost of drug development. "Sidney has a better concept of what's happening outside his four walls and is far better in reflecting that in how the company runs on a day-to-day basis than any of his peers," said Richard Evans, an analyst at Sanford C. Bernstein & Company. Mr. Taurel acknowledged Lilly's dependence on Zyprexa and the fact that some new drugs had not met expectations. But he said the transition to targeted therapies would take years, if not decades. With earnings last year of $3.1 billion, before one-time charges, and no major patent expirations before 2011, Lilly can afford to make long-term bets, he said. "Our model needs to evolve," he said. "For the industry and for Lilly." 
From checker at panix.com Wed Jul 6 00:32:22 2005 From: checker at panix.com (Premise Checker) Date: Tue, 5 Jul 2005 20:32:22 -0400 (EDT) Subject: [Paleopsych] CHE: A glance at the current issue of Academic Medicine: Cultural differences in end-of-life medical care Message-ID: A glance at the current issue of Academic Medicine: Cultural differences in end-of-life medical care The Chronicle of Higher Education: Magazine & journal reader News bulletin from the Chronicle of Higher Education, 5.7.4 http://chronicle.com/prm/daily/2005/07/2005070101j.htm Medical students and residents often feel unprepared to care for dying patients, according to a series of articles in the journal, which is published by the Association of American Medical Colleges. Baback B. Gabbay, a resident in psychiatry at the University of California at Los Angeles, and his co-authors found that residents in Japan are more likely than their American counterparts to withhold information about a terminal diagnosis from a patient, telling only the patient's family instead. While medical educators in the United States strongly favor disclosing such information to patients, residents often struggle when that policy conflicts with a patient's cultural traditions. Another article describes the emotional reactions of third-year medical students to their "most memorable" patient deaths, concluding that students are deeply affected, even when they do not have close contact with the patient. Jennifer Rhodes-Kropf, an assistant professor of medicine at Yeshiva University's Albert Einstein College of Medicine, and her co-authors found that medical professors rarely hold "debriefing" sessions to allow students to discuss their experiences. Instead, they expect students to remain stoic. Dr. Gabbay's article, "Negotiating End-of-Life Decision Making: A Comparison of Japanese and U.S. Residents' Approaches," is available online at http://www.academicmedicine.org/cgi/content/full/80/7/617 The rest of the issue is available to subscribers only at http://www.academicmedicine.org/content/vol80/issue7/ --By Katherine S. Mangan ------------------- Here's the contents of this issue of Academic Medicine. Volume 80(7) July 2005 2005 Association of American Medical Colleges ISSN: 1040-2446 1. Redesigning Clinical Education: A Major Challenge for Academic Health Centers. Whitcomb, Michael E. MD pg. 615-616 2. A New Item in the Journal pg. 616 3. Negotiating End-of-Life Decision Making: A Comparison of Japanese and U.S. Residents' Approaches. Gabbay, Baback B. MD; Matsumura, Shinji MD, MSHS; Etzioni, Shiri MD; Asch, Steven M. MD, MPH; Rosenfeld, Kenneth E. MD; Shiojiri, Toshiaki MD; Balingit, Peter P. MD; Lorenz, Karl A. MD, MSHS pg. 617-621 4. Residents' End-of-Life Decision Making with Adult Hospitalized Patients: A Review of the Literature. Gorman, Todd E. MD, FRCP(C); Ahern, Stephane P. MD, FRCP(C); Wiseman, Jeffrey MD, FRCP(C), MA; Skrobik, Yoanna MD, FRCP(C) pg. 622-633 5. "This is just too awful; I just can't believe I experienced that ": Medical Students' Reactions to Their "Most Memorable" Patient Death. Rhodes-Kropf, Jennifer MD; Carmody, Sharon S. MD; Seltzer, Deborah; Redinbaugh, Ellen PhD; Gadmer, Nina MHA; Block, Susan D. MD; Arnold, Robert M. MD[Featured Topic Research Report] pg. 634-640 6. Third-Year Medical Students' Experiences with Dying Patients during the Internal Medicine Clerkship: A Qualitative Study of the Informal Curriculum. Ratanawongsa, Neda MD; Teherani, Arianne PhD; Hauer, Karen E. MD pg. 641-647 7. 
"It was haunting": Physicians' Descriptions of Emotionally Powerful Patient Deaths. Jackson, Vicki A. MD, MPH; Sullivan, Amy M. EdD; Gadmer, Nina M. MHA; Seltzer, Deborah; Mitchell, Ann M. PhD, RN; Lakoma, Mathew D.; Arnold, Robert M. MD; Block, Susan D. MD pg. 648-656 8. Teaching and Learning End-of-Life Care: Evaluation of a Faculty Development Program in Palliative Care. Sullivan, Amy M. EdD; Lakoma, Matthew D.; Billings, J Andrew MD; Peters, Antoinette S. PhD; Block, Susan D. MD; the PCEP Core Faculty pg. 657-668 9. The Palliative Care Clinical Evaluation Exercise (CEX): An Experience-Based Intervention for Teaching End-of-Life Communication Skills. Han, Paul K. J. MD, MA, MPH; Keranen, Lisa B. PhD; Lescisin, Dianne A. MHPE; Arnold, Robert M. MD pg. 669-676 10. Cover Note: Indiana University School of Medicine. Perry, Pamela Su pg. 677 11. Blindness. Saramago, Jose pg. 678 12. Commentary. Miksanek, Tony MD pg. 679 13. How Can Physicians' Learning Styles Drive Educational Planning? Armstrong, Elizabeth PhD; Parsa-Parsi, Ramin MD, MPH pg. 680-684 14. Teaching Evidence-Based Medicine: Should We Be Teaching Information Management Instead? Slawson, David C. MD; Shaughnessy, Allen F. PharmD pg. 685-689 15. Responsibly Managing the Medical School-Teaching Hospital Power Relationship. Chervenak, Frank A. MD; McCullough, Laurence B. PhD pg. 690-693 16. Self-Reflection in Multicultural Training: Be Careful What You Ask For. Murray-Garcia, Jann L. MD, MPH; Harrell, Steven; Garcia, Jorge A. MD, MS; Gizzi, Elio MD; Simms-Mackey, Pamela MD pg. 694-701 17. The Irony of Osteopathic Medicine and Primary Care. Cummings, Mark PhD; Dobbs, Kathleen J. PA-C, MS pg. 702-705 18. Resident Teaching: A Tale of Two Places in Time. Wilson, Lynn D. MD, MPH pg. 705 19. Considering the Culture of Disability in Cultural Competence Education. Eddey, Gary E. MD; Robey, Kenneth L. PhD pg. 706-712 From checker at panix.com Wed Jul 6 00:32:27 2005 From: checker at panix.com (Premise Checker) Date: Tue, 5 Jul 2005 20:32:27 -0400 (EDT) Subject: [Paleopsych] SW: Einstein on Physics and Progress Message-ID: History of Science: Einstein on Physics and Progress http://scienceweek.com/2005/sw050708-5.htm The following points are made by Albert Einstein (Physics Today 2005 June): 1) If philosophy is interpreted as a quest for the most general and comprehensive knowledge, it obviously becomes the mother of all scientific inquiry. But it is just as true that the various branches of science have, in their turn, exercised a strong influence on the scientists concerned and, beyond that, have affected the philosophical thinking of each generation. Let us glance, from this point of view, at the development of physics and its influence on the conceptual framework of the other natural sciences during the last hundred years. 2) Since the Renaissance, physics has endeavored to find the general laws governing the behavior of material objects in space and time. To consider the existence of these objects as a problem was left to philosophy. To the scientist, the celestial bodies, the objects on Earth, and their chemical peculiarities, simply existed as real objects in space and time, and his task consisted solely in abstracting these laws from experience by way of hypothetical generalizations. 3) The laws were supposed to hold without exceptions. A law was considered invalidated if, in a single case, any one of its properly deduced conclusions was disproved by experience. 
In addition, the laws of the external world were also considered to be complete, in the following sense: If the state of the objects is completely given at a certain time, then their state at any other time is completely determined by the laws of nature. This is just what we mean when we speak of "causality." Such was approximately the framework of the physical thinking a hundred years ago. 4) As a matter of fact, the framework was even more restrictive than it has been sketched. The objects of the external world were considered to consist of immutable mass points, acting upon each other with well-defined forces eternally attached to them and, under the influence of these forces, carrying out incessant motions to which, in the last analysis, all observable processes could be reduced. 5) From a philosophical point of view, the conception of the world, as it appears to those physicists, is closely related to naive realism, since they looked upon the objects in space as directly given by our sense perceptions. The introduction of immutable mass points, however, represented a step in the direction of a more sophisticated realism. For it was obvious from the beginning that the introduction of these atomistic elements was not induced by direct observation. 6) With the Faraday-Maxwell theory of the electromagnetic field, a further refinement of the realistic conception was unavoidable. It became necessary to ascribe the same irreducible reality to the electromagnetic field, continually distributed in space, as formerly to ponderable matter. But sense experiences certainly do not lead inevitably to the field concept. There was even a trend to represent physical reality entirely by the continuous field, without introducing mass points as independent entities into the theory. 7) Summing up, we may characterize the framework of physical thinking up to a quarter of a century ago as follows: There exists a physical reality independent of substantiation and perception. It can be completely comprehended by a theoretical construction which describes phenomena in space and time -- a construction whose justification, however, lies in its empirical confirmation. The laws of nature are mathematical laws connecting the mathematically describable elements of this construction. They imply complete reality in the sense mentioned before. 8) Under the pressure of overwhelming experimental evidence concerning atomistic phenomena, almost all of today's physicists are now convinced that this conceptual framework --notwithstanding its apparently wide scope -- cannot be retained. What appears untenable to physicists of our times is not only the requirement of complete causality but also the postulate of a reality which is independent of any measurement or observation. Physics Today http://www.physicstoday.org -------------------------------- Related Material: HISTORY OF PHYSICS: EINSTEIN AND BROWNIAN MOTION The following points are made by Giorgio Parisi (Nature 2005 433:221): 1) On 30 April 1905, Einstein completed his doctoral thesis on osmotic pressure, in which he developed a statistical theory of liquid behavior based on the existence of molecules. This work, together with his subsequent paper on "brownian motion", constitutes one of the most important, but often overlooked, contributions that Einstein made to physics. 2) In the closing decades of the 19th century, theoretical physics was in a state of turmoil. The big outstanding questions of that time have been much discussed. 
Such questions culminated in relativity and quantum mechanics -- theoretical developments in which Einstein's key role is being justly celebrated this year. But it should not be forgotten that the seemingly innocuous observations of Robert Brown (1773-1858) of the irregular motions of a suspension of pollen grains in water -- now known as brownian motion -- also heralded a revolution in physical thought. 3) Although the concepts of atoms and molecules are now universally accepted, this was not the case at the turn of the 20th century. The statistical interpretation by Ludwig Boltzmann (1844-1906) of the laws of thermodynamics -- a body of work deeply rooted in the ensemble dynamical motion of material atoms -- had many adherents. But there were also many heavyweight dissenters (for a time including Max Planck (1858-1947)), who did not accept that thermodynamics had its origins in the reversible motion of invisible hypothetical particles. And many distinguished physicists of the time (among them Wilhelm Roentgen (1845-1923)) suspected that brownian motion indicated a clear failure of Boltzmann's formulation of the second law of thermodynamics. 4) It was in this context that Einstein's explanation for brownian motion made an initial impression. In particular, Einstein showed that the irregular motion of the suspended particles could be understood as arising from the random thermal agitation of the molecules in the surrounding liquid: these smaller entities act both as the driving force for the brownian fluctuations (through the impact of the liquid molecules on the larger particles), and as a means of damping these motions (through the viscosity experienced by the larger particles). This connection between displacement and the viscosity can be quantitatively expressed in one dimension as a relationship between displacement, viscosity, the universal gas constant, Avogadro's number, the Boltzmann constant, the temperature, and the radius of the suspended particles. This finding went beyond simply confirming the existence of atoms and molecules, and provided a new way of determining Avogadro's number. As Einstein himself remarked, the consequence of this relation is that one can see, directly through a microscope, a fraction of the thermal energy manifest as mechanical energy. By proving that a statistical mechanics description could quantitatively explain brownian motion, all doubts concerning Boltzmann's statistical interpretation of the thermodynamic laws suddenly faded.(1-3) References (abridged): 1. Pais, A. Subtle is the Lord... (Oxford Univ. Press, 1982) 2. Kuhn, T. S. Black Body Theory and the Quantum Discontinuity 1894-1911 (Oxford Univ. Press, 1978) 3. Mezard, M., Parisi, G. & Virasoro, M. A. Spin Glass Theory and Beyond (World Scientific, Singapore, 1987) Nature http://www.nature.com/nature -------------------------------- Related Material: HISTORY OF PHYSICS: EINSTEIN AND RADIATION The following points are made by Daniel Kleppner (Physics Today 2005 February): 1) Albert Einstein had a genius for extracting revolutionary theory from simple considerations: From the postulate of a universal velocity he created special relativity; from the equivalence principle he created general relativity; from elementary arguments based on statistics he discovered energy quanta.
His 1905 paper on quantization of the radiation field (often referred to, inaccurately, as the photoelectric-effect paper) was built on simple statistical arguments, and in subsequent years he returned repeatedly to questions centered on statistics and thermal fluctuations. 2) In 1909, Einstein showed that statistical fluctuations in thermal radiation fields display both particle-like and wave-like behavior. His was the first demonstration of what would later become the principle of complementarity. In 1916, when he turned to the interplay of matter and radiation to create a quantum theory of radiation, he once again based his arguments on statistics and fluctuations. 3) Einstein's theory of radiation is a treasure trove of physics, for in it one can discern the seeds of quantum electrodynamics and quantum optics, the invention of masers and lasers, and later developments such as atom-cooling, Bose-Einstein condensation, and cavity quantum electrodynamics. Our understanding of the Cosmos comes almost entirely from images brought to us by radiation across the electromagnetic spectrum. Einstein's theory of radiation describes the fundamental processes by which those images are created. 4) Einstein's 1905 paper on quantization endowed Max Planck's quantum hypothesis with physical reality. The oscillators for which Planck proposed energy quantization were fictitious, and his theory for blackbody radiation lacked obvious physical consequences. But the radiation field for which Einstein proposed energy quantization was real, and his theory had immediate physical consequences. His paper, published in March 1905, was the first of his wonder year. In rapid succession he published papers on Brownian motion, special relativity, and his quantum theory of the specific heat of solids. 5) In 1907, his interest shifted to gravity, and he took the first tentative steps toward the theory of general relativity. His struggle with gravitational theory became all-consuming until November 1915, when he finally obtained satisfactory gravitational field equations. During those years of struggle, however, Einstein apparently had a simmering discontent with his understanding of thermal radiation, for in July 1916, he turned to the problem of how matter and radiation can achieve thermal equilibrium. One could argue that 1916 was too soon to deal with that problem because there were serious conceptual obstacles to the creation of a consistent theory. Einstein, in his Olympian fashion, simply ignored them. In the next eight months, he wrote three papers on the subject, publishing the third, and best known, in 1917.[1,2] References (abridged): 1. A. Einstein, Phys. Z. 18, 121 (1917); English translation On the Quantum Theory of Radiation, by D. ter Haar, The Old Quantum Theory, Pergamon Press, New York (1967), p. 167 2. A. Pais, Rev. Mod. Phys. 49, 925 (1977) Physics Today http://www.physicstoday.org From checker at panix.com Wed Jul 6 00:32:36 2005 From: checker at panix.com (Premise Checker) Date: Tue, 5 Jul 2005 20:32:36 -0400 (EDT) Subject: [Paleopsych] SW: On the Aether and Broken Symmetry Message-ID: Theoretical Physics: On the Aether and Broken Symmetry http://scienceweek.com/2005/sw050708-6.htm The following points are made by Frank Wilczek (Nature 2005 435:152): 1) The concept that what we ordinarily perceive as empty space is in fact a complicated medium is a profound and pervasive theme in modern physics. This invisible inescapable medium alters the behavior of the matter that we do see. 
Just as Earth's gravitational field allows us to select a unique direction as up, and thereby locally reduces the symmetry of the underlying equations of physics, so cosmic fields in "empty" space lower the symmetry of these fundamental equations everywhere. Or so theory has it. For although this concept of a symmetry-breaking aether has been extremely fruitful (and has been demonstrated indirectly in many ways), the ultimate demonstration of its validity -- cleaning out the medium and restoring the pristine symmetry of the equations -- has never been achieved: that is, perhaps, until now. 2) In new work, Cramer et al.[1] claim to have found evidence that -- for very brief moments, and over a very small volume -- experimentalists working at the Relativistic Heavy Ion Collider (RHIC) at Brookhaven National Laboratory in New York have vaporized one symmetry-breaking aether, and produced a more perfect emptiness. This pioneering attempt to decode the richly detailed (in other words, complicated and messy) data emerging from the RHIC experiments is intricate[2], and it remains to be seen whether the interpretation Cramer et al. propose evolves into a consensus. In any case, they've put a challenge on the agenda, and suggested some concrete ways to tackle it. 3) But what exactly is this underlying symmetry of nature that is broken by the aether? How is it broken, and how might it be restored? The symmetry in question is called chiral symmetry, and it involves the behavior of quarks, the principal constituents of the protons and neutrons in atomic nuclei (among other things). Chiral symmetry is easiest to describe if we adopt the slight idealization that the lightest quarks, the up quark (u) and down quark (d), are massless. (In reality their masses are small, on the scale of the energies in play, but not quite zero.) According to the equations of quantum chromodynamics (QCD), the theory that describes quarks and their interactions via the strong nuclear force, the possible transformations among quarks are very restricted. One rule is that u-quarks and d-quarks retain their "flavor" -- that is, a (u) never converts into a (d), nor a (d) into a (u). 4) Quarks also, like the more familiar photons, have an intrinsic spin. If the spin axis is aligned with the direction of motion, then the sense of the rotation defines a handedness, known as chirality, rather like a left- or right-handed screw. The two possible states of chirality of a quark, left and right, are essentially the same concept as left and right circular polarization for photons. The fundamental interaction between quarks and gluons, to which we ultimately trace the strong nuclear force, conserves chirality as well as flavor. Thus a u-quark with left-handed chirality (written uL) never converts into a right-handed uR, and so on. But these extra conservation laws, which follow from the symmetry of QCD's equations, are too good to be true. In reality, one finds that although the rule forbidding changes of flavor holds true, there is no additional conservation law for chirality -- chiral symmetry is broken. 5) The accepted explanation for this mismatch blames a form of aether. The idea is that there is such a powerful attractive interaction between uL-quarks and right-handed anti-u quarks (every quark has an antiquark with the opposite charge), and likewise between dL-quarks and right-handed anti-d quarks, that the energy gained from their attraction outweighs the cost of creating the particles in the first place.
Thus, perfectly empty space, devoid of quarks, is unstable. One can lower the energy of the vacuum by filling it with bound pairs of uL-quarks and right-handed anti-u quarks, and of dL-quarks and right-handed anti-d quarks (and their antiparticles: left-handed anti-u quarks paired with uR-quarks, and left-handed anti-d quarks paired with dR-quarks). Physicists call this process the formation of the chiral condensate. In the stable state that finally results, the conservation of chirality is rendered ineffective, as space itself has become a reservoir containing, for example, an indefinite number of uL-quarks.[3-5] References (abridged): 1. Cramer, J., Miller, G., Wu, J. & Yoon, J. -H. preprint at http://www.arxiv.org/nucl-th/0411031 (2004) 2. Kolb, P. F. & Heinz, U. in Quark Gluon Plasma Vol. 3 (eds Hwa, R. C. & Wang, X.-N.) (World Scientific, Singapore, 2004); preprint at http://www.arxiv.org/nucl-th/0305084 (2003) 3. Adcox, K. et al. (The PHENIX collaboration) Nucl. Phys. A (submitted); preprint at http://arxiv.org/nucl-ex/0410003 (2005) 4. Adams, J. et al. (The STAR collaboration) Nucl. Phys. A (submitted); preprint at http://arxiv.org/nucl-ex/05010095 (2005) 5. Back, B. B. et al. (The PHOBOS collaboration) Nucl. Phys. A (in the press); doi:10.1016/j.nuclphysa.2005.03.084 (2005) Nature http://www.nature.com/nature -------------------------------- Related Material: ON THE ETHER CONCEPT IN PHYSICS Notes by ScienceWeek: In the late 19th century, what we now call "classical" physics incorporated the assumed existence of the "ether", a hypothetical medium believed to be necessary to support the propagation of electromagnetic radiation. The famous *Michelson-Morley experiment of 1887 was interpreted as demonstrating the nonexistence of the ether, and this experiment became a significant prelude to the subsequent formulation of Einstein's *special theory of relativity. Although it is often stated outside the physics community that the ether concept was abandoned after the Michelson-Morley experiment, this is not quite true, since the classical ether concept has been essentially reformulated into several modern *field concepts. The following points are made by Frank Wilczek (Physics Today January 1999): 1) Isaac Newton (1642-1727) believed in a continuous medium filling all space, but his equations did not require any such medium, and by the early 19th century the generally accepted ideal for fundamental physical theory was to discover mathematical equations for forces between indestructible atoms moving through empty space. 2) It was Michael Faraday (1791-1867) who revived the idea that space was filled with a medium having physical effects in itself... To summarize Faraday's results, James Clerk Maxwell (1831-1879) adapted and developed the mathematics used to describe fluids and elastic solids, and Maxwell postulated an elaborate mechanical model of electrical and magnetic fields. 3) The achievement of Einstein (1879-1955) in his paper on special relativity was to highlight and interpret the hidden symmetry of Maxwell's equations, not to change them. The Faraday-Maxwell concept of electric and magnetic fields, as media or ethers filling all space, was retained by Einstein. Later, Einstein was dissatisfied with the particle-field dualism inherent in the early atomic theory, and Einstein sought, without success, a unified field theory in which all fundamental particles would emerge as special solutions to the field equations. 4) Following Einstein, Paul Dirac (1902-1984) then showed that photons emerged as a logical consequence of applying the rules of quantum mechanics to Maxwell's electromagnetic ether.
This connection was soon generalized so that particles of any sort could be represented as the small-amplitude excitations of quantum fields. Electrons, for example, can be regarded as excitations of an electron field, an ether that pervades all space and time uniformly. Our current and extremely successful theories of the *strong, electromagnetic, and weak forces are formulated as *relativistic quantum field theories with *local interactions. 5) The author states: "Einstein first purified, and then enthroned, the ether concept. As the 20th century has progressed, its role in fundamental physics has only expanded. At present, renamed and thinly disguised, it dominates the accepted laws of physics." Physics Today http://www.physicstoday.org -------------------------------- Notes by ScienceWeek: Michelson-Morley experiment of 1887: Conducted by Albert Michelson (1852-1931) and Edward Morley (1838-1923), the experiment attempted to measure the velocity of the Earth through the "ether" by using an interferometer to detect a difference in the speed of light in the direction of the Earth's motion from the speed perpendicular to this direction. No difference was observed, indicating the absence of an ether "wind". special theory of relativity: Proposed by Einstein in 1905, the special theory refers to inertial (non-accelerated) frames of reference. It assumes physical laws are identical in all inertial frames of reference and that the speed of light in a vacuum is constant throughout the Universe and is independent of the speed of the observer. In general, the special theory gives a unified account of the laws of mechanics and electromagnetism (including optics). The companion theory, the general theory of relativity (1915), deals with general relative motion between accelerated frames of reference, and it is the general theory that led to Einstein's analysis of gravitation. field: In this context, in general, the term "field" refers to a physical quantity (e.g., electric or magnetic field) that varies from point to point in space. strong, electromagnetic, and weak forces: The fundamental forces currently identified in physics are the gravitational force, the electromagnetic force, the nuclear strong force, and the nuclear weak force. The nuclear strong force is the dominant force that acts between hadrons (e.g., the force that binds neutrons and protons in nuclei). (A "hadron" is any object made of *quarks and/or antiquarks). The weak force occurs between leptons (particles without internal structure, e.g., electrons, neutrinos) and hadrons (particles with internal structure, e.g., neutrons and protons); in general, the weak force is responsible for radioactivity. quarks and antiquarks: A quark is a hypothetical fundamental particle, having charges whose magnitudes are one-third or two-thirds of the electron charge, and from which the elementary particles may in theory be constructed. The antiquark is the antimatter quark entity. In general, antiparticles are homologs of elementary particles but with opposite charge. The positron, for example, is the antimatter particle homologous to the electron. Matter composed entirely of antiparticles is called "antimatter". relativistic quantum field theories: In general, a "quantum field theory" is any quantum mechanical theory in which particles are represented by fields whose normal modes of oscillation are quantized. The term is also used to refer to a quantum mechanical theory applied to systems having an infinite number of *degrees of freedom.
Quantum electrodynamics, for example, is a particular quantum field theory describing the emission or absorption of photons by charged particles. "Relativistic quantum field theories" are used to describe fundamental interactions between elementary particles (which exhibit relativistic velocities, i.e., velocities approaching the speed of light). local interactions: In this context, a local interaction is an interaction between particles whose quantum mechanical wave functions are confined to a small region of a large system rather than being extended throughout the system. -------------------------------- Related Material: ON FIELD THEORY IN PHYSICS Notes by ScienceWeek: In physics, a field is an entity that acts as intermediary in interactions between particles, and which is distributed over part or all of space, and whose properties are functions of space coordinates, and except for static fields, also functions of time. There is also a quantum-mechanical analog of this entity, in which the function of space and time is replaced by an operator at each point in space-time. The following points are made by Roman Jackiw (Proc. Natl. Acad. Sci. 1998 95:12776): 1) Present-day theory for fundamental processes (i.e., descriptions of elementary particles and forces) is phenomenally successful. Experimental data confirms theoretical prediction, and where accurate calculation and experiments are attainable, agreement is achieved to 6 or 7 figures. Two examples: a) The helium atom ground state energy (*Rydbergs) is experimentally measured as -5.8071394 and theoretically calculated as -5.8071380. b) The muon magnetic dipole moment is experimentally measured as 2.00233184600 and theoretically calculated as 2.00233183478. 2) The theoretical structure within which this success has been achieved is *local field theory, which offers a wide variety of applications, and which provides a model for fundamental physical reality as described by our theories of *strong, electroweak, and gravitational processes. No other framework exists in which one can calculate so many phenomena with such ease and accuracy. 3) But in spite of these successes, today there is little confidence that field theory will advance our understanding of nature at its fundamental workings beyond what has already been achieved. Although in principle all observed phenomena can be explained by present-day field theory, these accounts are still imperfect, requiring ad hoc inputs. Moreover, because of conceptual and technical obstacles, classical gravity theory has not been integrated into the *quantum field description of nongravitational forces: *quantizing the *metric tensor of Einstein's theory produces a quantum field theory beset by infinities that apparently cannot be controlled. 4) These shortcomings are actually symptoms of a deeper lack of understanding concerning *symmetry and symmetry breaking... Physicists are happy in the belief that Nature in its fundamental workings is essentially simple, but observed physical phenomena rarely exhibit overwhelming regularity. Therefore, at the very same time that we construct a physical theory with intrinsic symmetry, we must find a way to break the symmetry in physical consequences of the model.
5) These problems have produced a theoretical impasse for over two decades, and in the absence of new experiments to channel theoretical speculation, some physicists have concluded that it will not be possible to make progress on these questions within field theory, and they have turned to a new structure, "*string theory". In field theory, the quantized excitations are point particles with point interactions, and this gives rise to the infinities. In string theory, the excitations are extended objects -- strings -- with nonlocal interactions; there are no infinities in string theory, and that enormous defect of field theory is absent. 6) Yet in spite of its positive features, until now string theory has provided a framework rather than a definite structure, and a precise derivation of the *Standard Model has yet to be given. The author concludes: "On previous occasions when it appeared that quantum field theory was incapable of advancing our understanding of fundamental physics, new ideas and new approaches to the subject dispelled the pessimism. Today we do not know whether the impasse within field theory is due to a failure of imagination or whether indeed we have to present fundamental physical laws in a new framework, thereby replacing the field theoretic one, which has served us well for over 100 years." Proc. Nat. Acad. Sci. http://www.pnas.org -------------------------------- Notes by ScienceWeek: Rydbergs: A unit of energy used in atomic physics, value = 13.605698 electronvolts. local field theory: In this context, "locality" is the condition that two events at spatially separated locations are entirely independent of each other, provided that the time interval between the events is less than that required for a light signal to travel from one location to the other. For example, the quantum mechanical wave function is a "local" field. strong, electroweak, and gravitational processes: The fundamental forces comprise the gravitational force, the electromagnetic force, the nuclear strong force, and the nuclear weak force. The "electroweak" interactions are a unification of the electromagnetic and nuclear weak interactions, and are described by the Weinberg-Salam theory (sometimes called "quantum flavordynamics"; also called the Glashow-Weinberg-Salam theory). quantum field description: In general, a quantum field theory is a quantum mechanical theory applied to systems having an infinite number of *degrees of freedom. The term is also used to refer to any quantum mechanical theory in which particles are represented by fields whose normal modes of oscillation are quantized (see below). degrees of freedom: In general, the number of independent parameters required to specify the configuration of a system. quantizing: In experimental physics, a quantized variable is a variable taking only discrete multiple values of a quantum mechanical constant. In theoretical physics, "quantizing" means the consistent application of certain rules that lead from classical to quantum mechanics. In general, "quantization" is a transition from a classical theory or a classical quantity to a quantum theory or the corresponding quantity in quantum mechanics. metric tensor: The mathematical statement (involving a set of quantities) that describes the deviation of the Pythagoras theorem in a curved space. symmetry and symmetry breaking: If a theory or process does not change when certain operations are performed on it, the theory or process is said to possess a symmetry with respect to those operations. 
For example, a circle remains unchanged under rotation or reflection, and a circle therefore has rotational and reflection symmetry. The term "symmetry breaking" refers to the deviation from exact symmetry exhibited by many physical systems, and in general, symmetry breaking encompasses both "explicit" symmetry breaking and "spontaneous" symmetry breaking. Explicit symmetry breaking is a phenomenon in which a system is not quite, but almost, the same for two configurations related by exact symmetry. Spontaneous symmetry breaking refers to a situation in which the solution of a set of physical equations fails to exhibit a symmetry possessed by the equations themselves. string theory: In particle physics, string theory is a theory of elementary particles based on the idea that the fundamental entities are not point-like particles but finite lines (strings), or closed loops formed by strings, the strings one-dimensional curves with zero thickness and lengths (or loop diameters) of the order of the Planck length of 10^(-35) meters. Standard Model: In particle physics, the Standard Model is a theoretical framework whose basic idea is that all the visible matter in the universe can be described in terms of the elementary particles leptons and quarks and the forces acting between them. Leptons are a class of point-like fundamental particles showing no internal structure and no involvement with the strong forces. A quark is a hypothetical fundamental particle, having charges whose magnitudes are one-third or two-thirds of the electron charge, and from which the elementary particles may in theory be constructed. From anonymous_animus at yahoo.com Wed Jul 6 19:52:40 2005 From: anonymous_animus at yahoo.com (Michael Christopher) Date: Wed, 6 Jul 2005 12:52:40 -0700 (PDT) Subject: [Paleopsych] stereotypes In-Reply-To: <200507061800.j66I0CR24744@tick.javien.com> Message-ID: <20050706195240.68173.qmail@web30808.mail.mud.yahoo.com> >>Lieberman suggests that people are likely to pick up on stereotypes, regardless of whether their family or community agrees with them.<< --I think that's true. Perhaps it's rational to avoid people who are "marked" by a large group, regardless of whether you're a member of that group or not. Germans without antisemitism in their families would still have known to avoid shopping at Jewish stores or being seen associating with Jews, picking up on their "marked" status and fearing the consequences of associating with a scapegoated class. Michael ____________________________________________________ Sell on Yahoo! Auctions ? no fees. Bid on great items. http://auctions.yahoo.com/ From andrewsa at newpaltz.edu Wed Jul 6 20:42:34 2005 From: andrewsa at newpaltz.edu (Alice Andrews) Date: Wed, 6 Jul 2005 16:42:34 -0400 Subject: [Paleopsych] David L. Smith: Natural-Born Liars Message-ID: <034501c5826b$3d421660$6501a8c0@callastudios> Dear paleos, A member of this list--David Livingstone Smith--has a recent and very good article on self-deception in Scientific American... http://www.sciam.com/article.cfm?SID=mail&articleID=0007B7A0-49D6-128A-89D683414B7F0000%20 And here's one of my favorite Trivers quote on deception: "One of the most important things to realize about systems of animal communication is that they are not systems for the dissemination of the truth." (From Social Evolution.) All best, Alice ps I'm of the mind that there exists a continuum (like many traits)....That is, I believe that some (people) are more naturally self-deceptive than others.... 
-------------- next part -------------- An HTML attachment was scrubbed... URL: From checker at panix.com Wed Jul 6 21:18:31 2005 From: checker at panix.com (Premise Checker) Date: Wed, 6 Jul 2005 17:18:31 -0400 (EDT) Subject: [Paleopsych] NYT: Straight, Gay or Lying? Bisexuality Revisited Message-ID: Straight, Gay or Lying? Bisexuality Revisited http://www.nytimes.com/2005/07/05/health/05sex.html By [3]BENEDICT CAREY Some people are attracted to women; some are attracted to men. And some, if Sigmund Freud, Dr. Alfred Kinsey and millions of self-described bisexuals are to be believed, are drawn to both sexes. But a new study casts doubt on whether true bisexuality exists, at least in men. The study, by a team of psychologists in Chicago and Toronto, lends support to those who have long been skeptical that bisexuality is a distinct and stable sexual orientation. People who claim bisexuality, according to these critics, are usually homosexual, but are ambivalent about their homosexuality or simply closeted. "You're either gay, straight or lying," as some gay men have put it. In the new study, a team of psychologists directly measured genital arousal patterns in response to images of men and women. The psychologists found that men who identified themselves as bisexual were in fact exclusively aroused by either one sex or the other, usually by other men. The study is the largest of several small reports suggesting that the estimated 1.7 percent of men who identify themselves as bisexual show physical attraction patterns that differ substantially from their professed desires. "Research on sexual orientation has been based almost entirely on self-reports, and this is one of the few good studies using physiological measures," said Dr. Lisa Diamond, an associate professor of psychology and gender identity at the University of Utah, who was not involved in the study. The discrepancy between what is happening in people's minds and what is going on in their bodies, she said, presents a puzzle "that the field now has to crack, and it raises this question about what we mean when we talk about desire." "We have assumed that everyone means the same thing," she added, "but here we have evidence that that is not the case." Several other researchers who have seen the study, scheduled to be published in the journal Psychological Science, said it would need to be repeated with larger numbers of bisexual men before clear conclusions could be drawn. Bisexual desires are sometimes transient and they are still poorly understood. Men and women also appear to differ in the frequency of bisexual attractions. "The last thing you want," said Dr. Randall Sell, an assistant professor of clinical socio-medical sciences at Columbia University, "is for some therapists to see this study and start telling bisexual people that they're wrong, that they're really on their way to homosexuality." He added, "We don't know nearly enough about sexual orientation and identity" to jump to these conclusions. In the experiment, psychologists at Northwestern University and the Center for Addiction and Mental Health in Toronto used advertisements in gay and alternative newspapers to recruit 101 young adult men. Thirty-three of the men identified themselves as bisexual, 30 as straight and 38 as homosexual. The researchers asked the men about their sexual desires and rated them on a scale from 0 to 6 on sexual orientation, with 0 to 1 indicating heterosexuality, and 5 to 6 indicating homosexuality. 
Bisexuality was measured by scores in the middle range. Seated alone in a laboratory room, the men then watched a series of erotic movies, some involving only women, others involving only men. Using a sensor to monitor sexual arousal, the researchers found what they expected: gay men showed arousal to images of men and little arousal to images of women, and heterosexual men showed arousal to women but not to men. But the men in the study who described themselves as bisexual did not have patterns of arousal that were consistent with their stated attraction to men and to women. Instead, about three-quarters of the group had arousal patterns identical to those of gay men; the rest were indistinguishable from heterosexuals. "Regardless of whether the men were gay, straight or bisexual, they showed about four times more arousal" to one sex or the other, said Gerulf Rieger, a graduate psychology student at Northwestern and the study's lead author. Although about a third of the men in each group showed no significant arousal watching the movies, their lack of response did not change the overall findings, Mr. Rieger said. Since at least the middle of the 19th century, behavioral scientists have noted bisexual attraction in men and women and debated its place in the development of sexual identity. Some experts, like Freud, concluded that humans are naturally bisexual. In his landmark sex surveys of the 1940's, Dr. Alfred Kinsey found many married, publicly heterosexual men who reported having had sex with other men. "Males do not represent two discrete populations, heterosexual and homosexual," Dr. Kinsey wrote. "The world is not to be divided into sheep and goats." By the 1990's, Newsweek had featured bisexuality on its cover, bisexuals had formed advocacy groups and television series like "Sex and the City" had begun exploring bisexual themes. Yet researchers were unable to produce direct evidence of bisexual arousal patterns in men, said Dr. J. Michael Bailey, a professor of psychology at Northwestern and the new study's senior author. A 1979 study of 30 men found that those who identified themselves as bisexuals were indistinguishable from homosexuals on measures of arousal. Studies of gay and bisexual men in the 1990's showed that the two groups reported similar numbers of male sexual partners and risky sexual encounters. And a 1994 survey by The Advocate, the gay-oriented newsmagazine, found that, before identifying themselves as gay, 40 percent of gay men had described themselves as bisexual. "I'm not denying that bisexual behavior exists," said Dr. Bailey, "but I am saying that in men there's no hint that true bisexual arousal exists, and that for men arousal is orientation." But other researchers - and some self-identified bisexuals - say that the technique used in the study to measure genital arousal is too crude to capture the richness - erotic sensations, affection, admiration - that constitutes sexual attraction. Social and emotional attraction are very important elements in bisexual attraction, said Dr. Fritz Klein, a sex researcher and the author of "The Bisexual Option." "To claim on the basis of this study that there's no such thing as male bisexuality is overstepping, it seems to me," said Dr. Gilbert Herdt, director of the National Sexuality Resource Center in San Francisco. 
"It may be that there is a lot less true male bisexuality than we think, but if that's true then why in the world are there so many movies, novels and TV shows that have this as a theme - is it collective fantasy, merely a projection? I don't think so." John Campbell, 36, a Web designer in Orange County, Calif., who describes himself as bisexual, also said he was skeptical of the findings. Mr. Campbell said he had been strongly attracted to both sexes since he was sexually aware, although all his long-term relationships had been with women. "In my case I have been accused of being heterosexual, but I also feel a need for sex with men," he said. Mr. Campbell rated his erotic attraction to men and women as about 50-50, but his emotional attraction, he said, was 90 to 10 in favor of women. "With men I can get aroused, I just don't feel the fireworks like I do with women," he said. About 1.5 percent of American women identify themselves bisexual. And bisexuality appears easier to demonstrate in the female sex. A study published last November by the same team of Canadian and American researchers, for example, found that most women who said they were bisexual showed arousal to men and to women. Although only a small number of women identify themselves as bisexual, Dr. Bailey said, bisexual arousal may for them in fact be the norm. Researchers have little sense yet of how these differences may affect behavior, or sexual identity. In the mid-1990's, Dr. Diamond recruited a group of 90 women at gay pride parades, academic conferences on gender issues and other venues. About half of the women called themselves lesbians, a third identified as bisexual and the rest claimed no sexual orientation. In follow-up interviews over the last 10 years, Dr. Diamond has found that most of these women have had relationships both with men and women. "Most of them seem to lean one way or the other, but that doesn't preclude them from having a relationship with the nonpreferred sex," she said. "You may be mostly interested in women but, hey, the guy who delivers the pizza is really hot, and what are you going to do?" "There's a whole lot of movement and flexibility," Dr. Diamond added. "The fact is, we have very little research in this area, and a lot to learn." From checker at panix.com Wed Jul 6 21:18:39 2005 From: checker at panix.com (Premise Checker) Date: Wed, 6 Jul 2005 17:18:39 -0400 (EDT) Subject: [Paleopsych] NYT Op-Ed: The Heterosexual Revolution Message-ID: The Heterosexual Revolution New York Times, 5.7.5 http://www.nytimes.com/2005/07/05/opinion/05coontz.html By STEPHANIE COONTZ Olympia, Wash. THE last week has been tough for opponents of same-sex marriage. First Canadian and then Spanish legislators voted to legalize the practice, prompting American social conservatives to renew their call for a constitutional amendment banning such marriages here. James Dobson of the evangelical group Focus on the Family has warned that without that ban, marriage as we have known it for 5,000 years will be overturned. My research on marriage and family life seldom leads me to agree with Dr. Dobson, much less to accuse him of understatement. But in this case, Dr. Dobson's warnings come 30 years too late. Traditional marriage, with its 5,000-year history, has already been upended. Gays and lesbians, however, didn't spearhead that revolution: heterosexuals did. Heterosexuals were the upstarts who turned marriage into a voluntary love relationship rather than a mandatory economic and political institution. 
Heterosexuals were the ones who made procreation voluntary, so that some couples could choose childlessness, and who adopted assisted reproduction so that even couples who could not conceive could become parents. And heterosexuals subverted the long-standing rule that every marriage had to have a husband who played one role in the family and a wife who played a completely different one. Gays and lesbians simply looked at the revolution heterosexuals had wrought and noticed that with its new norms, marriage could work for them, too. The first step down the road to gay and lesbian marriage took place 200 years ago, when Enlightenment thinkers raised the radical idea that parents and the state should not dictate who married whom, and when the American Revolution encouraged people to engage in "the pursuit of happiness," including marrying for love. Almost immediately, some thinkers, including Jeremy Bentham and the Marquis de Condorcet, began to argue that same-sex love should not be a crime. Same-sex marriage, however, remained unimaginable because marriage had two traditional functions that were inapplicable to gays and lesbians. First, marriage allowed families to increase their household labor force by having children. Throughout much of history, upper-class men divorced their wives if their marriage did not produce children, while peasants often wouldn't marry until a premarital pregnancy confirmed the woman's fertility. But the advent of birth control in the 19th century permitted married couples to decide not to have children, while assisted reproduction in the 20th century allowed infertile couples to have them. This eroded the traditional argument that marriage must be between a man and a woman who were able to procreate. In addition, traditional marriage imposed a strict division of labor by gender and mandated unequal power relations between men and women. "Husband and wife are one," said the law in both England and America, from early medieval days until the late 19th century, "and that one is the husband." This law of "coverture" was supposed to reflect the command of God and the essential nature of humans. It stipulated that a wife could not enter into legal contracts or own property on her own. In 1863, a New York court warned that giving wives independent property rights would "sow the seeds of perpetual discord," potentially dooming marriage. Even after coverture had lost its legal force, courts, legislators and the public still cleaved to the belief that marriage required husbands and wives to play totally different domestic roles. In 1958, the New York Court of Appeals rejected a challenge to the traditional legal view that wives (unlike husbands) couldn't sue for loss of the personal services, including housekeeping and the sexual attentions, of their spouses. The judges reasoned that only wives were expected to provide such personal services anyway. As late as the 1970's, many American states retained "head and master" laws, giving the husband final say over where the family lived and other household decisions. According to the legal definition of marriage, the man was required to support the family, while the woman was obligated to keep house, nurture children, and provide sex. Not until the 1980's did most states criminalize marital rape. Prevailing opinion held that when a bride said, "I do," she was legally committed to say, "I will" for the rest of her married life. 
I am old enough to remember the howls of protest with which some defenders of traditional marriage greeted the gradual dismantling of these traditions. At the time, I thought that the far-right opponents of marital equality were wrong to predict that this would lead to the unraveling of marriage. As it turned out, they had a point. Giving married women an independent legal existence did not destroy heterosexual marriage. And allowing husbands and wives to construct their marriages around reciprocal duties and negotiated roles - where a wife can choose to be the main breadwinner and a husband can stay home with the children- was an immense boon to many couples. But these changes in the definition and practice of marriage opened the door for gay and lesbian couples to argue that they were now equally qualified to participate in it. Marriage has been in a constant state of evolution since the dawn of the Stone Age. In the process it has become more flexible, but also more optional. Many people may not like the direction these changes have taken in recent years. But it is simply magical thinking to believe that by banning gay and lesbian marriage, we will turn back the clock. Stephanie Coontz, the director of public education for the Council on Contemporary Families, is the author of "Marriage, a History: From Obedience to Intimacy, or How Love Conquered Marriage." From checker at panix.com Wed Jul 6 21:18:51 2005 From: checker at panix.com (Premise Checker) Date: Wed, 6 Jul 2005 17:18:51 -0400 (EDT) Subject: [Paleopsych] NYT: A Modern Refrain: My Genes Made Me Do It Message-ID: A Modern Refrain: My Genes Made Me Do It New York Times, 5.7.5 http://www.nytimes.com/2005/07/05/health/05comm.html By KENT SEPKOWITZ, M.D. Our theories about human disease are more the product of current fashion than we would like to admit. But just as the moment influences the hemline and the automobile fender, so too does a type of intellectual currency affect our understanding of how illness happens. Much of the 20th century was spent in pursuit of external causes of disease - cigarettes, E. coli, fatty foods, tick bites. Rather like the hero in an old western, medicine's job was to track down the bad guys, round 'em up and squish 'em before a real commotion got a-goin'. Antibiotics, vaccines, heart pills - these were our weapons in the epic battle between us and them, good versus evil. More recently, though, we have cast our gaze inward, mesmerized by our own adorable DNA. Just last decade, after 40 years of intense flirtation, this relationship was consummated as we cloned the entire human genome. Promises of improved health and longevity soon followed, as we had apparently found our way to the bedrock truths that underlie all illness. But with this orgy of molecular self-admiration has come a fundamental shift in thinking about human disease. We have moved from our long-held premise that the outside world (too much ice cream and flesh-eating bacteria) threatens us to a belief that the trouble arises from something much closer to home - our own double-crossing genes. Although packaged with the glint of modernity, this theory actually draws from something old and wintry - the harsh remedies proposed by John Calvin, predestination's No. 1 guy. According to Calvin, our fate is determined at first creation. Similar to this, the articles of gene-ism would have us believe that our medical fate is sealed by the genes we receive at conception. Seem a bit grim? Maybe not. 
Our unquestioning acceptance of the gene as prime mover has certain distinct - and ultramodern - advantages. Consider: you are no longer responsible for anything. Sound familiar? Once it was the devil. Now it is the gene that made you do it. You are officially off the hook. It isn't your fault at all. It's your faulty genes. It gets even better. Not only is it not your fault, but you actually are a victim, a victim of your own toxic gene pool. In the Age of Genetics, you no longer have to try to cut out smoking or think twice about gobbling that candy bar in your desk drawer. And forget jogging on a cold morning. The die was cast long ago, from the moment the parental sperm and egg first integrated their spiraling nucleotides. The resulting package of chromosomes has programmed every step of your life. So sit back, relax and leave the driving to someone else. But one problem remains: this new world order is at sharp odds with an older theism, that blame can and must be assigned in every human transaction. We have built a vast judicial-industrial complex that offers lawsuits for every need, satisfying varied urges like the wish for fairness or revenge, for getting rich quick or simply getting your due. This all-blame all-the-time approach applies to much more than determining culpability should a neighbor trip on your lawn and break an arm. It also says that people are responsible for their own health - and illness. It is your fault if you develop cancer or a heart attack because you didn't eat, think or breathe right. You have allowed the corrosive effect of unresolved anger or stress or poor self-esteem to undermine your health. So if you are sick or miserable or both, it's your own darned fault. No wonder we fled. The transition from the chaotic, barking family feud character of lawsuits to the sleek silence of a future devoted to cloning and splicing genes surely derives from something larger than scientific opportunity or our fascination with "Star Trek." How modern to deflect blame suavely onto a poorly understood high-end concept, the manic twitches of deoxyribonucleic acid. Gosh, biology is so much bigger than we are. Nothing we can do about it, really. Our wholehearted endorsement of the science of no personal responsibility may sour as new insights and new intellectual fashion result in new bedrock truths. A future generation may castigate us for our unblinking narcissism. What were we thinking? How could genes be responsible for red hair and bad memory and atherosclerosis? But if they come after us wagging their stubby fingers, we have an airtight explanation. We'll tell them it was not really our idea, the whole gene thing. No, we will say, we were victims. Victims of fashion. From checker at panix.com Wed Jul 6 21:19:28 2005 From: checker at panix.com (Premise Checker) Date: Wed, 6 Jul 2005 17:19:28 -0400 (EDT) Subject: [Paleopsych] The Chronicle: Wired Campus Blog Message-ID: The Chronicle: Wired Campus Blog http://wiredcampus.chronicle.com/ [URL not fixed, as this comes every week.] 5.7.1 Education-technology news from around the Web, brought to you by The Chronicle of Higher Education The Economics of Cheating Administrators at the University of Virginia are investigating claims that almost 35 graduate students [46]used an online answer key to cheat on homework assignments. A first-year student in the economics program reportedly found answers to problems in an introductory course on the Web and proceeded to share the answers with most of his or her peers in the class.
Almost everyone in the course could be implicated in the cheating scandal. "I think about all the students were involved in some questionable behavior," said Steven A. Stern, the course's professor. (Richmond Times-Dispatch) Measuring 'Internet Intelligence' College students might be old pros when it comes to downloading music or swapping instant messages, but that doesn't necessarily make them [50]wise to the ways of the Internet. So a team of colleges, along with the Educational Testing Service, is developing a test that gauges students' "Internet intelligence." The Information and Communication Technology Literacy Assessment, as the exam is called, could become popular with professors who bemoan their students' poor Web-research skills. The test measures students' ability to find information online, verify it, and credit it properly. (Associated Press) Justice Department Raids Piracy Dens Four people suspected of working in digital-piracy rings were arrested Wednesday in [54]an ambitious sting operation that spanned 11 countries. The sting, dubbed "Operation Site Down" by the U.S. Department of Justice, included raids of 20 "warez" groups -- underground communities that post pirated software and movies online. Officials did not say whether any of the four people arrested were college students, but an earlier bust conducted by the Justice Department did lead to [55]the arrest of a student at the University of Maryland at College Park. (Los Angeles Times) Beep Beep Forget those tedious piano lessons that your mother made you suffer through as a child. Researchers at the University of Southern California have developed a computerized system that allows the user to [59]play music using a steering wheel and gas and brake pedals. Essentially, the researchers say, the user "drives" his or her way through the music with the system, known as the Expression Synthesis Project. The device permits people to experience playing music without having to first master an instrument, the researchers say. The system is programmed to guide the user through Brahms's Hungarian Dance No. 5 in G minor. (The Chronicle, subscription required) 5.6.30 Supporting Piracy or Making a Point? Will BitTorrent, the network popular with movie swappers, be the next peer-to-peer service to face legal problems? Or is post-Grokster hysteria beginning to set in? Legal experts are debating those questions after discovering [63]a manifesto advocating digital piracy on the Web site of Bram Cohen, the software's creator. Mr. Cohen says he wrote the short polemic in 1999, two years before he started designing BitTorrent. And since releasing the software, he has repeatedly argued that it is meant for legal file swapping, not piracy. But some lawyers say that the memo could [64]damage Mr. Cohen's credibility when he claims that BitTorrent, unlike Grokster and Morpheus, does not endorse copyright infringement. (Wired News) For more on the implications of the Supreme Court's decision in MGM v. Grokster, see [65]an article from The Chronicle by Andrea L. Foster. Ready for the Digital Revolution By the year 2020 almost every piece of research published in the United Kingdom will be available online. And only [69]one in 10 newly-published articles will appear in print, according to a study commissioned by the British Library. That's a "seismic shift" for the publishing industry, says Lynne Brindley, the library's chief executive. 
The library will prepare, she says, by spending the next three years bolstering its technology for storing and organizing digital material. (BBC News) Peer-to-Peer, Legally The Supreme Court's Grokster decision has put most peer-to-peer networks on shaky legal ground, but it could be just what the doctor ordered for a file-swapping service called Mashboxx. The network is attempting to establish itself as [73]a legal peer-to-peer option by persuading record companies to let people download songs and play them a few times before buying the tunes. (PC Pro) Mashboxx's founder, Wayne Rosso, is [74]a familiar face to observers of the file-sharing wars: He was once president of Grokster. But Mr. Rosso has transformed himself from a thorn in the music industry's side into something of an ally -- much like Shawn Fanning, the founder of Napster, whose SnoCap software helps record companies track their songs on Mashboxx. (The Washington Post) Physicists Are People Too Quantum Diaries, a Web site featuring [78]blogs by researchers, highlights the minutiae of those who study the minutiae. (The Chronicle, subscription required) 5.6.29 RIAA Fires Off More Antipiracy Lawsuits Lawyers for the Recording Industry Association of America might be flush with victory after the Supreme Court's Grokster decision on Monday, but they still have plenty of work to do: Today the RIAA announced a new batch of lawsuits against people suspected of online piracy of copyrighted songs. A total of 784 people were identified in this month's suits, including a number of Grokster users. But recording studios did not say whether any of the defendants were suspected of sharing songs on campus networks. Song-Swapping by Subscription How do you make a music-downloading subscription service more appealing to college students? Try adding a dash of iTunes to the enterprise. Ruckus, a company that offers free music and movie downloads to students at subscribing colleges, has introduced a tool that lets students create public playlists of the tunes they've downloaded using the service. At some universities, students have taken to browsing each others' iTunes folders as a social activity, and Ruckus hopes that it can get its subscribers to do the same. But unlike iTunes, Ruckus will let students download songs from their classmates' computers, instead of having to gather them from the service's centralized database. For more on the social side of file swapping, see [85]an article from The Chronicle by Scott Carlson. Students Still Swapping Software One in three college students considers illegal file sharing to be unequivocally wrong, according to [89]a new survey commissioned by the Business Software Alliance. For software manufacturers, that's hardly a heartening statistic, but it is an improvement over the 2003 survey, which found that only 23 percent of students felt that way. The new survey, conducted by the research firm Ipsos, paints a cloudy picture of the software industry's antipiracy efforts: More than 60 percent of students said they rarely or never paid for commercial software. And while 44 percent of students said their campuses had official policies on downloading (up from 28 percent in the 2003 survey), there was no consensus on whether campus antipiracy tactics were effective. Critics of the alliance say the study just shows that the software industry is fighting an unpopular battle. 
Stephen Downes, of [90]OLDaily, has argued that the 2003 survey showed "massive support in the student population for file sharing and attitudes ranging from [91]indifference to support among the professors."

The Perils of Podcasting

Will Apple iTunes' [95]new podcasting venture run afoul of copyright law? It's conceivable, some experts say, depending on how the Supreme Court's Grokster decision is interpreted. The podcasting service, which made its debut on Tuesday, allows iTunes users to publish their own podcasts -- homemade radio programs that people can download automatically to their iPods or other portable MP3 players. Apple has said that it plans to monitor submitted podcasts for violations of copyright. But if some infringing material does sneak onto iTunes, the company could find itself in [96]uncharted territory, some analysts say. (Wired News)

References 46. http://www.timesdispatch.com/servlet/Satellite?pagename=RTD%2FMGArticle%2FRTD_BasicArticle&c=MGArticle&cid=1031783599826&path=!news&s=1045855934842 50. http://www.newsday.com/technology/business/wire/sns-ap-internet-intelligence,0,5425147.story?coll=sns-ap-technology-headlines 54. http://www.latimes.com/technology/la-fi-piracy1jul01,1,4223112.story?coll=la-headlines-technology 55. http://wiredcampus.chronicle.com/2005/03/guilty_pleas_fo.html 59. http://chronicle.com/prm/weekly/v51/i43/43a02702.htm 63. http://web.archive.org/web/20010710021553/http://bitconjurer.org/a_technological_activists_agenda.html 64. http://www.wired.com/news/digiwood/0,1412,68046,00.html?tw=wn_tophead_2 65. http://chronicle.com/daily/2005/06/2005062801t.htm 73. http://www.pcpro.co.uk/news/74631/p2p-company-to-offer-legal-file-sharing-of-sony-material.html 74. http://www.washingtonpost.com/wp-dyn/articles/A18568-2004Dec22.html 78. http://chronicle.com/prm/weekly/v51/i43/43a01101.htm 85. http://chronicle.com/prm/weekly/v50/i37/37a03201.htm 88. http://wiredcampus.chronicle.com/2005/06/songswapping_by.html#trackback 89. http://www.bsa.org/usa/press/newsreleases/Nationwide-Survey-June-2005.cfm 90. http://www.downes.ca/news/OLDaily.htm 91. http://www.downes.ca/cgi-bin/website/find.cgi?string=site~Business%20Software%20Alliance 95. http://wiredcampus.chronicle.com/2005/06/pod_people.html 103. http://wiredcampus.chronicle.com/2005/07/the_economics_o.html 104. http://wiredcampus.chronicle.com/2005/07/the_dsl_curve.html 105. http://wiredcampus.chronicle.com/2005/07/justice_departm.html 106. http://wiredcampus.chronicle.com/2005/07/beep_beep.html 107. http://wiredcampus.chronicle.com/2005/06/supporting_pira.html 108. http://wiredcampus.chronicle.com/2005/06/ready_for_the_d.html 109. http://wiredcampus.chronicle.com/2005/06/peertopeer_lega.html 110. http://wiredcampus.chronicle.com/2005/06/physicists_are_.html 111. http://wiredcampus.chronicle.com/2005/06/riaa_fires_off_.html 112. http://wiredcampus.chronicle.com/2005/06/songswapping_by.html 113.
http://wiredcampus.chronicle.com/archives.html 114. http://wiredcampus.chronicle.com/2005/07/01/index.html 115. http://wiredcampus.chronicle.com/2005/06/30/index.html 116. http://wiredcampus.chronicle.com/2005/06/29/index.html 117. http://wiredcampus.chronicle.com/2005/06/28/index.html 118. http://wiredcampus.chronicle.com/2005/06/27/index.html 119. http://wiredcampus.chronicle.com/2005/06/24/index.html 120. http://wiredcampus.chronicle.com/2005/06/23/index.html 121. http://wiredcampus.chronicle.com/2005/06/22/index.html 122. http://wiredcampus.chronicle.com/2005/06/21/index.html 123. http://wiredcampus.chronicle.com/2005/06/20/index.html E-mail me if you have problems getting the referenced articles. From checker at panix.com Wed Jul 6 21:19:41 2005 From: checker at panix.com (Premise Checker) Date: Wed, 6 Jul 2005 17:19:41 -0400 (EDT) Subject: [Paleopsych] CHE: Buzzwords and Their Evolving Meanings Message-ID: Buzzwords and Their Evolving Meanings The Chronicle of Higher Education, 5.7.8 http://chronicle.com/weekly/v51/i44/44b00201.htm Fictions Peter Brooks, a professor of English and law at the University of Virginia and author of Realist Vision (Yale University Press, 2005): "Fictions" has to my mind become a crucial term for literary studies, perhaps for the humanities in general. By "fictions" I mean something other than the distinction you find at Barnes & Noble between fiction and nonfiction, though that's not irrelevant. Fictions as I want to use the term points us toward the realm of the self-consciously made-up (from fingere: to make, and to make-believe): works of the imagination that know they are that. I think I first saw the term used in this way in Frank Kermode's The Sense of an Ending (1967), one of those books that has only become more important over time. But perhaps the main source for the modern prominence of the word is Jorge Luis Borges's Ficciones (1944) -- that remarkable set of short meta-fictions, stories that comment on the very process of invention. And one could of course trace the word and concept much farther back, to Hans Vaihinger's philosophy of the "as-if," to Jeremy Bentham, to Jean-Jacques Rousseau, and so on. Kermode distinguishes between myth and fiction: Myth is a kind of degraded fiction to which individuals and cultures accord totalizing explanatory power, such as "the master race" or "the war on terror." In this sense, fiction is not opposed to reality but the condition for distinguishing the real from the cultural overlays, the ideologies and false consciousness, that mask it. As Roland Barthes taught us, that which is presented as "nature" is often simply a cultural myth. So it is not surprising that another term struggling to re-emerge after decades of eclipse is "realism" -- set in opposition to distorting fantasies. *** Sexuality Dagmar Herzog, an associate professor of history at Michigan State University and author of Sex After Fascism: Memory and Morality in Twentieth-Century Germany (Princeton University Press, 2005): "Sexuality" emerged as a keyword among historians in the mid-1970s. For the first three decades of its existence, the history of sexuality tended to be about sexual mores and practices. Among other things we learned that in 19th-century Europe it had been considered normal for middle-class men to have sex with prostitutes both before and during marriage. Infanticide had been a common strategy of family planning in early modern Europe. 
And same-sex activities for both men and women were considered unremarkable in 18th- and 19th-century America. In sum, we learned that the most intimate realms of human behavior have changed dramatically over time. Increasingly historians of sexuality turned their attention to the 20th century. This was the century when sexuality became ever more central to an individual's identity. It was also the century when sex became a crucial marketing tool and a major engine of economic development. In addition, laws relating to sexuality became an ever greater focus of political and cultural conflict, from abortion to homosexual rights, from pornography to sex education. Until recently, in telling the history of sexuality in the 20th century, scholars often organized their accounts as stories of progress: Things were bad, then they were better. The early 21st century, however, has seen this mood of optimism come undone. Scholars of sexuality are now seeking to make sense of the bodily and emotional dissociations resulting from psychopharmaceuticals (from Prozac to Viagra) and cybersex. They are also struggling to understand the popular appeal of religious and sexual conservatism within both Christianity and Islam. In the process, the keyword "sexuality" is losing many of its post-1960s associations with emancipatory impulses. *** Suburbia Corey Dolgon, an associate professor and chairman of sociology at Worcester State College and author of The End of the Hamptons: Scenes From the Class Struggle in America's Paradise (New York University Press, 2005): Historically, the suburbs originate in literature in the first industrial aristocracy's efforts to develop what Leo Marx called a "middle landscape, somewhere between the chaos, garbage, and immigrant-dense metropolis" and the "uncivilized, provincial, and poor countryside." But suburbanization quickly came to represent not the exurban enclaves of Long Island's Gold Coast or Philadelphia's Main Line, but the more middle-class, commuter subdivision of Levittown. This transformation chronicles what Robert Fishman called the "rise and fall" of bourgeois utopia. Like cities, the suburbs have always inspired both utopian and dystopian images. Bucolic and independent, suburbs came to embody the American dream of homeownership, good schools, clean parks, and safe, finely manicured neighborhoods. By now, however, most scholars agree that neither depiction is accurate as suburbs are commercially, demographically, and aesthetically diverse at the same time that they also suffer from social problems once associated only with the density and heterogeneity of urban centers. More people who live in suburbs actually work in suburbs. More economic growth, housing development, and retail sales now take place outside the urban core. Increasingly, immigrants bypass cities for suburban jobs in landscaping, construction, retail, and other services. Even terms like "urban sprawl" have been transformed into "suburban sprawl." Distant resort areas like Cape Cod have become year-round residences for middle- and upper-class urban refugees who today worry about chain stores, fast-food joints, and affordable tract housing mucking up their farm views. In the minds of Americans, for sure, the suburbs remain a contested cultural construction, much like the middle class itself, where generations will continue to struggle over how one inscribes the physical and cultural landscape with a mass-produced and mass-consumed vision of the "good life." 
*** Empire Dane Kennedy, a professor of history and international affairs at George Washington University and author of The Highly Civilized Man: Richard Burton and the Victorian World (Harvard University Press, 2005): "Empire," along with "imperialism," is one of those terms that historians can't seem to do without, but can't manage to agree upon. No sooner does one group figure they've got it contained within definitional boundaries than another group drives it in a new direction -- rather like empires themselves. Two main issues are responsible for the term's instability. The first has to do with its morality. When history developed into a modern discipline, many of its classically educated practitioners equated empire with Rome, and Rome with civilization and progress. By this standard, empire was a good thing. Others, however, associated it with the tyranny of "Oriental despots" or the military aggression of Napoleon. For them, empire was a bad thing. This dispute about the moral merits of empire has persisted to the present day. The issue of instrumentality also affects the varied uses of the term. In the estimation of some historians, empire is first and foremost a political phenomenon, involving the rule of one people over another. Others see it as essentially an instrument of economic forces, often unconstrained by any need for direct governance. Still others believe that empire's greatest significance lies in its cultural power, its ability to get into peoples' heads. For the past decade or so, advocates of the cultural approach have had a particularly good run. Doing history has always involved a dialogue between the past and the present, so it's no surprise that the U.S. invasions of Iraq and Afghanistan have intensified interest in empire among historians, who are contending anew about its instrumentality and its morality. *** Taste Denise Gigante, an assistant professor of English at Stanford University and author of Taste: A Literary History (Yale University Press, 2005): There has been a shift recently in the connotation of the cultural keyword "taste." This term was a virtual obsession in 18th-century Europe as a synonym for discernment, or aesthetic connaissance. The connoisseur was an art appreciator and a person (usually a man) of letters. The forerunner of the 20th-century literary critic was the Man of Taste. But the present shift in attention, through cultural studies, from the high arts to the low, from poetry to food and other everyday matters not associated with the patriarchal elite, has brought to light an important shift that took place at the turn of the 19th century in the discursive field of taste from the sublime to the stomach, as it were. Taste became embodied as a concept and associated more and more with the food and wine connoisseur, who showed individual distinction through fine dining. Eventually the display of savoir-faire among flavors came to assume an equal footing with -- if it did not assume cultural priority over -- what was once called a fine taste in the arts. Today taste is confounded with physical pleasure(s) to the degree that we associate gastronomy -- or an aesthetic appreciation for food -- with our popular food culture, expressing its standards and principles through gourmet magazines and journalism (restaurant reviews, televised food shows, and so forth). 
But a growing subfield within literary studies has grown to understand the consumerist aspect of taste as nothing other than a cultural-material expansion of the 18th-century philosophical discourse of taste. *** Rationality Edward C. Rosenthal, an associate professor of management science and operations management at the Fox School of Business and Management at Temple University and author of The Era of Choice: The Ability to Choose and Its Transformation of Contemporary Life (MIT Press, 2005): Not so long ago, we thought we knew what "rationality" was. But are we, in fact, rational beings? Try this: Would you prefer $100 right now or $110 a month from now? Would you prefer to pay a fine of $40 or else gamble on a coin flip in which you pay $100 on heads but pay nothing on tails? Many of us would select the $100 in the first scenario and gamble in the next one. Such "irrational" behavior defies conventional economic theory, and, as we are discovering, to get to the source of the problem, we need to get our heads examined -- literally. By the early 1970s, economists and decision theorists had seemingly triumphed in their quest to work out mathematical models of optimal behavior when we exchange goods with others. And since notions like supply and demand and expected utility plausibly explained much of human behavior, the assumption that we operate as Homo economicus was not unreasonable. But for 25 years now evidence has been piling up that our behavior does not always fit the models. This is not to say that we are merely rationally challenged beings. Rather, there might be a method to our madness. Foraging theory, for example, has shown that even animal behavior fits rigorous economic models. For us, perhaps risk averseness is best in certain circumstances. Perhaps emotion, not intellect, is at times the superior guide. Perhaps hot impulsiveness can be more adaptive than cool patience. As we begin to unravel the complexities of rationality, it is very exciting to track the progress being made in behavioral-decision theory, intertemporal choice, neuroeconomics, and other fields in which the goal is to redefine, rather than dismantle, the notion of humans as rational actors. From checker at panix.com Wed Jul 6 21:19:48 2005 From: checker at panix.com (Premise Checker) Date: Wed, 6 Jul 2005 17:19:48 -0400 (EDT) Subject: [Paleopsych] CHE: Novel Perspectives on Bioethics (letters appended) Message-ID: CHE: Novel Perspectives on Bioethics (letters appended) The Chronicle of Higher Education, 5.5.13 http://chronicle.com/weekly/v51/i36/36b00601.htm By MARTHA MONTELLO On March 16, the Kansas Legislature heatedly debated a bill that would criminalize all stem-cell research in the state. Evangelical-Christian politicians and conservative lawmakers argued with molecular biologists and physicians from the University of Kansas' medical school about the morality of therapeutic cloning. Up against a substantial audience of vocal religious conservatives, William B. Neaves, CEO and president of the Stowers Institute for Medical Research, a large, privately financed biomedical-research facility in Kansas City, began his impassioned defense of the new research by giving his credentials as "a born-again Christian for 30 years." Barbara Atkinson, executive vice chancellor of the University of Kansas Medical Center, tried to articulate the difference between "a clump of cells in a petri dish" and what several hostile representatives repeatedly interrupted to insist is "early human life." 
Clearly, in this forum, language mattered. Each word carried wagonloads of moral resonance. I am a literature professor. I was at the hearing because I am also chairwoman of the pediatric-ethics committee at the University of Kansas Medical Center. I listened to the debates get more and more heated as the positions got thinner and more polarized, and I kept thinking that these scientists and lawmakers needed to read more fiction and poetry. Leon R. Kass, chairman of the President's Council on Bioethics, apparently feels the same way. He opened the council's first session by asking members to read Hawthorne's story "The Birthmark,"and he has since published an anthology of literature and poetry about bioethics issues. The fight in Kansas (the bill was not put to a vote) is in some ways a microcosm of what has been happening around the country. From Kevorkian to Schiavo, cloning to antidepressants, issues of bioethics increasingly underlie controversies that dominate public and political discussion. Decisions about stem-cell research, end-of-life choices, organ transplantation, and mind- and body-enhancing drugs, among others, have become flash points for front-page news day after day. At the same time, some good literary narratives have emerged over the past few years that reveal our common yet deeply individual struggles to find an ethics commensurate with rapid advances in the new science and technologies. Kazuo Ishiguro's elegiac, disturbing new novel, Never Let Me Go, re-imagines our world in a strange, haunting tale of mystery, horror, love, and loss. Set in "England, 1990s," the story is pseudohistorical fiction with a hazy aura of scientific experimentation. A typical Ishiguro narrator, Kathy H. looks back on her first three decades, trying to puzzle out their meaning and discern the vague menace of what lies ahead. In intricate detail she sifts through her years at Hailsham, an apparently idyllic, if isolated, British boarding school, "in a smooth hollow with fields rising on all sides." Kathy and the other students were nurtured by watchful teachers and "guardians," who gave them weekly medical checks, warned them about the dangers of smoking, and monitored their athletics triumphs and adolescent struggles. Sheltered and protected, she and her friends Ruth and Tommy always knew that they were somehow special, that their well-being was important to the society somewhere outside, although they understood that they would never belong there. From the opening pages, a disturbing abnormality permeates their enclosed world. While the events at Hailsham are almost absurdly trivial -- Tommy is taunted on the soccer field, Laura gets caught running through the rhubarb garden, Kathy loses a favorite music tape -- whispered secrets pass among guardians and teachers, and the atmosphere is ominous -- as Kathy puts it, "troubling and strange." The children have no families, no surnames, no possessions but castoffs -- other people's junk. Told with a cool dispassion through a mist of hints, intuitions, and guesses, Kathy's memories gradually lift the veil on a horrifying reality: These children were cloned, created solely to become organ donors. Once they leave Hailsham (with its Dickensian reverberations of Havisham, that ghostly abuser of children) they will become "caregivers," then "donors," and if they live to make their "fourth donation," will "complete." 
The coded language that Kathy has learned to describe her fate flattens the unthinkable and renders it almost ordinary, simply what is, so bloodlessly that it heightens our sense of astonishment. What makes these doomed clones so odd is that they never try to escape their fate. Almost passive, they move in a fog of self-reinforced ignorance, resigned to the deadly destiny for which they have been created. However, in a dramatic scene near the end of the novel, Kathy and Tommy do try to discover, from one of the high-minded ladies who designed Hailsham, if a temporary "deferral" is possible. It is too late for any of them now, the woman finally divulges. Once the clones were created, years ago during a time of rapid scientific breakthroughs, their donations became the necessary means of curing previously incurable conditions. Society has become dependent on them. Now there is no turning back.The only way people can accept the program is to believe that these children are not fully human. Although "there were arguments" when the program began, she tells them, people's primary concern now is that their own family members not die from cancer, heart disease, diabetes, or motor-neuron diseases. People outside prefer to believe that the transplanted organs come from nowhere, or at least from beings less than human. Readers of Ishiguro's fiction will recognize his mastery in creating characters psychologically maimed by an eerie atrocity. From his debut novel, A Pale View of Hills (Putnam, 1982), Ishiguro's approach to horror has been oblique, restrained, and enigmatic. The war-ravaged widow from Nagasaki in that work presages the repressed English butler of The Remains of the Day (Random House, 1990) and Kathy herself, all long-suffering victims with wasted lives whose sense of obligation robs them of happiness. Their emotions reined in, their sight obscured, they are subject to wistful landscapes, long journeys, and a feeling of being far from the possibility of home and belonging. Never Let Me Go, however, ventures onto new terrain for Ishiguro by situating itself within current controversies about scientific research. Taking on some of the moral arguments about genetic engineering, the novel inevitably calls into question whether such fiction adds to the debates or clouds them -- and whether serious fiction about bioethics is enriched by the currency of its topic or hampered by it. Here Ishiguro's novel joins company with others that are centered in contemporary bioethics issues and might be considered a genre of their own. A decade ago, Doris Betts penetrated the intricate emotions around living donors' organ transplantation in her exquisitely rendered Souls Raised From the Dead. The novel offered a human dimension and nuanced depth to this area of medical-ethics deliberations, which were making headline news. In Betts's story, a dying young daughter needs as close a match as possible for a new kidney. Her parents face complexities and contradictions behind informed consent and true autonomy that are far more subtle, wrenching, and real than any medical document or philosophy-journal article can render. Betts does justice to the medical and moral questions surrounding decisions that physicians, patients, and families must make regarding potential organ donations. What makes the book so compelling, though, is its focus on the various and often divergent emotional strategies that parents and children use to cope with fear, sacrifice, and impending loss. 
The 13-year-old Mary Grace, her parents, and grandparents reveal themselves as fully rounded, noninterchangeable human beings who come to their decisions and moral understandings over time, within their own unique personal histories and relationships with each other. As the therapeutic possibilities of transplant surgery were breaking new ground in hospitals across the country, surgeons, families, and hospital ethics committees grappled with dilemmas about how to make good choices between the medical dictum to "do no harm" and the ethical responsibility to honor patients' sovereignty over their own bodies. Betts's novel captured the difficulty of doing the right thing for families enduring often inexpressible suffering: How much sacrifice can we expect of one family member to save another? The ethical complexities regarding organ donations, and particularly the dilemmas associated with decisions to conceive children as donors, are escalating. Four years ago The New York Times reported on two families who each conceived a child to save the life of another one. Fanconi anemia causes bone-marrow failure and eventually leukemia and other kinds of cancer. Children born with the disease rarely live past early childhood. Their best chance of survival comes from a bone-marrow transplant from a perfectly matched sibling. Many Fanconi parents have conceived another child in the hope that luck would give them an ideal genetic match. These two couples, however, became the first to use new reproductive technologies to select from embryos resulting from in vitro fertilization, so they could be certain that this second baby would be a perfect match. When the article appeared in the Times, many people wondered if it is wrong to create a child for "spare parts." News reports conjured up fears of "Frankenstein medicine." State and federal legislatures threatened laws to ban research using embryos. A fictional version of this dilemma appears in Jodi Picoult's novel My Sister's Keeper. Picoult, a novelist drawn to such charged topics as teen suicide and statutory rape, takes up this bioethics narrative of parents desperate to save a sick child through the promise of genetic engineering. Conceived in that way, Anna Fitzgerald has served since her birth as the perfectly matched donor for her sister, Kate, who has leukemia, supplying stem cells, bone marrow, and blood whenever needed. Now, though, as her sister's organs begin to fail, the feisty Anna balks when she is expected to donate a kidney. Through alternating points of view, Picoult exposes the family's moral, emotional, and legal dilemmas, asking if it can be right to use -- and perhaps sacrifice -- one child to save the life of another. The story draws the reader in with its interesting premise -- one sister's vital needs pitted against the other's -- but ultimately disintegrates within a melodramatic plot that strands its underdeveloped characters. Why is the girls' mother so blind and deaf to Anna's misgivings about her role as donor? How can we possibly believe the contrived ending, which circumvents the parents' need to make a difficult moral choice? Ultimately the novel trivializes what deserves to be portrayed as a profoundly painful Sophie's choice, using the contentious bioethics issue as grist for a kind of formulaic writing. 
While authors like Betts and Picoult have examined ethical dilemmas of the new science in a style that might be called realistic family drama, others lean toward science fiction, imagining dystopian futures that are chillingly based on the present. Often prescient, they reflect our unarticulated fears, mirroring our rising anxiety about where we are going and who we are becoming. In addressing concerns about cloning, artificial reproduction, and organ donation, these novels join an even broader, older genre, the dystopian novels of the biological revolution. In 1987 Walker Percy published The Thanatos Syndrome, a scathing fictional exploration of what the then-new psychotropic drugs might mean to our understanding of being human. In this last and darkest novel by the physician-writer, the psychiatrist Tom More stumbles on a scheme to improve human behavior by adding heavy sodium to the water supply. After all, the schemers argue, what fluoride has done for oral hygiene, we might do for crime, disease, depression, and poor memory! More is intrigued but ultimately aghast at the consequences: humans reduced to lusty apes with no discernible soul or even self-consciousness. Percy cleverly captures many of our qualms about such enhancement therapies in a fast-paced plot that reads like a thriller. Many readers, however, feel that this sixth and final novel is the least compelling of Percy's oeuvre, emphasizing his moral outrage over the excesses of science at the expense of a protagonist's spiritual and emotional journey that had previously been the hallmark of his highly acclaimed fiction. With less dark humor but equal verve, Margaret Atwood's Oryx and Crake chronicles the creation of a would-be paradise shaped and then obliterated by genetic manipulation. Echoes of her earlier best seller The Handmaid's Tale (Houghton Mifflin, 1986) reverberate through this postapocalyptic world set in an indeterminate future, where Snowman, the proverbial last man alive, describes how the primal landscape came to be after the evisceration of bioengineering gone awry. A modern-day Robinson Crusoe, Snowman is marooned on a parched beach, stranded between the polluted water and a chemical wasteland that has been stripped of humankind by a virulent plague. Once he melts away, even the vague memories of what was will have disappeared. As in other works of science fiction, while its plot complications drive the narrative, its powerful conceptual framework dominates the stage. For all it lacks in character complexity and realistic psychological motivations, this 17th book of Atwood's fiction has a captivating Swiftian moral energy, announced in the opening quotation, from Gulliver's Travels: "My principal design was to inform you, and not to amuse you."Readers, however, might wish that Atwood had made a stronger effort to amuse us. Her ability to sustain our interest is challenged by the story's unremitting bleakness and the lack of real moral depth to its few characters. Even with its weaknesses, Atwood's is a powerful cautionary tale, similar in some ways to Caryl Churchill's inventive play A Number (2002). That drama is constructed of a series of dialogues in which a son confronts his father (named Salter) with the news that he is one of "a number" of clones (all named Bernard). Years ago, grieving the death of his wife, Salter was left to raise a difficult son, lost to him in some deep way, whom he finally put into "care." Sometime later, wanting a replacement for the lost son, he had the boy cloned. 
Without his knowledge, 19 others were created, too. Now Salter hears not only the emotional pain and anger of his original troubled son, but also the harrowing psychological struggles of several of the cloned Bernards. Salter responds with a mix of anguish and resignation as he faces the consequences of decisions he once made without much thought. This strange play winds through an ethical maze as each of the characters desperately tries to come to some livable terms with what genetic engineering has wrought. The drama is inventive in both its staccato elliptical dialogues and its sheer number of existential and ethical ideas. In the end, though, the characters never emerge as human, never engage us sufficiently to make us care about their ordeals with selfhood and love. When Salter says to one of the cloned Bernards, "What they've done they've damaged your uniqueness, weakened your identity," it is difficult to believe that they were ever capable of possessing either. Although Churchill's nightmare may seem especially odd, her tale of violence, deception, and loss resonates with those of Betts, Ishiguro, and Picoult. What if you might lose your child? If the means were available, would you take any chance, do anything, to save her? Or, if lost to you, to bring him back? All of these stories have in common their underlying questions about where bioengineering is leading us, what kinds of choices it asks us to make, and where the true costs and benefits lie. What makes the stories different from other forms of ethical inquiry is their narrative form, their way of knowing as literature. John Gardner reminds us that novels are a form of moral laboratory. In the pages of well-written fiction, we explore the way a unique human being in a certain set of circumstance makes moral decisions and lives out their consequences. Some of the novels being written now offer valuable cautionary tales about what is at stake in our current forays into new science and technology, asking us, as Ishiguro does in Never Let Me Go, What is immutable? What endures? What is essential about being human? Where does the essential core of identity lie? Does it derive from nature or nurture, from our environment or genetics? But the best go further. As Ishiguro's does, they take the bioethics issue as a fundamental moral challenge. Instead of using an aspect of bioethics as an engine to drive the plot, some authors succeed in using it as a prism that shines new light onto timeless questions about what it means to be fully human. At its heart, Ishiguro's tale has very little to do with the specific current controversies over cloning or genetic engineering or organ transplantation, any more than The Remains of the Day has to do with butlering or A Pale View of Hills has to do with surviving the atomic bomb. By the end of the novel, we discover that Never Let Me Go is, if cautionary, also subtler and more subversive than we suspected. Tommy and Ruth are already gone, and Kathy herself is ready to begin the "donations" that will lead to her own "completion." During one of her long road trips, she stops the car for "the only indulgent thing" she's ever done in a life defined by duty and "what we're supposed to be doing." Looking out over an empty plowed field, just this once she allows herself to feel an inkling of what she's lost and all she will never have. At this moment, we realize ourselves in Kathy, and we see her foreshortened and stunted life as not so very different from our own. 
The biological revolution's greatest surprise of all may be that its dilemmas are not really new. Instead, it may simply deepen the ones we've always faced about how to find meaning in our own lives and the lives of others. Martha Montello is an associate professor in the department of history and philosophy of medicine and director of the Writing Resource Center in the School of Medicine at the University of Kansas. She also lectures on literature and ethics at the Harvard-MIT Division of Health Sciences & Technology, and co-edited Stories Matter: The Role of Narrative in Medical Ethics (Routledge, 2002). WORKS DISCUSSED IN THIS ESSAY My Sister's Keeper, by Jodi Picoult (Atria, 2004) Never Let Me Go, by Kazuo Ishiguro (Knopf, 2005) A Number, by Caryl Churchill (a 2002 play published by Theatre Communications Group in 2003) Oryx and Crake, by Margaret Atwood (Nan A. Talese, 2003) Souls Raised From the Dead, by Doris Betts (Knopf, 1994) The Thanatos Syndrome, by Walker Percy (Farrar, Straus and Giroux, 1987) ------------------- Does It Make Sense to Use Fiction as a Guide to Bioethics? The Chronicle of Higher Education, 5.7.8 http://chronicle.com/weekly/v51/i44/44b01301.htm To the Editor: In "Novel Perspectives on Bioethics" (The Review, May 13), Martha Montello discusses the treatment of bioethical issues in literary works that "lean toward science fiction" without discussing any examples from science fiction itself, which offers interesting, incisive considerations of bioethical concerns in abundance. Examples include Nancy Kress's 1993 Beggars in Spain, about genetic engineering, and Nancy Farmer's 2002 young-adult novel about cloning, The House of the Scorpion. These books and many other examples of science fiction -- contrary to Montello's implication that science fiction is a lesser literature dominated by plot and concept -- shine, like the works she discusses, "new light onto timeless questions about what it means to be fully human" through three-dimensional characters and well-crafted prose. As I argue in my forthcoming book, Understanding Contemporary American Science Fiction: The Age of Maturity, 1970-2000, the traditional disdain for science fiction among many literary scholars should not blind readers to its virtues -- including what it has to contribute to discussions about how science affects our lives. Darren Harris-Fain Associate Professor of English Shawnee State University Portsmouth, Ohio *** To the Editor: I read with some measure of consternation Martha Montello's essay, in which she advises those Christian fundamentalists and their opponents who have been debating bioethical issues in the Kansas Legislature to read more fiction and poetry in order to refine their ethical barometers. Although the proposal is a sound one, the reasons Professor Montello adduces -- viz., that many works of fiction, a few of which she discusses in some detail, address bioethical issues in a thoughtful and sensitive fashion -- are not at all compelling. In fact, resorting to works of art to gain knowledge about how bioethical controversies should be resolved makes Plato's decision to exile the poets seem appealing. Great works of art are vivid and moving, but they necessarily lack the systematic approach and appropriately cold-blooded rationality that other disciplines, such as ethical philosophy, possess in their best moments. Kazuo Ishiguro's portrait of Kathy H. 
in Never Let Me Go may be compelling as an aesthetic matter, but anyone who concludes that cloning is a horrific enterprise on the strength of that portrait is allowing an emotional reaction to a piece of science fiction to interfere with the logical scrutiny of reality. Great literature may be instructive, but not like this. One should not turn to Ishiguro to discern bioethical truths any more than one would turn to Shakespeare's history plays to learn British history. Rather, such literature teaches us how to engage our world in a subtler, more nuanced fashion, to make fine distinctions in lieu of crude, categorical ones. And it cultivates in us the skill of textual interpretation, a significant ability in a forum such as the Kansas Legislature, where, as Montello observes, "language mattered." It teaches those who would read biblical texts in a fundamentalist fashion about the use of tropes and other literary devices that permit nonliteral readings of texts and that allow the reader to get beyond the dangerous, antipodean view that such works are either factually true or else false. After all, as Nietzsche explained in responding to Plato, art is the one thing that is true because it treats appearance as appearance; its aim is precisely not to deceive. To use art as a source of philosophical or factual truths, as Professor Montello proposes, therefore, is no less an abuse than the manner in which religious conservatives seek to employ the Bible to shape their beliefs about a world far too nuanced to be contained by its well-worn pages. Alex Zubatov New York * * * To the Editor: I wish I could be more sanguine that the study of literature leads to a better bioethics. Not that I'm a disbeliever in literature. I've written two novels (that never found their way into print but that were popular with my friends, or so they diplomatically said). ... But contrary to her purpose, Martha Montello's tour of contemporary fiction shows that literature takes bioethics only so far. ... There are plenty of literary potboilers that don't help bioethics at all, and it's an open question whether fiction overall enlightens more than it stultifies. Montello says that fiction reveals "our common yet deeply individual struggles to find an ethics commensurate with rapid advances in the new science and technologies." Maybe, but there's still a step missing: How do works of imagination improve either the framing or the resolution of legal and policy decisions? ... I would be better persuaded of the value of fiction and poetry to bioethics if Montello showed how this literature actually helped answer important questions of the day, in ways that were superior to the answers arrived at without reference to someone's idea of the literary canon. Timothy F. Murphy Professor of Philosophy in the Biomedical Sciences College of Medicine University of Illinois Chicago From checker at panix.com Wed Jul 6 21:20:58 2005 From: checker at panix.com (Premise Checker) Date: Wed, 6 Jul 2005 17:20:58 -0400 (EDT) Subject: [Paleopsych] Telegraph: Susan Blackmore: I take illegal drugs for inspiration Message-ID: http://www.susanblackmore.co.uk/journalism/telegraphdrugs.htm Daily Telegraph, Saturday May 21st 2005, pp 17-18 (Note: This version is very slightly different from the published, edited, version) Every year, like a social drinker who wants to prove to herself that she's not an alcoholic, I give up cannabis for a month. 
It can be a tough and dreary time - and much as I enjoy a glass of wine with dinner, alcohol cannot take its place. Some people may smoke dope just to relax or have fun, but for me the reason goes deeper. In fact, I can honestly say that without cannabis, most of my scientific research would never have been done and most of my books on psychology and evolution would not have been written. Some evenings, after a long day at my desk, I'll slip into the bath, light a candle and a spliff, and let the ideas flow - that lecture I have to give to 500 people next week, that article I'm writing for New Scientist, those tricky last words of a book I've been working on for months. This is the time when the sentences seem to write themselves. Or I might sit out in my greenhouse on a summer evening among my tomatoes and peach trees, struggling with questions about free will or the nature of the universe, and find that a smoke gives me new ways of thinking about them. Yes, I know there are serious risks to my health, and I know I might be caught and fined or put in prison. But I weigh all this up, and go on smoking grass. For both individuals and society, all drugs present a dilemma: are they worth the risks to health, wealth and sanity? For me, the pay-off is the scientific inspiration, the wealth of new ideas and the spur to inner exploration. But if I end up a mental and physical wreck, I hereby give you my permission to gloat and say: "I told you so". My first encounter with drugs was a joint shared with a college friend in my first term at Oxford. This was at the tail end of the days of psychedelia and flower power - and cannabis was easy to obtain. After long days of lectures and writing essays, we enjoyed the laughter and giggling, the heightened sensations and crazy ideas that the drug seemed to let loose. Then, one night, something out of the ordinary happened - though whether it was caused by the drug, lack of sleep or something else altogether, I don't know. I was listening to a record with two friends, sitting cross-legged on the floor, and I had smoked just enough to induce a mild synaesthesia. The sound of the music had somehow induced the sensation of rushing through a long, dark tunnel of rustling leaves towards a bright light. I love tunnels. They come on the verges of sleep and death and are well known in all the cultures that use drugs for ritual, magic or healing. The reason for them lies in the visual cortex at the back of the brain, where certain drugs interfere with the inhibitory systems, releasing patterns of circles and spirals that form into tunnels and lights. I didn't know about the science then. I was just enjoying the ride, when one of my friends asked a peculiar question: "Where are you, Sue?". Where was I? I was in the tunnel. No, I was in my friend's room. I struggled to answer; then the confusion cleared and I was looking down on the familiar scene from above. "I'm on the ceiling, " I said, as I watched the mouth down below open and close and say the words in unison. It was a most peculiar sensation. My friend persisted. Can you move? Yes. Can you go through the walls? Yes. And I was off exploring what I thought, at the time, was the real world. It was a wonderful feeling - like a flying dream, only more realistic and intense. The experience lasted more than two hours, and I remember it clearly even now. Eventually, it came to seem more like a mystical experience in which time and space had lost their meaning and I appeared to merge with the universe. 
Years later, when I began research on out-of-body and near-death experiences, I realised that I'd had all those now-familiar sensations that people report after close brushes with death. And I wanted to find out more. However, nothing in the physiology and psychology that I was studying could remotely begin to cope with something like this. We were learning about rats' brains, and memory mechanisms, not mind and consciousness - let alone a mind that could apparently leave its body and travel around without it. Then and there, I decided to become a parapsychologist and devote my life to proving all those closed-minded scientists wrong. But I was the one who was wrong. I did become a parapsychologist, but decades of difficult research taught me that ESP almost certainly doesn't exist and that nothing leaves the body during an out-of-body experience - however realistic it may feel. Although parapsychology gave me no answers, I was still obsessed with a scientific mystery: how can we explain the mind and consciousness from what we know about the brain? Like any conventional scientist, I carried out experiments and surveys and studied the latest developments in psychology and neuroscience. But since the object of my inquiry was consciousness itself, this wasn't enough. I wanted to investigate my own consciousness as well. So I tried everything from weird machines and gadgets to long-term training in meditation - but I have to admit that drugs have played a major role. Back in those student days, it was the hallucinogens, or "mind-revealing" psychedelics, that excited us - and the ultimate hallucinogen must be LSD. Effective in minuscule doses, and not physically addictive, LSD takes you on a "trip" that lasts about eight to 10 hours but can seem like forever. Every sense is enhanced or distorted, objects change shape and form, terrors flood up from your own mind, and you can find joy in the simplest thing. Once the trip has begun, there is no escape - no antidote, no way to stop the journey into the depths of your own mind. In my twenties, I used to take acid two or three times a year - and this was quite enough, for an acid trip is not an adventure to be undertaken lightly. I've met the horrors with several hallucinogens, including magic mushrooms that I grew myself. I remember once gazing at a cheerfully coloured cushion, only to see each streak of colour turn into a scene of rape, mutilation or torture, the victims writhing and screaming - and when I shut my eyes, it didn't go away. It is easy to understand how such visions can turn into a classic "bad trip" , though that has never happened to me. Instead, the onslaught of images eventually taught me to see and accept the frightening depths of my own mind - to face up to the fact that, under other circumstances, I might be either torturer or tortured. In a curious way, this makes it easier to cope with the guilt, fear or anxiety of ordinary life. Certainly, acceptance is a skill worth having - though I guess there are easier ways of acquiring it. Then there's the fun and just the plain strangeness of LSD. On one sunny trip in Oxford, my friend and I stopped under a vast oak tree where the path had been trampled into deep furrows by cattle and then dried solid by the hot weather. We must have spent an hour there, gazing in wonder at the texture of this dried mud; at the hills and valleys in miniature; at the hoof-shaped pits and sharp cliffs; at the shifting patterns in the dappled shade. 
I felt that I knew every inch of this special place; that I had an intimate connection with the mud. Suddenly, I noticed a very old man with a stick, walking slowly towards us on the path. Keep calm, I told myself. Act normal. He'll just say hello, walk by, and be gone. "Excuse me, young lady," he said in a cracked voice. "My eyes are weak and, in this light, I can't see my way. Would you help me across?" And so it was that I found myself, dream-like, guiding the old man slowly across my special place - a patch of mud that I knew as well as my own features. Two days later, my friend came back from lectures, very excited. "I've seen him. The man with the stick. He's real!" We both feared that we'd hallucinated him. Aldous Huxley once said that mescaline opened "the doors of perception"; it certainly did that for me. I took it one day with friends in the country, where we walked in spring meadows, identified wild flowers, marvelled over sparkling spider's webs and gasped at the colours in the sky that rippled overhead. Back at the farmhouse, I sat playing with a kitten until kitten and flowers seemed inextricable. I took a pen and began to draw. I still have that little flower-kitten drawing on my study wall today. On another wall is a field of daffodils in oils. One day, many years later, I went to my regular art class the day after an LSD trip. The teacher had brought in a bunch of daffodils and given us one each, in a milk bottle. Mine was beautiful; but I couldn't draw just one. My vision was filled with daffodils, and I began to paint, in bold colours, huge blooms to fill the entire canvas. I will never be a great painter but, like many artists through the ages, I had found new ways of seeing that were induced by a chemical in the brain. So can drugs be creative? I would say so, although the dangers are great - not just the dangers inherent in any drug use, but the danger of coming to rely on them too much and of neglecting the hard work that both art and science demand. There are plenty of good reasons to shun drug-induced creativity. Yet, in my own case, drugs have an interesting role: in trying to understand consciousness, I am taking substances that affect the brain that I'm trying to understand. In other words, they alter the mind that is both the investigator and the investigated. Interestingly, hallucinogens such as LSD and psilocybin are the least popular of today's street drugs - perhaps because they demand so much of the person who takes them and promise neither pleasure or cheap happiness. Instead, the money is all in heroin, cocaine and other drugs of addiction. I have not enjoyed my few experiences with cocaine. I don't like the rush of false confidence and energy it provides - partly because that's not what I'm looking for and partly because I've seen cocaine take people over and ruin their lives. But many people love it - and the dealers get rich on getting people hooked. This is tragic. In just about every human society there has ever been, people have used dangerous drugs - but most have developed rituals that bring an element of control or safety to the experience. In more primitive societies, it is shamans and healers who control the use of dangerous drugs, choose appropriate settings in which to take them and teach people how to appreciate the visions and insights that they can bring. In our own society, criminals control all drug sales. 
This means that users have no way of knowing exactly what they are buying and no-one to teach them how to use these dangerous tools. I have been lucky with my own teachers. The first time I took ecstasy, for example, I was with three people I had met at a Norwegian conference on death and dying. It was mid-summer, and they had invited me to join them on a trip around the fjords. One afternoon, we sat together and took pure crystals of MDMA - nothing like the frightening mixtures for sale on the streets today. MDMA has the curious effect of making you feel warm and loving towards everyone and everything around you: within a few short hours, we were all convinced that we knew each other in a deep and intimate way. Then we deliberately each set off alone to walk in the mountains, where the same feeling of love now seemed to encompass the entire landscape. I was told then that I should make the most of my first few experiences with MDMA because, after five or six doses, I would never get the same effects again. In my experience, this has been true, although prohibition makes it all but impossible to find such things out. In fact, we know horrifyingly little about the psychological effects of drugs that people take every day in Britain because scientists are not allowed to carry out the necessary research. That is why I've had to do my own. I once had an expert friend inject me with a high dose of ketamine because I had heard it could induce out-of-body experiences. Known as K, or Special K, on the street, this is an anaesthetic used more often by vets than anaesthetists because of its unpleasant tendency to produce nightmares. Get the dose right, as I did, and you are completely paralysed apart from the ability to move your eyes. This is not very pleasant. However, by imagining I was lifting out of my body, I felt I could fly, and I set off home to see what my children were up to. I was sure that I saw them playing in the kitchen; but when I checked the next day, I was told they had been asleep. Back in the room, my guide began holding up his fingers out of my line of vision and, as soon as my mouth started working again, made me guess how many. I seemed to see the fingers all right, but my guesses were totally wrong. I didn't repeat the experiment. It was not nearly as interesting as those drugs, such as LSD, psilocybin, DMT or mescaline, that undermine everything you take for granted. These are psychedelics that threaten our ordinary sense of self, and that is where they touch most deeply on my scientific interests. What is a self? How does the brain create this sense of being "me", inside this head, looking out at the world, when I know that behind my eyes there are only millions of brain cells - and nowhere for an inner self to hide? How can those millions of brain cells give rise to free will when they are merely physical and chemical machines? In threatening our sense of self, could it be that these drugs reveal the scary truth that there is no such thing? Mystics would say so. And, here, we hit an old and familiar question: do drugs and mystical experiences lead to the same "insights"? And are those insights true? Since those first trips, I have taken many other drugs - such as nitrous oxide, or laughing gas. For just a few moments, I have understood everything - "Yes, yes, this is so right, this is how it has to be" - and then the certainty vanishes and you cannot say what you understood. 
When Sir Humphry Davy, one of the first to experiment with nitrous oxide, took it himself in 1799, he exclaimed: "Nothing exists but thoughts". Others, too, have found their views profoundly shifted. It seems quite extraordinary to me that so simple a molecule can change one's philosophy, even for a few moments, yet it seems it can. Why does the gas make you laugh? Perhaps it is a reaction to a brief appreciation of that terrifying cosmic joke - that we are just shifting patterns in a meaningless universe. Are drugs the quick and dirty route to insight? I wanted to try the slow route, too. So I have spent more than 20 years training in meditation - not joining any cult or religion but learning the discipline of steadily looking into my own mind. Gradually, the mind calms, space opens up, self and other become indistinguishable, and desires drop away. It's an old metaphor, but people often liken the task to climbing a mountain. The drugs can take you up in a helicopter to see what's there, but you can't stay. In the end, you have to climb the mountain yourself - the hard way. Even so, by giving you that first glimpse, the drugs may provide the inspiration to keep climbing. Psychologist Susan Blackmore, neuroscientist Colin Blakemore and author Mike Jay will be appearing at the Cheltenham Science Festival (June 8-12) to discuss whether drugs can teach us anything about ourselves. For tickets to the Altered States session at the town hall (£6, 4pm on Saturday, June 11) or for any other festival event, please call 01242 227 979 (information: www.cheltenhamfestivals.org.uk) From waluk at earthlink.net Wed Jul 6 23:04:03 2005 From: waluk at earthlink.net (G. Reinhart-Waller) Date: Wed, 06 Jul 2005 16:04:03 -0700 Subject: [Paleopsych] stereotypes In-Reply-To: <20050706195240.68173.qmail@web30808.mail.mud.yahoo.com> References: <20050706195240.68173.qmail@web30808.mail.mud.yahoo.com> Message-ID: <42CC6363.3060508@earthlink.net> Michael Christopher wrote: >>Germans without antisemitism in their families would still have known to avoid shopping at Jewish stores or being seen associating with Jews, picking up on their "marked" status and fearing the consequences of associating with a scapegoated class. >> Gerry replies: Since my husband is from a tightly knit German family (both parents were of German heritage) I've known lots of ethnic Germans and never have I ever seen or heard any of them refuse to shop at "Jewish stores" or hang out with Jews. One possible explanation could be that very few shops in California are designated as "Jewish" the way they are in the greater Boston area. I fear that what you are conveying could be another "urban legend". Gerry From checker at panix.com Thu Jul 7 14:47:47 2005 From: checker at panix.com (Premise Checker) Date: Thu, 7 Jul 2005 10:47:47 -0400 (EDT) Subject: [Paleopsych] NYT: How Much Is Nature Worth? For You, $33 Trillion Message-ID: How Much Is Nature Worth? For You, $33 Trillion The New York Times, 97.5.20 [Note the date. I found this when doing some Spring cleaning. No, I won't say which spring!] By WILLIAM K. STEVENS

HOW much is nature worth?

Some say the question is unanswerable, that it is impossible to calculate a dollar value for the natural world. Others say the question should not even be asked; that nature, like human life, is priceless and should not be devalued as if it were a mere commodity.

But economists and ecologists are searching for the answer anyway.
Nature performs valuable, practical, measurable functions, they say; without them the human economy could not exist, and in many cases people could not duplicate them as cheaply -- or at all. And they say it is time that the value of these functions is considered when economic decisions are made. One notable example of nature's economic value that they cite is the purification of New York City's water supply by microorganisms as the water percolates through the soil of the Catskills. The city plans to spend $660 million to preserve that watershed in good health; the alternative, a water treatment plant, would have cost $4 billion to build. Nature performs a long list of other economic services as well. Flood control, soil formation, pollination, food and timber production, provision of the raw material for new medicines, recreational opportunities and the maintenance of a favorable climate are among them. But like a well that is taken for granted until it runs dry, these ecosystem services, as ecologists call them, have long been overlooked until they either no longer work or are gone -- as, for instance, when the widespread destruction of Midwestern wetlands meant they could no longer perform their natural function of sponging up water from disastrous floods like those of recent years. And to the extent ecosystem services are noticed at all, people have tended to regard them as free. Now, as human activity gradually uses up or destroys this natural capital and eats away at the natural systems that provide many of the services, many experts are insisting that the worth of ecosystem services must be calculated and heeded. The results of the latest and in some ways the most ambitious effort to place a dollar value on natural capital and services were announced last week. Thirteen ecologists, economists and geographers, in a report in the journal Nature, estimated the present global value of 17 ecosystem services at $16 trillion to $54 trillion a year, with a likely figure of at least $33 trillion. Most of this, they said, lies outside formal markets and is therefore not reflected in market prices, the customary gauge of economic value. Their estimate, they said, compares with $18 trillion for the gross national product of the world, which is all the goods and services produced by people each year. The researchers, who based their conclusion on other published studies and their own calculations, freely point out that their estimate is a rough approximation, a first step that is mainly intended to determine whether ecosystem services amount to "big potatoes or small potatoes," in the words of Dr. Robert Costanza, an ecological economist at the University of Maryland who headed the study. "We come away from this thinking this is a minimum estimate," Dr. Costanza said. Virtually everyone agrees that without the natural world, the human economy and indeed human life could not exist. In this sense, the value of nature is infinite, immeasurable. To some conservationists, this is all that needs to be said. "Common sense and what little we have left of the wisdom of our ancestors tells us that if we ruin the earth, we will suffer grievously," said Dr. David Ehrenfeld, a conservation biologist at Rutgers University. He said he accepted the results of the Costanza study, which he regards as conservative, but added: "I am afraid that I don't see much hope for a civilization so stupid that it demands a quantitative estimate of the value of its own umbilical cord." Dr.
Ehrenfeld and some other conservationists believe that moral arguments for saving nature are more persuasive than economic ones. But in the view of Dr. Costanza and others, moral and economic arguments should be pursued in parallel. People make economic choices involving nature all the time, according to this view, but they do so without taking all the costs into account. For example, the dollar value of a wetland's flood-protection and water cleansing abilities has not traditionally been considered when it is lost to a shopping center. The result is a creeping depletion of natural wealth. If such costs were reflected in day-to-day transactions, these theorists say, society would pay more attention to what is lost when land is "developed." "We can't wait until we've disrupted the planet's life-support system beyond repair," said Dr. Gretchen C. Daily, a conservation biologist at Stanford University. She is the editor of a recent collection of papers on the subject, "Nature's Services: Societal Dependence on Natural Ecosystems," published as a book by Island Press. Once gone, she noted, many of these ecological assets would be difficult, if not impossible, to replace; it can take thousands of years to recharge depleted aquifers or replenish topsoil. Until now, fledgling efforts at what is called "green accounting" have been pursued largely at the national level. In a widely applauded attempt of this kind, Dr. Robert Repetto, senior economist at the World Resources Institute, a Washington-based research organization, has analyzed the economies of Indonesia and Costa Rica. In Indonesia, he and colleagues calculated that losses from soil erosion reduced the net value of crops by about 40 percent and that the loss of value from deforestation was four times as high as the value of the timber extracted. They also concluded that depletion of Costa Rica's soils, forests and fisheries resulted in a 25 percent to 30 percent reduction in potential economic growth. A nascent effort to introduce a measure of natural-resource accounting into the United States' official calculation of economic worth was made in 1993, but it is on hold pending a Congressionally ordered study of the soundness of the approach by the National Academy of Sciences. A report is due this year. The new Costanza study is not really an exercise in green accounting, and some experts question its practical usefulness while others express skepticism about its basic finding. "There's no way of knowing how good this number is," Dr. Repetto said of the study's estimate of $33 trillion for the global value of ecosystem services. "They've made some heroic assumptions. I suppose it's useful for rhetorical purposes." But the number, he said, is less important than the fundamental point made by the study "that ecosystem services are important; I don't think reasonable people would deny that." Other experts see more utility in the analysis. The study has succeeded in providing "a conservative estimate of what the environment does for us," said Dr. Stuart Pimm, an ecologist at the University of Tennessee who wrote a commentary on the Costanza study in Nature. "So often," he said, "people concerned with protecting the environment go up against these very highly detailed economic analyses and feel they don't have anything in kind with which to respond." In the tables of specific ecosystem services that accompany the study, he said, "what Costanza et al.
has done is provide a checklist" that national and local policy makers can use in attempting to make a rough gauge of the economic worth of their natural assets. One table, for instance, lists specific ecosystem services, and their supposed value, for 11 biomes, or types of natural areas. These include the open ocean, estuaries, seagrass and algae beds, coral reefs, continental shelves, tropical forests, temperate forests, grasslands and rangelands, tidal marshes and mangroves, wetlands and flood plains and lakes and rivers. The next step, Dr. Costanza says, is to delineate more clearly the explicit linkages between particular local ecosystems and local economies. For example, how much of the value of the Louisiana shrimp catch is attributable to the wetlands in which the shrimp reproduce and grow? But since wetlands perform other services as well, the wetlands' value as a shrimp nursery would be only a minimum indication of their overall value. The same applies, for example, to the Catskill watershed, which serves other economic functions besides providing and cleaning New York City's water -- attracting tourists, for instance. "Nobody thinks the Catskills are worth only $4 billion," Dr. Daily said, referring to the cost of replacing the Catskills' water-cleansing function. Assuming the value of ecosystem services could eventually be established, how might economic policies be changed? For openers, Dr. Daily and others say, government subsidies that distort the value of natural resources -- in fisheries and logging, for example -- should be abolished. Also, tax incentives might be given to landowners to protect the long-term assets represented by natural capital rather than using them for short-term gain. Some experts advocate applying traditional economic arrangements to ecosystem services. For instance, Dr. Graciela Chichilnisky and Dr. Geoffrey Heal, economists at Columbia University, have proposed selling investment shares in a given ecosystem. Using the Catskill watershed as an illustration, they say that the capital thus raised would pay for preserving the watershed. Returns to investors would come either from a share of the costs saved by not having to build a treatment plant or, if the investment were private, by actually selling ecosystem services. In the case of a watershed, clean water would be sold. But, says Dr. Daily, "the first thing is getting the prices right." GRAPHIC: Chart: "The Value of the Natural World" A new attempt by 13 scientists to assign dollar values to essential services performed for the human economy by the natural world divides the services into the following 17 categories.
Gas Regulation: Carbon dioxide/oxygen balance, ozone for ultraviolet protection
Climate Regulation: Greenhouse gas regulation
Disturbance Regulation: Storm protection, flood control, drought recovery
Water Regulation: Provision of water for irrigation, mills or transportation
Water Supply: Provision of water by watersheds, reservoirs and aquifers
Erosion Control and Sediment Retention: Prevention of soil loss by wind, runoff, etc.; storage of silt in lakes and wetlands
Soil Formation: Weathering of rock and accumulation of organic material
Nutrient Cycling: Nitrogen fixation
Waste Treatment: Pollution control, detoxification
Pollination: Pollinators for plant reproduction
Biological Control: Predator control of prey species
Refuges: Nurseries, habitat for migratory species
Food Production: Production of fish, game, crops, nuts and fruits by hunting, fishing, gathering or subsistence farming
Raw Materials: Production of lumber, fuel or fodder
Genetic Resources: Medicines, resistance genes for crops, ornamental plant species, pets
Recreation: Ecotourism, sports fishing, other outdoor recreation
Cultural: Esthetic, artistic, educational, spiritual and/or scientific values of ecosystems
(Source: Nature)
From checker at panix.com Thu Jul 7 14:49:54 2005 From: checker at panix.com (Premise Checker) Date: Thu, 7 Jul 2005 10:49:54 -0400 (EDT) Subject: [Paleopsych] Newsday: Test Seeks to Measure Students' Web IQ Message-ID: Test Seeks to Measure Students' Web IQ http://www.newsday.com/technology/business/wire/sns-ap-internet-intelligence,0,7931562,print.story?coll=sns-ap-technology-headlines By MICHELLE LOCKE Associated Press Writer 5.7.2 LONG BEACH, Calif. -- Students apply to college online, e-mail their papers to their professors and, when they want to be cheeky, pass notes in class by text-messaging. But that doesn't necessarily mean they have a high Internet IQ. "They're real comfortable instant-messaging, downloading MP3 files. They're less comfortable using technology in ways that require real critical thinking," says Teresa Egan of the Educational Testing Service. Or as Lorie Roth, assistant vice chancellor of academic programs at California State University, puts it: "Every single one that comes through the door thinks that if you just go to Google and get some hits -- you've got material for your research paper right there." That's why Cal State and a number of other colleges are working with ETS to create a test to evaluate Internet intelligence, measuring whether students can locate and verify reliable online information and whether they know how to properly use and credit the material. "This test measures a skill as important as having mathematics and English skills when you come to the university," says Roth. "If you don't come to the university with it, you need to know that you are lacking some skills that educated people are expected to have." A preliminary version of the new test, the Information and Communication Technology Literacy Assessment, was given to 3,300 Cal State students this spring to see how well it works, i.e. testing the test. Individual scores aren't being tallied but campuses will be getting aggregate reports. Next year, the test is expected to be available for students to take on a voluntary basis. Cal State is the lead institution in a consortium which includes UCLA, the University of Louisville, the California Community College System, the University of North Alabama, the University of Texas System and the University of Washington.
Some of the institutions involved are considering using the test on incoming students to see if they need remedial classes, says Egan, ETS' project manager for the Information and Communication Technology Literacy Assessment. Other schools are thinking about giving the test as a follow-up to communications courses to gauge how well their curricula are working. Robert Jimenez, a student at Cal State-Fullerton who took the prototype test this spring, gives it a passing grade. "It was pretty good in that it allowed us to go ahead and think through real-life problems." Sample questions include giving students a simulated page of Web search results on a particular subject and asking them to pick the legitimate sources. So, a question on bee sting remedies presents a choice of sites ranging from ads to a forum for herb treatments to (the correct answer) a listing from the National Institutes of Health, identifiable by having "nih" in the URL (site address) along with the ".gov" suffix that connotes an official government listing. High tech has been a fixture of higher ed for some years. A 2002 report from the Pew Internet & American Life Project found that 79 percent of college Internet users thought the Internet had a positive impact on their academic experience. More than 70 percent used the Internet more than the library and 56 percent said e-mail improved their relationships with professors. Of course, some of those text-messaging students are still being taught by professors whose idea of a personal digital assistant is a fresh pad of Post-Its. "The problem with technology and education is how do you fit the new technology into existing curriculum lesson plans. You can't add more class time and it's much easier to just keep teaching the way you were," says Steve Jones, a co-author on the Pew study and a communications professor at the University of Illinois at Chicago. Jones folds lessons on Internet use into his classes. And he doesn't mince words about students who try the "click, copy and paste" approach to homework. "I tell the students, `Some of you are going to put off this paper until the night before. You're going to go to Google, type in search words and just look at the top five hits and use those. I'm going to grade you on this. I'm going to look at these sources and so let's talk about how to evaluate sources.'" Which doesn't necessarily mean they all "suddenly become fabulous information evaluators and seekers, but it gives them a little bit of an idea that this isn't something that's apart from learning." Jones also finds himself learning from students, who are trying out new things like blogs and collaborating with other students online to create new sources of information. He thinks assessing students' Internet skills could be useful in figuring out ways to help them do better research but cautions that it's tough to test on something as changeable as the Internet. Roth notes that the bulk of the assessment focuses on critical thinking skills, being able to analyze the legitimacy of Web sites, and knowing the difference between properly cited research and plagiarism, things that "haven't changed very much since I enrolled in college in 1969." For today's students, working on the Net means not having the safety net of references vetted by campus librarians. But Roth isn't nostalgic. "Anybody want to go back to the bad old days when you had manual typewriters, and you had to get up and walk to the library to look up something?" she says with a laugh. "I don't think so."
On the Net: http://www.calstate.edu http://www.ets.org/ictliteracy/ From checker at panix.com Thu Jul 7 14:50:02 2005 From: checker at panix.com (Premise Checker) Date: Thu, 7 Jul 2005 10:50:02 -0400 (EDT) Subject: [Paleopsych] WSJ: (Nobel Sperm Bank) Nick Schulz: Turbulence in the Gene Pool Message-ID: Nick Schulz: Turbulence in the Gene Pool WSJ, 5.7.5 http://online.wsj.com/article/0,,SB112052650043477006,00.html It was over breakfast one February morning in 2001 that Tom Legare, a precocious but otherwise typical American teenager, learned from his mother that his real father "was a Nobel Prize winner." The man married to his mother for so many years, it turned out, was in fact not biologically related to him. Rather, another man was, presumably a "brilliant scientist" (name unknown) who had contributed to the Repository for Germinal Choice, a "genius" sperm bank founded by businessman Robert K. Graham in 1980. Thus Tom was one of a far-flung brood, since many other infertile couples -- like the Legares -- had availed themselves of this high-end gene pool. Tom's existence really begins with Graham, an entrepreneur who had made a fortune in the eyeglass business only to turn his attention in the late 1970s to what were, for him anyway, grander and more noble pursuits. He hoped to save the human race from what he deemed a "genetic catastrophe." An experiment in 'positive eugenics' -- using Nobelists to boost inherited IQ. As David Plotz tells it in "The Genius Factory" (Random House, 262 pages, $24.95), a wonderfully readable and eye-opening account, Graham feared that, in late-20th-century America, "cradle-to-grave social welfare programs paid incompetents and imbeciles to reproduce. As a result, 'retrograde humans' were swamping the intelligent minority." The only way to save mankind was for the best intellectual "specimens" of the species to reproduce at a higher rate. And the best way to make that happen was to have the planet's brightest -- Nobel Prize-winners -- donate their genetic material for the betterment of humanity. "Ten men of high intelligence," Graham mused, "can be more effective than 1,000 morons." He envisioned replacing Darwin's natural selection with "intelligent selection." But the plan met resistance. "It's pretty silly," Max Delbruck, a Nobel winner in medicine, said at the time. Nobelist Linus Pauling joked that "the old-fashioned way is still best." Such caviling was a problem. Graham's sperm bank had no hope of being marketed "to a skeptical public," Mr. Plotz writes, unless he scored some blue-ribbon sperm. That's when William Shockley stepped up to the cup, er, plate. Shockley had won a Nobel for his work on the transistor and helped launch the Silicon Valley tech boom. He also shared Graham's pessimistic view of mankind's genetic destiny, arguing that "humanitarianism gone berserk" -- by which he meant mostly welfare -- was keeping undesirables alive and, worse, procreating. Thus he was eager to do his part to skew the gene pool toward intelligence. With Shockley's seed, Graham was ready to go, and on Feb. 29, 1980, the Nobel Prize sperm bank was introduced to the world. According to Mr. Plotz, the bank was a product of a specific time, place and culture. California in the 1970s was the home of America's freedom-loving political and technological vanguard, filled with a spirit of limitless possibility. It was embodied in everyone from Ronald Reagan to the academics at Caltech and Stanford and the digital pioneers of Silicon Valley.
An extreme, messianic form of the era's spirit took root in men like Graham and Shockley. In 2001, Mr. Plotz set out to find out what happened to the fruit of this odd, two-decade experiment in "positive" eugenics. "The Nobel sperm bank kids, I realized, were messengers from our future." Over the years, other journalists had tried to learn more about the donors and children of the bank, but unsuccessfully: The bank was set up to ensure that the donors didn't know who their kids were and that the kids didn't know who their fathers were. So Mr. Plotz, a writer and editor with the online magazine Slate, figured that he would use the Internet to break through this secret screen. He published an article in Slate asking anyone who had ever been involved with the bank or who knew anything about its donors or children to contact him. And the experiment worked. Donors and mothers reached out by email. Over three years he was able to track down dozens of children born from the bank's deposits. In several tense, hilarious and touching episodes, Mr. Plotz describes how he midwifed the meeting of some children and their donor fathers. Regardless of the ethical merits of Graham's plan -- or, one should say, its demerits -- the bank was doomed to failure. Most Nobelists are older, relative to the rest of the population. Their sperm is generally of substandard quality -- lower in number and, so to speak, reproductive energy -- and thus less likely to fertilize an egg. So shortly after opening the bank, Graham had to seek out as donors younger, high-IQ folks who weren't Nobelists. But once Graham watered down the requirements, the reason for the bank's existence -- its cachet -- was gone. Standards slipped. As Tom Legare was later to find out, his father didn't win a Nobel. No genius, he was something of a lovable loser living next to a drug den in Florida. As for the other "genius" children: Many were moderately bright, but only one exceptionally so. Many had psychological problems, although it is impossible to say whether the "genius" part of their makeup played a part. The story of this genetic experiment is a rare contribution to the debate over biotechnology, which usually ping-pongs between dystopians and techno-enthusiasts making broad, philosophical claims. By giving readers the case study of a serious -- and failed -- effort to engineer a better human race, Mr. Plotz brings the discussion back down to earth, where it belongs. Nick Schulz is editor of TechCentralStation.com. From checker at panix.com Thu Jul 7 14:50:11 2005 From: checker at panix.com (Premise Checker) Date: Thu, 7 Jul 2005 10:50:11 -0400 (EDT) Subject: [Paleopsych] Pravda: Left-handed human race to make the world a better place Message-ID: Left-handed human race to make the world a better place http://english.pravda.ru/printed.html?news_id=15765 5.7.6 [Now be careful evaluating the merits of this report. My first reaction is that Pravda is not reliable since it takes ESP for granted, while in this country and the world at large it is a matter of great controversy. What kind of quality control does Pravda now make over its writers? I recall the case of Bill White, who called himself a "libertarian socialist" and wrote articles quite critical of Jewish influence on American foreign policy (fine with me to get points of view not widely circulated in the United States) but veered off into implausible conspiracies (as opposed to plausible ones, of course). [My second reaction is that the article does make several interesting points.
I'd love to think left-handers have higher IQs, since I'm left-handed. [My third reaction is that I do not recall this claim and probably would have remembered it if I had. So I'm inclined to view all the facts with suspicion, except that Leonardo, Tolstoy, Chaplin, and Kidman are lefties.] Scientists say that the number of left-handed individuals grows rather fast in the world today. Specialists calculated that every tenth human being is left-handed. The total number of left-handers living in the world reaches over 600 million. According to experts' estimates, there will be a billion left-handed people living on planet Earth by 2020. The world will be different against the background of such a trend, scientists say. 'The number of left-handed babies that were born in 2005 doubled the amount of left-handed children, which saw the light in 1990,' doctor of biological sciences, Alexander Dubov said. 'Mankind is changing slowly. However, it is not about degradation of the human civilization at all. Quite on the contrary, people become more perfect,' the professor said. The latest research, conducted in many countries of the globe, showed that the IQ level of left-handed people is higher than that of right-handed individuals. Every fifth outstanding person is left-handed as a rule. Furthermore, people who can boast of extraordinary abilities are left-handed too. 'There are a lot of extrasensorial individuals among them,' doctor of medical sciences, Alexander Lee said. 'We checked the supposition. There are hardly any right-handers among those, who have the gift of remote viewing, telepathy, or X-ray viewing,' the doctor said. Right and left-handers are virtually different types of people with their own special mindsets and perception of the world. 'They get along with each other perfectly, but there is a hidden evolutionary struggle taking place between them, which reminds the struggle between primeval humans, Cro-Magnon and Neanderthal men. It seems to me that left-handers will eventually win the fight owing to their anomalous abilities,' scientist of anomalous phenomena, Pyotr Chereda said. Modern scientists have already concluded that the left-handed human race will change the world; humanity will become more intellectual and extrasensorial. It is noteworthy that specialists do not have an explicit explanation for the mystery of the left-handed phenomenon. They discovered, however, that a human being takes either a left or a right way of development in the mother's womb. Scientists photographed growing fetuses with the help of a special ultrasonic camera. If an unborn baby was trying to put its left hand in its mouth, it would be born as a left-handed infant. Specialists concluded that something happens in the fetus's brain during the third or fourth month of its development. The right hemisphere takes advantage of the left one and claims responsibility for those parts of the brain which will later be in charge of speaking and writing abilities. The brain undergoes a significant change when the left hand of a person eventually plays the dominating role. If the transformation was not that strong or if it was incomplete, it is possible for a person to take a step back. In this case, a person could be described as a right-left-handed individual, i.e. he or she would possess the qualities of both right and left-handers. It brings up the idea that something similar happened to Russian President Putin. Mr. Putin seems to be a right-handed person.
However, he wears his watch on the right wrist. In addition, Putin often uses his left hand to take notes out of his right pocket. The left-handed phenomenon may be explained by genetic peculiarities. The ability is often handed down across generations. There are certain bizarre peculiarities, though, which can hardly be explained by inheritance. A recent study, conducted among 20,000 people, showed that left-handed people are usually delivered by women over 30 years of age. Furthermore, left-handers are usually born prematurely, during the second half of the year. Left-handed people have remarkable abilities to perceive sounds and intonations absolutely clearly and distinguish superfine color shades. They have picturesque memory, which preserves bright impressions for quite long periods of time. Most outstanding left-handers: Using only the left hand, Leonardo da Vinci painted 'Mona Lisa'; Leo Tolstoy wrote 'War and Peace'; Charlie Chaplin played with his stick. Nicole Kidman combs her hair holding a hairbrush in her left hand. From checker at panix.com Thu Jul 7 14:50:21 2005 From: checker at panix.com (Premise Checker) Date: Thu, 7 Jul 2005 10:50:21 -0400 (EDT) Subject: [Paleopsych] NYT Op-Ed: Finding Design in Nature Message-ID: Finding Design in Nature New York Times Op-Ed, 5.7.7 http://www.nytimes.com/2005/07/07/opinion/07schonborn.html By CHRISTOPH SCHÖNBORN Vienna EVER since 1996, when Pope John Paul II said that evolution (a term he did not define) was "more than just a hypothesis," defenders of neo-Darwinian dogma have often invoked the supposed acceptance - or at least acquiescence - of the Roman Catholic Church when they defend their theory as somehow compatible with Christian faith. But this is not true. The Catholic Church, while leaving to science many details about the history of life on earth, proclaims that by the light of reason the human intellect can readily and clearly discern purpose and design in the natural world, including the world of living things. Evolution in the sense of common ancestry might be true, but evolution in the neo-Darwinian sense - an unguided, unplanned process of random variation and natural selection - is not. Any system of thought that denies or seeks to explain away the overwhelming evidence for design in biology is ideology, not science. Consider the real teaching of our beloved John Paul. While his rather vague and unimportant 1996 letter about evolution is always and everywhere cited, we see no one discussing these comments from a 1985 general audience that represents his robust teaching on nature: "All the observations concerning the development of life lead to a similar conclusion. The evolution of living beings, of which science seeks to determine the stages and to discern the mechanism, presents an internal finality which arouses admiration. This finality which directs beings in a direction for which they are not responsible or in charge, obliges one to suppose a Mind which is its inventor, its creator." He went on: "To all these indications of the existence of God the Creator, some oppose the power of chance or of the proper mechanisms of matter. To speak of chance for a universe which presents such a complex organization in its elements and such marvelous finality in its life would be equivalent to giving up the search for an explanation of the world as it appears to us. In fact, this would be equivalent to admitting effects without a cause.
It would be to abdicate human intelligence, which would thus refuse to think and to seek a solution for its problems." Note that in this quotation the word "finality" is a philosophical term synonymous with final cause, purpose or design. In comments at another general audience a year later, John Paul concludes, "It is clear that the truth of faith about creation is radically opposed to the theories of materialistic philosophy. These view the cosmos as the result of an evolution of matter reducible to pure chance and necessity." Naturally, the authoritative Catechism of the Catholic Church agrees: "Human intelligence is surely already capable of finding a response to the question of origins. The existence of God the Creator can be known with certainty through his works, by the light of human reason." It adds: "We believe that God created the world according to his wisdom. It is not the product of any necessity whatever, nor of blind fate or chance." In an unfortunate new twist on this old controversy, neo-Darwinists recently have sought to portray our new pope, Benedict XVI, as a satisfied evolutionist. They have quoted a sentence about common ancestry from a 2004 document of the International Theological Commission, pointed out that Benedict was at the time head of the commission, and concluded that the Catholic Church has no problem with the notion of "evolution" as used by mainstream biologists - that is, synonymous with neo-Darwinism. The commission's document, however, reaffirms the perennial teaching of the Catholic Church about the reality of design in nature. Commenting on the widespread abuse of John Paul's 1996 letter on evolution, the commission cautions that "the letter cannot be read as a blanket approbation of all theories of evolution, including those of a neo-Darwinian provenance which explicitly deny to divine providence any truly causal role in the development of life in the universe." Furthermore, according to the commission, "An unguided evolutionary process - one that falls outside the bounds of divine providence - simply cannot exist." Indeed, in the homily at his installation just a few weeks ago, Benedict proclaimed: "We are not some casual and meaningless product of evolution. Each of us is the result of a thought of God. Each of us is willed, each of us is loved, each of us is necessary." Throughout history the church has defended the truths of faith given by Jesus Christ. But in the modern era, the Catholic Church is in the odd position of standing in firm defense of reason as well. In the 19th century, the First Vatican Council taught a world newly enthralled by the "death of God" that by the use of reason alone mankind could come to know the reality of the Uncaused Cause, the First Mover, the God of the philosophers. Now at the beginning of the 21st century, faced with scientific claims like neo-Darwinism and the multiverse hypothesis in cosmology invented to avoid the overwhelming evidence for purpose and design found in modern science, the Catholic Church will again defend human reason by proclaiming that the immanent design evident in nature is real. Scientific theories that try to explain away the appearance of design as the result of "chance and necessity" are not scientific at all, but, as John Paul put it, an abdication of human intelligence. Christoph Schönborn, the Roman Catholic cardinal archbishop of Vienna, was the lead editor of the official 1992 Catechism of the Catholic Church.
From checker at panix.com Thu Jul 7 14:50:31 2005 From: checker at panix.com (Premise Checker) Date: Thu, 7 Jul 2005 10:50:31 -0400 (EDT) Subject: [Paleopsych] NYT: Films Take a More Sophisticated Look at Teenage Sex Message-ID: Films Take a More Sophisticated Look at Teenage Sex New York Times, 5.7.6 http://www.nytimes.com/2005/07/06/movies/06sex.html By [3]CARYN JAMES In Miranda July's shrewdly observed [4]"Me and You and Everyone We Know," a 14-year-old boy and his 7-year-old brother sit in front of a computer screen engaging in an increasingly common form of sex education: an online chat with an anonymous woman. Although the 14-year-old is savvy enough to guess that they could be talking, say, to a grossly overweight man instead of some hot babe, his little brother suggests, "Ask if she likes baloney," then innocently offers a nonsensical, physically impossible act that reminds us he is not so far from his potty training days. The online response is, "You are crazy, and you are making me very hot." Precocious sexual knowledge - far beyond what children and teenage characters can absorb, and often with devastating consequences - has become a staple of current independent films. In the French film [5]"Lila Says," the title character is a 16-year-old whose wealth of sexual knowledge and free-wheeling behavior leads the town to think she may actually be a whore. In [6]Gregg Araki's unexpectedly eloquent [7]"Mysterious Skin," two boys who are molested by their Little League coach grow up to be a teenage hustler and a guy who believes he was abducted by aliens. (All three films are playing in New York and a handful of other cities, and will expand to more cities through the next month or so.) And movies with similar themes will arrive in the next month, including [8]Don Roos's comic romance [9]"Happy Endings" and the satiric [10]"Pretty Persuasion." But while these filmmakers are highly aware of the dangers such early knowledge can pose - from Internet predators to unwanted pregnancy - their films do not display the knee-jerk judgments you might expect, and that shaped the 2003 film [11]"Thirteen." Where "Thirteen" was praised for its audacity in depicting 13-year-olds having sex and doing drugs, it was really a traditional cautionary tale. The current films are more complex. They often blame the big, wide media world and other social influences that cause children to grow up too fast. But they also accept this early loss of innocence as the new way of the world and move on from there. In their bewildered acceptance of this new reality, the films reflect the current social moment, in all its fraught confusion, more astutely than any alarmist work could. It's not surprising that all these films are smaller, independent works; they can afford to address the riskier themes that big-budget movies avoid. The current films also share sophisticated narratives and graceful styles that make their unsettling themes palatable. "Me and You" is directly about the romance of a lonely artist (played by Ms. July) and a recently separated shoe salesman (John Hawkes) whom she willy-nilly decides is her soul mate. But the difficulty of forging a connection is reflected in the next generation - the young brothers are the salesman's sons - in which children and teenagers confront a disorienting sexual world. The family's neighbors include two slightly older teenage girls who use the 14-year-old brother as a practice object for oral sex, asking him to judge which of them does it better. 
When the father's seemingly respectable co-worker makes lewd suggestions to the girls, they teasingly kiss in front of him. But when they finally dare to ring his doorbell, he cowers and hides. Ms. July's sense of a dangerous world encroaching is deflected by such last-minute twists. Yet it is still chilling when the 7-year-old arranges a real-life date with the mysterious online lover. And it may be even more chilling that the father is not callous or neglectful, just hapless and not very smart, an ordinary guy. Ms. July acknowledges the risks of precocious, half-baked sexual knowledge, but in keeping with the endearing tone of her film, willfully evades those dangers. There is no such evasion in "Lila Says." Set in a poor Marseille neighborhood, Ziad Doueiri's gripping film seems headed for tragedy from the start. The beautiful blond Lila is not just another sexually active 16-year-old, no longer a rarity. She is so bluntly, openly sexual that she enters the film by offering to expose herself to a stranger, Chimo, the 19-year-old who falls in love with her yet is intimidated by her experience. Chimo's friends regard Lila as a slut; yet if the brutal finale they set in motion is all too predictable, one crucial element is not. Chimo discovers Lila's scrapbook, in which she has pasted magazine articles about subjects like amateur porn on the Internet, clippings that suggest how much of her sexual knowledge was shaped by a world she was not ready to understand. "Lila Says" is so delicately balanced that it manages to have things both ways. It is erotic, notably in a sexual encounter between Lila and Chimo on a motorbike, yet also conveys a sad sense of lost innocence. Despite the film's melodrama, that balance creates a hauntingly realistic aura. Its least convincing element comes when Lila's aunt and guardian makes a pleading sexual advance toward her. The theme is never picked up again, so the abuse seems like a forced, convenient explanation for Lila's behavior. The idea of childhood abuse is used more intelligently in "Mysterious Skin." The molestation scenes are not graphic, but they are so clear and depicted with such immediacy that at first it seems the film has crossed a line into a completely nonjudgmental realm. But "Mysterious Skin" adheres to the boys' points of view so rigorously that the abuse reflects their own confusion, just as the film's lyricism suggests their emotional escape strategies. By the end, when the anguish inflicted on the boys becomes apparent, we see that the film realizes the abuse was monstrous. Mr. Araki has always been a provocative filmmaker, not an ingratiating one, and while "Mysterious Skin" is lucid about the horrible violation of the boys, it refuses to preach at us, and much of its power comes from that unflinching approach. While all these films matter-of-factly assume that sexual knowledge arrives earlier and earlier, "Happy Endings" is essentially a cheerful movie, even though its plot is set off when a teenage stepbrother and stepsister have sex that results in a pregnancy. "Pretty Persuasion" is caustic, as several 15-year-old girls maliciously and falsely accuse a teacher of abuse, setting off a media circus. That such varied tones can be spun from a common idea says that precocious sexuality is considered a pervasive part of our world, even if the filmmakers have no better idea of what to do with that knowledge than the 7-year-old knows what to do on his date. References 3. 
http://query.nytimes.com/search/query?ppds=bylL&v1=CARYN%20JAMES&fdq=19960101&td=sysdate&sort=newest&ac=CARYN%20JAMES&inline=nyt-per 4. http://movies2.nytimes.com/gst/movies/movie.html?v_id=312502&inline=nyt_ttl 5. http://movies2.nytimes.com/gst/movies/movie.html?v_id=315403&inline=nyt_ttl 6. http://movies2.nytimes.com/gst/movies/filmography.html?p_id=79831&inline=nyt-per 7. http://movies2.nytimes.com/gst/movies/movie.html?v_id=291995&inline=nyt_ttl 8. http://movies2.nytimes.com/gst/movies/filmography.html?p_id=167026&inline=nyt-per 9. http://movies2.nytimes.com/gst/movies/titlelist.html?v_idlist=126064;126065;295227&inline=nyt_ttl 10. http://movies2.nytimes.com/gst/movies/movie.html?v_id=312864&inline=nyt_ttl 11. http://movies2.nytimes.com/gst/movies/titlelist.html?v_idlist=278975;160643&inline=nyt_ttl From anonymous_animus at yahoo.com Thu Jul 7 20:55:55 2005 From: anonymous_animus at yahoo.com (Michael Christopher) Date: Thu, 7 Jul 2005 13:55:55 -0700 (PDT) Subject: [Paleopsych] genes made me do it In-Reply-To: <200507071800.j67I0MR06716@tick.javien.com> Message-ID: <20050707205555.97216.qmail@web30808.mail.mud.yahoo.com> >>Consider: you are no longer responsible for anything. Sound familiar? Once it was the devil. Now it is the gene that made you do it. You are officially off the hook. It isn't your fault at all. It's your faulty genes. It gets even better. Not only is it not your fault, but you actually are a victim, a victim of your own toxic gene pool.<< --We ought to distinguish between the scientific question, "What causes human behavior" from the political question, "How do we encourage people to control behavior that might harm society". Confusing the two questions is a bad idea. It's entirely possible that some people are genetically driven to violence. But that would leave us where we already are: with a group of people who can't or won't control their behavior. We may say "You must control yourself" but we have no faith that the command will be enough. So we confine criminals instead -- Exactly what we would do if it were proven their genes made them do it. The only real difference would be that we'd no longer view "deserving it" as reason to heap scorn on those we've incarcerated. The most violent criminals were almost uniformly treated with extreme abuse in their formative years, and we already KNOW that shaming them only produces more violence rather than less. Keeping people who can't (or won't -- it makes no practical difference) control themselves away from situations where they could harm others is still the only reliable method of prevention. However, identifying people at risk for violence, whether it's a genetic trait or a result of early abuse and role modeling, is a good idea. Pre-emptive incarceration would not be an acceptable strategy, but providing counselling and cognitive therapy might counteract any existing tendency toward violence. Cognitive therapy can identify subliminal thoughts that accelerate violence (demonization of others, shifting blame, shame spiraling into rage, etc) and increase the individual's ability to calm himself and counteract the hypnotic trance-like triggers that would otherwise lead to reactive violence. It may also be helpful to view groups which demonize one another as victims of bad programming, and introduce counter-programs enabling each side to see members of the other as human rather than as symbols of evil. 
Regardless of whether free will exists or not, it's a good thing to be able to respond in the early stages, before violence breaks out, rather than merely punishing people after the fact. Perhaps the fear of society is not that people can't control themselves, but that by demonizing criminals we are accelerating their pathology. What if we're making things worse, by focusing on who deserves what kind of punishment, rather than how to interrupt patterns of violence before they become lethal? Michael __________________________________________________ Do You Yahoo!? Tired of spam? Yahoo! Mail has the best spam protection around http://mail.yahoo.com From anonymous_animus at yahoo.com Thu Jul 7 21:00:10 2005 From: anonymous_animus at yahoo.com (Michael Christopher) Date: Thu, 7 Jul 2005 14:00:10 -0700 (PDT) Subject: [Paleopsych] drugs In-Reply-To: <200507071800.j67I0MR06716@tick.javien.com> Message-ID: <20050707210011.75350.qmail@web30802.mail.mud.yahoo.com> >>In fact, I can honestly say that without cannabis, most of my scientific research would never have been done and most of my books on psychology and evolution would not have been written.<< --That's not surprising. Marijuana helps bridge brain hemispheres, enabling the kind of cross-contextual thinking that produces new ideas. This shouldn't be "politically incorrect". If a scientist, writer or artist says a glass of wine or whiskey helps him ponder a problem, few people will find it outrageous or even question the validity of the claim. But political correctness surrounding legal/illegal drugs makes it difficult for people to speak openly about their positive experiences with drugs other than alcohol or caffeine. Michael ____________________________________________________ Sell on Yahoo! Auctions ? no fees. Bid on great items. http://auctions.yahoo.com/ From checker at panix.com Thu Jul 7 22:19:34 2005 From: checker at panix.com (Premise Checker) Date: Thu, 7 Jul 2005 18:19:34 -0400 (EDT) Subject: [Paleopsych] NYT: Who Stole Sleep? The Pillow as Perp Message-ID: Who Stole Sleep? The Pillow as Perp New York Times, 5.7.7 http://www.nytimes.com/2005/07/07/fashion/thursdaystyles/07online.html By MICHELLE SLATALLA SHORTLY before midnight the other night, someone leaving the movie theater a block and a half from my house dropped car keys on the pavement. Actually, I don't know if the jingling came from keys or for that matter if there even was jingling, because like most mortals I slept through the incident. Even my dog Sticky, who has ears big enough to interest NASA, continued to snore. But not my husband. "What was that?" he shrieked, sitting upright as if bitten by a snake. The princess who could not sleep on a pea had nothing on my husband. His rest is often disturbed by distant barking, other people's air-conditioners and "something that sounds like a mosquito, only it never stings me." For years I snoozed through the drama. But recently neck and back twinges have begun to surge through my husband like electrical currents and have prompted him to jump out of bed, switch on the light and hop around. Even I can't sleep through that. I'm not the first spouse to stumble blearily toward the computer at 2 a.m. in search of a solution. 
But in the lonely predawn hours, as I considered the Internet's various suggestions - from sleep masks like the foldable Dreamlite Relaxation model ($6.95 at [3]Dreamessentials.com) to white noise machines like the Marsona Sleep Mate 980 ($52.95 at [4]Naturestapestry.com) - one possibility intrigued me above all others. Maybe we needed better pillows. Ours, old and flat and musty, provided about as much neck support as a saltine. Could the pillows be sabotaging my husband's sleep? "A pillow is important if a person has poor sleep to begin with," said Dr. Clete A. Kushida, director of the Stanford University Center for Human Sleep Research. "The environment is important." But which pillow? There are no official standards for pillows and no research to prove that one type is better than another. "The studies haven't been done," Dr. Kushida said. "Basically it comes down to what is the most comfortable." Depending on who you are, that might mean [5]llbean.com's goose down damask pillow (in sizes from standard to king and in fills ranging from soft to firm, $49 to $99). Or Overstock.com's Circle of Down pillow ($29.99). Or [6]Livingincomfort.com's hypoallergenic pillow ($17.88). Or maybe the answer is a synthetic pillow from [7]Bedbathandbeyond.com (from $7.99 for the Jumbo Gusset to $79.77 for the standard-size Indulgence Supreme Thermo-Sensitive). I needed guidance. "How can I tell if my husband is a Thermo-Sensitive type or a Circle of Down man?" I asked Dr. James Maas, a professor and sleep researcher at Cornell University. "Given all the options, I wonder if anyone is sleeping on the right pillow." Professor Maas said that pillow issues affect a great number of Americans. "Somewhere near 50 percent of the country is sleep deprived," he said during a phone interview. "This country is a country of walking zombies, mostly due to sleep length but also to poor quality of sleep." Professor Maas recommended that my husband cut back on his caffeine intake and that we create a bedroom that was cool, dark and comfortable (which I figured was a nice way of saying our 95-pound dog should get off the bed). As for pillows, the difference comes down to down versus synthetic fill, Professor Maas said. A good pillow of either stuff should last up to 10 years, he said. You can test your pillow to find out if it's past its prime. "You take your pillow," he said. "Fold it in half. If it doesn't spring forward and open instantly by itself, you've got a dead pillow. Replace it." Professor Maas said he liked the quality of pillows manufactured by United Feather and Down, an Illinois company whose products, both down and synthetic, sell under various private labels. For instance United Feather and Down's Insuloft down and PrimaLoft synthetic-fill pillows are for sale online at [8]thecompanystore.com, [9]Landsend.com, [10]Potterybarn.com and llbean.com. "We do a wide variety of fills," said Becky McMorrow, United Feather and Down's marketing manager. "Every retail customer tweaks the pillow to have an exclusive style. Restoration Hardware and Williams-Sonoma use the same fill but not the same fabric cover. Pottery Barn used a damask stripe." No matter where you shop, expect to pay from $29 to $59 for a good synthetic pillow and from $59 to $129 for a goose down pillow with a minimum of 550 fill power, Ms. McMorrow said. "Fill power is a measurement of how lofty an ounce of down is and how high it comes up on a beaker after it's compressed," Ms. McMorrow said. 
After ascertaining a few facts about my husband - mostly sleeps on his side, switches back and forth between a flat pillow and a fluffier one as the night progresses - Ms. McMorrow mailed me four models to test. I did not feel it necessary to mention the experiment to him; he has enough on his mind. The first night, I discreetly slipped a PrimaLoft synthetic-fill Side Sleeper into his pillowcase. Gusseted to provide an even sleep surface and neck support for a side sleeper, it was similar to a $39 model from Bedbathandbeyond.com. Then I turned out the light and lay poised to take notes as he fell into an immediate deep sleep. Thirty minutes passed without a peep out of him. Then 60. Then I fell asleep. The next night I introduced the fluffier Insuloft down-filled Side Sleeper (very like a $99 version at [11]Realgoods.com). After 30 minutes he sat up and asked suspiciously, "Do I hear a raccoon?" From this I deduced that while both Side Sleepers provided neck support, he preferred the denser texture of synthetic fill. The third night he also slept well on a down-synthetic blend called the Lyocell (similar to a pillow sold at thecompanystore.com for $89). By the fourth night I was the one who had earned the right to sleep on the Face Saver with "aloe-soft fabric" to prevent wrinkles. The pillow is to go on sale this fall on the Home Shopping Network for about $35. The conclusion? I bought all the pillows, because all four were an improvement over our old ones. The dog thought so, too. E-mail: [12]slatalla at nytimes.com From Euterpel66 at aol.com Fri Jul 8 04:33:59 2005 From: Euterpel66 at aol.com (Euterpel66 at aol.com) Date: Fri, 8 Jul 2005 00:33:59 EDT Subject: [Paleopsych] genes made me do it Message-ID: <13d.16b6a455.2fff5c37@aol.com> In a message dated 7/7/2005 4:59:17 P.M. Eastern Daylight Time, anonymous_animus at yahoo.com writes: >>Consider: you are no longer responsible for anything. Sound familiar? Once it was the devil. Now it is the gene that made you do it. You are officially off the hook. It isn't your fault at all. It's your faulty genes. It gets even better. Not only is it not your fault, but you actually are a victim, a victim of your own toxic gene pool.<< --We ought to distinguish between the scientific question, "What causes human behavior" from the political question, "How do we encourage people to control behavior that might harm society". Confusing the two questions is a bad idea. It's entirely possible that some people are genetically driven to violence. But that would leave us where we already are: with a group of people who can't or won't control their behavior. We may say "You must control yourself" but we have no faith that the command will be enough. So we confine criminals instead -- Exactly what we would do if it were proven their genes made them do it. The only real difference would be that we'd no longer view "deserving it" as reason to heap scorn on those we've incarcerated. The most violent criminals were almost uniformly treated with extreme abuse in their formative years, and we already KNOW that shaming them only produces more violence rather than less. Keeping people who can't (or won't -- it makes no practical difference) control themselves away from situations where they could harm others is still the only reliable method of prevention. However, identifying people at risk for violence, whether it's a genetic trait or a result of early abuse and role modeling, is a good idea. 
Pre-emptive incarceration would not be an acceptable strategy, but providing counselling and cognitive therapy might counteract any existing tendency toward violence. Cognitive therapy can identify subliminal thoughts that accelerate violence (demonization of others, shifting blame, shame spiraling into rage, etc.) and increase the individual's ability to calm himself and counteract the hypnotic trance-like triggers that would otherwise lead to reactive violence. It may also be helpful to view groups which demonize one another as victims of bad programming, and introduce counter-programs enabling each side to see members of the other as human rather than as symbols of evil. Regardless of whether free will exists or not, it's a good thing to be able to respond in the early stages, before violence breaks out, rather than merely punishing people after the fact. Perhaps the fear of society is not that people can't control themselves, but that by demonizing criminals we are accelerating their pathology. What if we're making things worse, by focusing on who deserves what kind of punishment, rather than how to interrupt patterns of violence before they become lethal? Michael Not only would society be able to identify undesirable behavioral tendencies, but individuals themselves would be able to reflect on why they act the way they do. It would have a name. Unknown and uncertainty are two of the most fearful words describing states of mind known to our species. For her entire life my daughter knew that her behavior was self-destructive to sociality. Last year she found a name for her condition, Asperger's Syndrome. Since then, she's stopped kicking herself for her poor social skills and instead is taking medication that has worked wonders. She recognizes the reasons for her difficulties and tries to work on the skills that are necessary to social creatures. Lorraine Rice Believe those who are seeking the truth. Doubt those who find it. ---Andre Gide http://hometown.aol.com/euterpel66/myhomepage/poetry.html From Euterpel66 at aol.com Fri Jul 8 04:37:22 2005 From: Euterpel66 at aol.com (Euterpel66 at aol.com) Date: Fri, 8 Jul 2005 00:37:22 EDT Subject: [Paleopsych] Pravda: Left-handed human race to make the world a better pl... Message-ID: <19a.37584d48.2fff5d02@aol.com> In a message dated 7/7/2005 10:50:56 A.M. Eastern Daylight Time, checker at panix.com writes: Scientists say that the number of left-handed individuals grows rather fast in the world today Last semester I had a class of 25 individuals and 8 of them were left-handed. Left-handedness is just something I notice because my daughter is left-handed. Lorraine Rice Believe those who are seeking the truth. Doubt those who find it. ---Andre Gide http://hometown.aol.com/euterpel66/myhomepage/poetry.html From anonymous_animus at yahoo.com Fri Jul 8 19:50:46 2005 From: anonymous_animus at yahoo.com (Michael Christopher) Date: Fri, 8 Jul 2005 12:50:46 -0700 (PDT) Subject: [Paleopsych] violence In-Reply-To: <200507081800.j68I0GR13416@tick.javien.com> Message-ID: <20050708195046.86225.qmail@web30808.mail.mud.yahoo.com> Lorraine says: >>Not only would society be able to identify undesirable behavioral tendencies, but individuals themselves would be able to reflect on why they act the way they do. It would have a name.<< --Good point.
A Native American storyteller-therapist I once met described a man who would go into a trance when he fought with his wife. His leg would shake rhythmically and he would say to himself "It's never gonna change... it's never gonna change". He described how the man learned to notice the trance as it began and interrupt it. Others have reported positive results with mindfulness meditation, which may increase the ability of the prefrontal cortex to recognize automatic distortions of thinking and interrupt them. To me, that sounds a lot more useful than labeling people "evil". "I'm evil, I'll never change" is the last thought you want going through someone's mind if they have a problem with anger. And if anger stems from shame, thinking "I deserve to be punished" isn't going to have much positive effect either, it will only reinforce the shame and the rage it triggers. The one belief that would have a positive effect, "I can interrupt this cycle and change the outcome", is often lost in an avalanche of contempt and blame. >>For her entire life my daughter knew that her behavior was self-destructive to sociality. Last year she found a name for her condition, Asperger's Syndrome. Since then, she's stopped kicking herself for her poor social skills and instead is taking medication that has worked wonders.<< --I had similar experiences in my teens and 20's, kicking myself for not being able to express myself socially. Later, I learned to think of it as a feedback disorder and was better able to let anxiety exist without taking it as a sign of inevitable failure. I discovered I could communicate in text much better than in speech, because some of the timing and feedback issues are absent (mistakes in text can be backspaced, thoughts can come in floods without overloading speech, no awkward pauses, etc). I could process much better visually than orally. Before internet, the only thing that worked was LSD, and only for a day or two after taking a dose. For some reason, it enabled me to be fluid and trusting of unconscious processes, rather than focusing on every detail and being overloaded with anxiety and mechanical-feeling perfectionism. It was like the difference between crawling and flying, but not something I could do often. Do people with Asperger's communicate better in text as well? What medication helped your daughter? Michael __________________________________________________ Do You Yahoo!? Tired of spam? Yahoo! Mail has the best spam protection around http://mail.yahoo.com From checker at panix.com Fri Jul 8 22:19:10 2005 From: checker at panix.com (Premise Checker) Date: Fri, 8 Jul 2005 18:19:10 -0400 (EDT) Subject: [Paleopsych] genes made me do it In-Reply-To: <20050707205555.97216.qmail@web30808.mail.mud.yahoo.com> References: <20050707205555.97216.qmail@web30808.mail.mud.yahoo.com> Message-ID: Michael, I think the criminal justice system has always done what you propose, though not formally, which says free will is either-or. In practice, sentences are handed down according to the degree of responsibility. It's ironic that those who have the least self-control are given the harshest punishments. On 2005-07-07, Michael Christopher opined [message unchanged below]: > Date: Thu, 7 Jul 2005 13:55:55 -0700 (PDT) > From: Michael Christopher > Reply-To: The new improved paleopsych list > To: paleopsych at paleopsych.org > Subject: [Paleopsych] genes made me do it > > >>> Consider: you are no longer responsible for > anything. Sound familiar? Once it was the devil. 
Now > it is the gene that made you do it. You are officially > off the hook. It isn't your fault at all. It's your > faulty genes. It gets even better. Not only is it not > your fault, but you actually are a victim, a victim of > your own toxic gene pool.<< > > --We ought to distinguish between the scientific > question, "What causes human behavior" from the > political question, "How do we encourage people to > control behavior that might harm society". Confusing > the two questions is a bad idea. > > It's entirely possible that some people are > genetically driven to violence. But that would leave > us where we already are: with a group of people who > can't or won't control their behavior. We may say "You > must control yourself" but we have no faith that the > command will be enough. So we confine criminals > instead -- Exactly what we would do if it were proven > their genes made them do it. The only real difference > would be that we'd no longer view "deserving it" as > reason to heap scorn on those we've incarcerated. The > most violent criminals were almost uniformly treated > with extreme abuse in their formative years, and we > already KNOW that shaming them only produces more > violence rather than less. Keeping people who can't > (or won't -- it makes no practical difference) control > themselves away from situations where they could harm > others is still the only reliable method of > prevention. > > However, identifying people at risk for violence, > whether it's a genetic trait or a result of early > abuse and role modeling, is a good idea. Pre-emptive > incarceration would not be an acceptable strategy, but > providing counselling and cognitive therapy might > counteract any existing tendency toward violence. > Cognitive therapy can identify subliminal thoughts > that accelerate violence (demonization of others, > shifting blame, shame spiraling into rage, etc) and > increase the individual's ability to calm himself and > counteract the hypnotic trance-like triggers that > would otherwise lead to reactive violence. It may also > be helpful to view groups which demonize one another > as victims of bad programming, and introduce > counter-programs enabling each side to see members of > the other as human rather than as symbols of evil. > Regardless of whether free will exists or not, it's a > good thing to be able to respond in the early stages, > before violence breaks out, rather than merely > punishing people after the fact. > > Perhaps the fear of society is not that people can't > control themselves, but that by demonizing criminals > we are accelerating their pathology. What if we're > making things worse, by focusing on who deserves what > kind of punishment, rather than how to interrupt > patterns of violence before they become lethal? > > Michael From checker at panix.com Sat Jul 9 00:05:00 2005 From: checker at panix.com (Premise Checker) Date: Fri, 8 Jul 2005 20:05:00 -0400 (EDT) Subject: [Paleopsych] Gary North: Terrorism and Insurgency Message-ID: Gary North: Terrorism and Insurgency Gary North's REALITY CHECK Issue 462, 5.7.8 I had been planning to write on this topic before the terrorist bombings in London. The bombings have forced me to speed up my publishing timetable. We must distinguish carefully between insurgency and terrorism. There are overlaps in the two movements, but they are conceptually distinct. They are also tactically distinct. The insurgent is a guerilla. He is a defender against an invading military force. He is a warrior battling warriors. 
The terrorist is a member of an organization that seeks to disrupt civilian life as a means of regime change. His targets are civilians and civilian infrastructure. The insurgent has a limited goal: the expulsion of the invading troops. The terrorist has a much broader goal: the disruption of civil society in the name of a larger cause, usually messianic. He wants to heal the world of some all-encompassing evil. His is a never-ending battle. Unlike the insurgent, he never gets to the stage where he says, "We've won, so let's de-escalate." In the mid-1980s, the United States government began to provide Stinger ground-to-air missiles to Afghan resistance fighters. These were clearly insurgents. This technology forced Soviet pilots to fly ground-support planes at 15,000 feet rather than 5,000 feet. This forced a complete re-structuring of Soviet military tactics in Afghanistan. The ground troops no longer received reliable air cover. They pulled out. The Soviets lost the war because of this. Within two years of this retreat, the Soviet Union collapsed. The visibly collapsing socialist economy, coupled with the humiliation of the defeat in Afghanistan, gutted the self-confidence of the Soviet leaders. I have intermittently studied terrorism ever since 1963, when I took a course on modern Russian history. Modern Western terrorism began in late 19th century Russia. Lenin's older brother had been executed because he was a member of a Russian terrorist organization. This turned Lenin into a Marxist. In 1881, a terrorist group assassinated the Czar, who had been a liberal (for a Russian) reformer, the man who had freed the serfs. These terrorist groups were self- consciously attempting to destroy the Russian social order. They were revolutionary anarchists. They were convinced that terrorism would call forth repression by the state, which it did. Then, they believed, counter-repression terrorist movements could recruit followers to fight this oppression. They were right, in a way: counter-terrorism recruited Lenin for the cause of revolution. After he gained power in 1917, his initial targets were not the capitalists; they were the anarchists. He liquidated them or sent them to the slave labor camps in Siberia. The ultimate counter- terrorist was Lenin. The anarchist terrorists learned an old lesson: those who live by the sword die by the sword. AN ANCIENT TRADITION We are seeing an escalation of terrorism in Iraq on a scale that has no precedent in history. The suicide bombings are now not only daily, they are intra-daily. At the height of the Intifada, the State of Israel experienced one suicide bombing per month. On the evening news, we hear reports of multiple bombings all over the central part of Iraq: a dozen dead here, two dozen dead there. We are told officially that these bombers are outsiders coming into Iraq. We had better hope that these assessments are not true. If they are true, then the supply of suicide bombers will not decrease just because the United States pulls out of Iraq. If outsiders are the perpetrators, then they are not tied to national geography. They are not Iraqi nationalists. They are self-consciously part of a regional terrorist network, loosely structured. They are volunteering for service in a larger war, a war outside the geographical confines of their home countries. If these bombers are Iraqi nationalists, there is hope that our departure from Iraq will cool the conflict. The thought of regional terrorists in the Middle East is indeed terrifying. 
The West's oil is located there. If these are terrorists, as distinguished from insurgents, then they are close to the choke points of the West. If they are terrorists in service of an anti-Western, pro-Muslim cause, then they are closer to destructive power than any terrorists in history. They could create economic chaos in the West by closing the oil pipelines. Oil prices respond in volatile swings to minor marginal changes in output. In my view, this is what they are: Muslim fanatics who see themselves as part of a tradition going back to the Assassins. I have argued since 2001 that Osama bin Laden is self-consciously positioning himself as the Assassins' legendary and near-mythological leader, known as the Old Man of the Mountain. There is an attempt by Western analysts to deny the Islamic origin of these terrorists. This is a very difficult case to make. If it is true, then the terrorists are modern, secular, and nationalist, i.e., essentially Western. That would indicate a very small pool of "talent" to recruit from regionally. I think the story of nationalist suicide bombers from outside Iraq is the product of Western analysts' inability to imagine people who are dedicated to a religion with 1.2 billion adherents, who strap bombs to their bodies and blow up themselves and civilians. Western analysts find it easier to deal with modernism's terrorists than Islam's. I'm not buying it. This is an ancient war going back 1,400 years, with a terrorist tradition going back a thousand years. Bin Laden has self-consciously identified America as Crusaders. We should not ignore his rhetoric. He understands his "market." He has not made his appeal to regional nationalists. He has appealed to Muslims. It does not explain anything to label this "Islamofascism." This escalating movement has nothing to do with fascism, which was a short-lived movement of a posturing Italian ex-Communist, with military support from a German racist occultist. "Islamofascism" makes this movement sound modern. It is not modern, except in its technology, which is low tech and dirt cheap. Permit me to reprint part of an article that I wrote for Lew Rockwell's site in mid-September, 2001. I have not changed my opinion. * * * * * * * * * A terrorist group needs recruits. A terrorist movement needs recruits. If your strategy of terror involves the extensive use of suicide missions, you need very dedicated recruits. To get such recruits, you need the following: (1) a cause that is greater than any individual; (2) a sense of destiny associated with your cause; (3) the perception that a sacrificial act on behalf of your cause is never wasted or futile; (4) a vision of victory; (5) publicly visible events that demonstrate the power of your movement. > From what little I have read about Osama bin Laden, his movement possesses all five factors. He is especially skilled with respect to point five. He understands symbolism, and he understands Western media. This man is a formidable enemy of Western civilization. I believe that Americans have completely misunderstood the events of 9-11. The attack was not a direct assault on the United States primarily for the sake of making us fearful. It was part of a recruiting campaign. The response of the street people in Palestine was what he had in mind. He gave alienated Palestinians an event to celebrate. It also gave the Establishment Palestinians a chance to speak out against terrorism. That, too, was part of bin Laden's positioning. He is not Establishment. 
An extremist, especially a terrorist, must position himself as a member of the non-loyal opposition. Nothing I can imagine could have accomplished this better than the events of 9-11. The Poster If you want to understand what happened on 9-11, visualize a poster with bin Laden in a turban and flowing robes, pointing his index finger at you, with a slogan underneath: "Uncle Osama Wants You." That poster is aimed at the alienated folks back home. For Americans, the slogan is different: "Uncle Osama Wants You Dead." * * * * * * * * * FUSION OR CONFUSION? There are insurgent groups in Iraq. They have been killing American troops, about two a day, for two years. There are terrorist groups in Iraq. They have attacked civilians, local police, and some government figures. Are these groups unified? No. Is there a single chain of command? No. Is there any way to negotiate with any group's leader, who will act in the name of all groups? No. That is why this is now a never-ending war. There are many theories of what motivates the terrorists. This is appropriate: there are many motivations and many terrorist groups. Some are trying to foment a civil war. Others are no doubt dreaming of inflicting permanent damage on American foreign policy for the Middle East. Others are bin Laden's followers: Islamic radicals. All are agreed: Americans and their collaborators are targets. Americans are the symbol of Western power. Americans are there and therefore are convenient targets. The fact that there is a common enemy -- American troops and officials -- has led to confusion in the minds of Western analysts. They do not understand that the success of the insurgency in inflicting tactical damage on American troops has served as motivation for terrorists who see their cause in a much broader context, both geographically and historically. Terrorism and insurgency are not fused, but Western analysts are surely confused. They were confused going into Iraq, and they will probably remain confused after we leave Iraq. The terrorists are not confused. They have a goal: the overturning of the West's social order. They now see an historic opportunity. So can the Muslim in the street. This will make their recruiting easier. TAKING THE FIGHT ABROAD Whether the London bombings were the work of Muslim fanatics or anti-WTO fanatics, we do not know. What we know is that terrorism is spreading. The tactics of terrorism are being worked out in Iraq. Today, bombs. Tomorrow. . . ? The master tacticians were the inventors of the car bomb: the IRA. That invention appeared around 1973. It is gaining popularity. It almost brought down one of the Twin Towers in 1993. That would have cost 60,000 lives -- maybe twice that, if the second tower had been hit by a collapsing first tower, which might not have fallen straight down. Iraq has become the on-the-job training program for terrorists. Because the insurgency is perceived as local, which it is, the parallel terrorism is also seen as local, or at most regional. This is a convenient assumption. But is it accurate? To separate regional Middle Eastern terrorism from worldwide Islam is convenient for political analysts who are secular. They don't comprehend the idea of world conquest by an old religious movement that was from day one a military movement. The fact that most members of this religion have abandoned the idea of conquest by force does not deal with the problem of bin Laden, who is a representative of a respected sub-tradition. The insurgency may be growing in Iraq. 
That is a military concern. On the other hand, it may not be growing. It may be "merely" holding its own. What is unquestionably growing is the terrorist movement. That is of much wider and more profound concern than the military one. Because terrorism is growing in Iraq, it is easy to confuse the terrorists with the insurgents. It is easy to assume that once America leaves Iraq, the terrorists will fade away, along with the insurgents. This expectation has about as much validity as the neo-conservatives' expectation in February, 2003, that our troops would be greeted as liberators by the broad mass of Iraqis. It is, in short, a pipe dream -- and there is some funny-smelling stuff in the pipe. CONCLUSION Terrorists are like sharks: they follow the scent of blood. When terrorist tactics appear to be undermining people's trust in the existing social order's ability to defend stability, these tactics spread. The goal of the terrorist is messianic: the replacement of the existing social order with a new one, rarely described and never presented in blueprint form. The enemy is real: existing society. The reform is vague: mostly positive adjectives. Positive adjectives in the minds of terrorists make for intensifying adverbs. With respect to the work of terrorists, they've only just begun. So have the counter-terrorists. Bad mojo. From checker at panix.com Sat Jul 9 00:05:17 2005 From: checker at panix.com (Premise Checker) Date: Fri, 8 Jul 2005 20:05:17 -0400 (EDT) Subject: [Paleopsych] Gary North: Time Wasting: Good vs. Bad Message-ID: Gary North: Time Wasting: Good vs. Bad Gary North's REALITY CHECK Issue 458, June 24, 2005 We will switch to a new mail server next week. Don't worry; there will be no glitches. This is digital! But if, due to circumstances beyond our control, you stop receiving this letter on Tuesdays & Fridays, and if you still want to receive it, sign up again by sending a request to: reality at dailyreckoning.com Here is a reason to stay on this mailing list. You get goodies like the following. . . . Occasionally, things go right for the good guys and wrong for the bad guys. Here is a case in point. We could use a lot more like this one. http://shurl.org/bigmistake TIME-WASTING: GOOD VS. BAD What I am now experiencing in one isolated aspect of my life's work, you may already have experienced or should experience. I hope you can learn from my mistakes. I also hope you can find a way to overcome your comparable mistakes inexpensively. As you read this, you may think, "That sounds a lot like my experience with. . . ." You may even encounter a "shock of recognition" -- a phrase that usually refers to a moment of self-awareness in which a person recognizes the reality of what he has become or was. I had one of those shocks this week. It was the result of my first evening spent enduring a long, miserable task that I had better accomplish: the cleaning out of my files of clippings. I began to clip magazines and newspapers in the mid-1960s. This escalated a decade later, and became (I now see) maniacal from about 1980 to 1996. Today, I have a dozen 4-drawer legal files filled with clippings. The categories are in the hundreds. My mania ended with the advent of the Web. I stopped subscribing to magazines and newspapers. (I still subscribe to financial newsletters.) Yet I did not save to disk most of the files that I have read on-line. I do save a few documents this way, now that there is a free disk-search-and-retrieve program that I like: Copernic's.
http://copernic.com But saving Web-based pages is not a mania for me. That's because I can find more material than I have time to read on almost any topic simply by using Google. I am in the process of moving. I want to get rid of half my filing cabinets. This means getting rid of at least half my files. Whether I will attain this goal is problematical. It took me two hours to go through the equivalent of one file drawer. That means 47 to go. It may be wiser to just put up with the files. Or maybe I should just toss out all of the files. That's what my father-in-law did two decades ago, after 40 years of clipping stuff. At the time, I thought he had made a big mistake. I no longer do. WASTED TIME Every life is filled with wasted time. Every job is also filled with wasted time. We spend much of our lives collecting, learning, cataloguing, and generally burying ourselves in information that later turns out to be trivia. It took me two hours just to skim through two boxes of clippings. I tossed out a pile of papers about 11 inches high. Most of these were yellowed newspaper clippings. I took one look and tossed most of them. Yet to file each one, I had to clip it, paste the columns on a sheet of paper, label the paper, and file it. That took time. I hate to think about how much time it took. In the bad old days, I had only one category per document. Yet most documents should have been identified by several key words. A clipping on OPEC could have been filed under "Energy: Oil," "foreign policy," or "cartels." A clipping on Henry Kissinger could have been filed under "Kissinger," "foreign policy, "conspiracy," "war," or "myopic weasel." I had to make a choice. Then I had to remember that choice. On the whole, I have been able to retrieve old articles reliably, if I could recall the article. But, after 1996, I found that I rarely went to my old files. So, I have forgotten many categories and most clippings: thousands and thousands of clippings. I did not know that the Web would arrive. I did not know that the way we access new knowledge and details of fading memories would change more than it changed since the invention of the printing press. I knew by 1982 that scanning software would someday allow me to file my clippings on a hard disk. I knew that data-retrieval software would eventually enable me to retrieve whatever I had filed. I did not think it would take over a decade to produce workable but inefficient products that would do this. PageKeeper was one of them. It did not catch on. Nothing really works well yet. If it did, Microsoft would either buy it or imitate it. So, I kept clipping. I kept buying filing cabinets. Now I will toss out most of what I filed. As I read file after file, I thought, "Why did I bother to clip this?" There were exceptions. I am keeping about half, but most of these are longer articles clipped from magazines. The newspaper clippings are mostly too narrow or too obscure or long since superseded. As I write this, I recall my advice to a friend who also suffered with clipping mania. He told me in the mid- 1980s that he was facing a major problem: a mild heart attack. His physician told him to slow down. Yet he was still compulsively clipping. I told him to let every newspaper pile up for a week. Then start reading them. I predicted that most of what he would have clipped a week before would have become outdated. He did what I said. He reported back within a month that he was clipping far fewer articles. But I did not take my own advice for a decade. 
The Web forced my hand, not a heart attack. BUT NOT COMPLETELY WASTED TIME Nevertheless, my files have convinced me that all that time was not wasted. The discipline of reading, clipping, and cataloguing articles did provide me with an overall sense of what was going on. Now that I see them, I remember some of it. I can see in one quick survey what developed. A few of the clippings are still worth keeping. Decades ago, Linus Pauling told his assistant Art Robinson that there is value in reading widely and trying to remember seemingly unrelated facts. Pauling said that the mind will sometimes come up with links to memories that will prove useful. Pauling was correct. We don't know how our minds make these connections. They just pop up. We recall a related incident, the way I just recalled my friend with the heart attack. We say, "garbage in, garbage out." But we don't know today what will turn out to be garbage later. We know that most of it will be, but we cannot accurately predict what. So, we fill our minds with useless stuff. Then we forget most of it. That's what files are for. In the old days, when I decided to write on some topic, I would skim through a file to see what I had clipped. But if the file was really fat, or if there were several fat files on the same topic, I would rarely do this. It was too much work. I would rely on my memory to pop up some recollection. Then I would go to a file and go through it in search of the document. If we only knew what the future will bring... we could avoid so much garbage. ANOTHER IMMENSE "FILE" Technology keeps advancing. As it does, it makes earlier technologies that were once on the cutting edge of progress turn obsolete. It also makes hash of our plans and our investments. My entire career has been tied to books. But the format of books is changing: from paper-based to digit-based. There will come a day when I will have a lightweight, book-sized reader with a screen as easy to read as a 12 dots per inch piece of paper. It will enable me to highlight a passage, file it electronically, attach key words to it, and subsequently identify what book and page it came from. Its non-availability today is a matter of price, not technology. When such a product is offered for sale below $500, and when a common format makes digital books legally available for reading, my research will change: no more yellow highlighters, no more photocopies, no more filing cabinets, and no more dependence on my fading memory. I suspect that the product's main limiting factor today is not hardware or software but rather copyright law. There is also the problem of converting entire libraries to a digital format that will be readable by multiple hardware products. Google has begun the work of conversion, and just in time: acid-based paper books printed before 1880 are disintegrating. It is only a matter of time before Harvard and Stanford will have no library advantage over Podunk State for anything published before 1923: public domain. If the newly independent Republic of Freedonia decides to ignore international copyright law, Harvard and Stanford will have no library advantage over you and me for $200 a year. Today, I can walk into a university library and go on-line. Only rarely do libraries require student passwords. I can access any scholarly journal that the library subscribes to. I can find an article and e-mail it to myself. I can then file it on my own computer. Students can access LexisNexis, which is a database of newspaper and magazine articles.
If I can't gain access (and I probably can), I can pay a student $6 an hour to research any topic. Or I just find a cooperating student who lets me use his/her password. If I want to write a book in three weeks, the way I wrote "The War on Mel Gibson," I can do it, cheap. I own a library. It is housed in a 3,000 square foot facility. It has 100 bookcases, seven shelves per case. I am now about to give away 80% of it. The Web has changed the way I work. I have not decided on which institution should get it. Along the far wall is a boxed collection of microcards. On these cards are readable imprints of everything published in the United States from 1639 to 1811: books, pamphlets, sermons, newspapers. It is called "Early American Imprints." I used this set in my university's library, 1969-71 to write my Ph.D. dissertation, "The Concept of Property in Puritan New England, 1630-1720." In those days, only a few graduate research libraries had this set. In the late 1980s, microfiche replaced microcards as the preferred technology. Microfiche can be used to produce printed pages. Microcards cannot. The company that produced the cards was about to use them as landfill. A man found out and went to the managers. He offered to buy them. He agreed to sell them to individuals and organizations that would not buy the new microfiche version. The outfit agreed. My non-profit research organization bought the main set for $5,000, and I paid even more for the newspapers. This was a mistake. I never got back to the Puritans. Now, microfiche technology is obsolete. The first set of materials is available to libraries on-line. The material is searchable. How, I don't know. The images are poor and the type face is ancient. "The letter "s" looks like an "f," except at the end of a word, when it looks like an "s." The newspapers are not yet digitized. I offered to give away the collection to a college library. The librarian politely refused. He is even dumping hard copies of scholarly journals, as are most librarians. Microcards are dinosaurs. So, I will probably go on eBay and offer the collection to home schoolers or Christian day schools. What cost $18,000 in 1988 will get a couple of thousand, maybe. But in 1965, the same collection cost probably $50,000, worth six times as much today. Technology giveth and technology taketh away. IF WE KNEW THE FUTURE Entrepreneurship is the art of forecasting the price of things, and then buying items today that will be worth more later, and selling items today that will be worth less. We pay in time for things that will appreciate or depreciate. Money is replaceable. Time isn't. When we waste time, it usually costs us more in the long run than when we waste money. The allocation of time is the most difficult of our responsibilities. I regret having bought those microcards. I did not put them to good use. But I regret far more the investment of time in assembling all those clippings. I see them piling up, and I wonder, "What was I thinking?" Yet there are a few items that I am glad I saved. I think they will prove useful someday in my writing. But who knows? Maybe I should toss out all of the files. It would save about 150 hours of work. But to do that would be to acknowledge that my future will be very different from my past. Nobody likes to make that kind of complete break with the past. In the movie, "About Schmidt," there is a scene where Jack Nicholson returns to his former place of employment, an insurance company. He spent his career in the accounting department. 
As he passes by the building, he sees his files in the basement, ready for the dumpster. In these boxes rest the visible results of his life's work. They were important to him at the time, but they are about to be tossed away. What about Schmidt? Was he the equivalent of those files? The script writer did a good job with that scene. He did not have to have Nicholson verbally ponder his own worth. It was crucial to the movie that he not do this. The character was not about to admit to himself or others that his life in retrospect seems to have been, if not wasted, then not significant. But for most people, the illusion of their cultural significance doesn't last long. There is nothing like attending a funeral to remind us of this. But tossing out files comes close. ILLUSIONARY OCCUPATIONS Because I had something close to mindless to do, I watched the TV show on the 100 most famous lines in movies. There was no doubt in my mind which line would be number one, any more than I had a doubt regarding its predecessor on the most famous song. The song had to be "Over the Rainbow," and it was. The line had to be "Frankly, my dear. . . ." The only reason for watching that sort of show is to find out the also-rans. Far and away, the most important line in the history of the movies was written by William Goldman. Several of Goldman's lines made the top 100, but not his most important one. That line was put into the mouth of Deep Throat in "All the President's Men." It has become legendary: "Follow the Money." It does not appear in the book. So, Goldman's greatest line gets no respect. Hardly anyone knows that he wrote it. Movie actors may imagine that they will exert influence. A few of them may, if they get the right parts. But they do not speak their own lines, and hardly anyone recalls the names of the script writers of any specific line, unless it's a re-make of a play by Shakespeare. Writers imagine that they will be remembered, but here is the grim reality. First, hardly anyone reads old novels, except when they are assigned in an English class. Second, nobody reads old non-fiction books, except when they are assigned in a history class. The Great Books make Great Shelves, but hardly anyone ever takes one of them down from the shelf to read it in order to gain greater wisdom. I sat down and listed books over a century old that have influenced my thinking directly, as distinguished from the influence of some contemporary who said the book is important. The list is incredibly short. De Tocqueville's "Democracy in America" is one, but I only finished both volumes a couple of years ago. I had read in it in grad school. His "Old Regime and the French Revolution" influenced me: one main idea. Burke's "Reflections on the Revolution in France" (1790) is on my list. Bastiat's "The Law" (1850) influenced me, but it is really a long essay. http://shurl.org/bastiat The information in old books gets superseded very fast. If a successor does not pick it up and run with it, or if he has no successor, a book will die. Virtually all old books have died. They are read, not for wisdom but to find out what some author said and what influence he had, way back when. So, authors may enjoy the illusion of having produced a stand-alone masterpiece, but it's still an illusion. Great artists have a shot at this. Nobody else does. But there are few great artists around today, as far as we can see. CONCLUSION We are all in the same boat. Our lives are filled with what appears to be waste. 
Yet the waste seems necessary for whatever productivity we add. We cannot eliminate waste. At best, we can minimize it. Goal-setting and time-management are techniques that help us reduce waste. But no matter what we do, most of what we do seems to subtract from the legacy we leave behind. If this is true, then we might as well accept waste. Somehow, it is an inescapable part of our lives. Waste contributes to our production. So, it's not really all waste. It's just whatever is unaccounted for in our overall production process. I budget waste into my life. I recognize that some of my time will be spent on what appears to be unproductive details. We can increase our output by acknowledging the reality of waste and dealing with it. It's like cholesterol. There is good waste and bad waste. So, I have budgeted in an hour a day for tossing out clippings. But I have decided that it's not for saving file cabinet space. It's for coming across an occasional gem, and hoping that my fading memory will retain it. You know the story: the hope for a pony in the pile of waste. I suggest that you pick a project like this and complete it. If nothing else, it's a good reminder of how few ponies there are in life. We should learn to appreciate them. From checker at panix.com Sat Jul 9 00:05:27 2005 From: checker at panix.com (Premise Checker) Date: Fri, 8 Jul 2005 20:05:27 -0400 (EDT) Subject: [Paleopsych] Wired: Sam Jaffe: Giving Genetic Disease the Finger Message-ID: Sam Jaffe: Giving Genetic Disease the Finger http://wired.com/news/print/0,1294,68019,00.html 5.7.5 Scientists are closing in on techniques that could let them safely repair almost any defective gene in a patient, opening the door for the first time to treatments for a range of genetic disorders that are now considered incurable. The breakthrough, announced in the journal Nature in June, relies on so-called zinc fingers, named after wispy amino acid protuberances that emanate from a single zinc ion. When inserted into human cells, the fingers automatically bind to miscoded strands of DNA, spurring the body's innate repair mechanism to recode the problem area with the correct gene sequence. A method for fixing miscoded DNA by injecting foreign genes into cells won headlines three years ago when doctors in France and Britain announced a handful of successful cures related to X-linked severe combined immunodeficiency disease, or SCID, also known as "bubble boy" disease. But that method was ultimately proven unsafe. In a paper published earlier this month, scientists at California biotechnology company Sangamo BioSciences showed that zinc fingers can be used to erase targeted portions of DNA without risk of harmful side effects. "This doesn't just deliver a foreign gene into the cell," said Nobel Prize winner and CalTech President David Baltimore, who with a Sangamo paper co-author Mathew Porteus proposed this method to cure genetic diseases. "It actually deletes the miscoded portion and fixes the problem." At the heart of the breakthrough is the concept of "if it's broke, break it some more." Cells have a method of DNA repair called homologous recombination, which fixes breaks in the double helix of our chromosomes. But the process only repairs places where the DNA has been cut, not where genes have been miscoded. Using a package of synthesized zinc fingers, cells can be tricked into doing nano-surgery on their own genes, Sangamo researchers found.
The zinc fingers home in like a guided missile on the exact spot in the genome doctors are trying to target and then bind to it. DNA-devouring enzymes then cut through the double helix of DNA at the exact beginning and end of the targeted gene, and a template of donor DNA helps rebuild the deleted strand. While such a therapy has been theorized for years by Baltimore and others, Sangamo scientists are the first to show test-tube results with human cells. In a paper published June 2, Sangamo researchers showed how they were able to correct the defective gene in 18 percent of the T-cells extracted from the body of an X-linked SCID patient. That should be enough to cure the disease, as it only takes one corrected T-cell to repopulate a person's immune system with healthy cells, according to Sangamo. If successful in trials, Sangamo's technology would be the first successful gene therapy, three decades after the concept of curing diseases by tinkering with the genome was first proposed. Most gene therapy trials have failed because the methods of inserting new genes into cells (usually with modified viruses as vectors) haven't proved to be effective enough. One trial that did succeed, but then ended in tragedy, was a 2002 French X-linked SCID trial that used retroviruses to deliver a new gene into the patients. The new gene cured the disease in 12 patients, but went on to cause leukemia in three of them. It turned out the foreign gene, in addition to producing the protein that vanquishes X-linked SCID, had the unexpected side effect of sometimes turning on a cancer-causing gene. Sangamo's technology overcomes that problem. Whereas the French viruses inserted the foreign gene randomly into the host cell's genome, the zinc fingers are highly specific and can land only at the targeted gene. "They've certainly raised the bar for gene-therapy safety," said Scott Wolfe, a zinc-finger researcher at the University of Massachusetts Medical School in Worcester, Massachusetts. He points out that the early proof-of-principle work was highly toxic to the cells. The zinc fingers weren't specific enough and they created so many double-stranded breaks in the DNA that a lot of the cells chose to commit suicide rather than try to repair all the breaks. "They really seem to have solved the toxicity problem altogether." Although X-linked SCID patients will probably be the first to try the therapy, the technology is extremely versatile for a host of human diseases. "Right now, its greatest weakness appears to be that it is optimized for very small patches of gene repair," said Baltimore. "If it's a long sequence of DNA that has to be fixed, this might not be the best way to do it." Nevertheless, there are a lot of ways to attack diseases without replacing whole genes. Other potential targets for the therapy range from many types of cancer to cystic fibrosis and even AIDS. "If they can figure out how to optimize their zinc fingers for any spot on the genome, this could target any gene you want it to," said Wolfe. From checker at panix.com Sat Jul 9 00:05:59 2005 From: checker at panix.com (Premise Checker) Date: Fri, 8 Jul 2005 20:05:59 -0400 (EDT) Subject: [Paleopsych] NYT: Who Stole Sleep? The Pillow as Perp Message-ID: Who Stole Sleep? The Pillow as Perp New York Times, 5.7.7 http://www.nytimes.com/2005/07/07/fashion/thursdaystyles/07online.html By MICHELLE SLATALLA SHORTLY before midnight the other night, someone leaving the movie theater a block and a half from my house dropped car keys on the pavement. 
Actually, I don't know if the jingling came from keys or for that matter if there even was jingling, because like most mortals I slept through the incident. Even my dog Sticky, who has ears big enough to interest NASA, continued to snore. But not my husband. "What was that?" he shrieked, sitting upright as if bitten by a snake. The princess who could not sleep on a pea had nothing on my husband. His rest is often disturbed by distant barking, other people's air-conditioners and "something that sounds like a mosquito, only it never stings me." For years I snoozed through the drama. But recently neck and back twinges have begun to surge through my husband like electrical currents and have prompted him to jump out of bed, switch on the light and hop around. Even I can't sleep through that. I'm not the first spouse to stumble blearily toward the computer at 2 a.m. in search of a solution. But in the lonely predawn hours, as I considered the Internet's various suggestions - from sleep masks like the foldable Dreamlite Relaxation model ($6.95 at [3]Dreamessentials.com) to white noise machines like the Marsona Sleep Mate 980 ($52.95 at [4]Naturestapestry.com) - one possibility intrigued me above all others. Maybe we needed better pillows. Ours, old and flat and musty, provided about as much neck support as a saltine. Could the pillows be sabotaging my husband's sleep? "A pillow is important if a person has poor sleep to begin with," said Dr. Clete A. Kushida, director of the Stanford University Center for Human Sleep Research. "The environment is important." But which pillow? There are no official standards for pillows and no research to prove that one type is better than another. "The studies haven't been done," Dr. Kushida said. "Basically it comes down to what is the most comfortable." Depending on who you are, that might mean [5]llbean.com's goose down damask pillow (in sizes from standard to king and in fills ranging from soft to firm, $49 to $99). Or Overstock.com's Circle of Down pillow ($29.99). Or [6]Livingincomfort.com's hypoallergenic pillow ($17.88). Or maybe the answer is a synthetic pillow from [7]Bedbathandbeyond.com (from $7.99 for the Jumbo Gusset to $79.77 for the standard-size Indulgence Supreme Thermo-Sensitive). I needed guidance. "How can I tell if my husband is a Thermo-Sensitive type or a Circle of Down man?" I asked Dr. James Maas, a professor and sleep researcher at Cornell University. "Given all the options, I wonder if anyone is sleeping on the right pillow." Professor Maas said that pillow issues affect a great number of Americans. "Somewhere near 50 percent of the country is sleep deprived," he said during a phone interview. "This country is a country of walking zombies, mostly due to sleep length but also to poor quality of sleep." Professor Maas recommended that my husband cut back on his caffeine intake and that we create a bedroom that was cool, dark and comfortable (which I figured was a nice way of saying our 95-pound dog should get off the bed). As for pillows, the difference comes down to down versus synthetic fill, Professor Maas said. A good pillow of either stuff should last up to 10 years, he said. You can test your pillow to find out if it's past its prime. "You take your pillow," he said. "Fold it in half. If it doesn't spring forward and open instantly by itself, you've got a dead pillow. Replace it." 
Professor Maas said he liked the quality of pillows manufactured by United Feather and Down, an Illinois company whose products, both down and synthetic, sell under various private labels. For instance United Feather and Down's Insuloft down and PrimaLoft synthetic-fill pillows are for sale online at [8]thecompanystore.com, [9]Landsend.com, [10]Potterybarn.com and llbean.com. "We do a wide variety of fills," said Becky McMorrow, United Feather and Down's marketing manager. "Every retail customer tweaks the pillow to have an exclusive style. Restoration Hardware and Williams-Sonoma use the same fill but not the same fabric cover. Pottery Barn used a damask stripe." No matter where you shop, expect to pay from $29 to $59 for a good synthetic pillow and from $59 to $129 for a goose down pillow with a minimum of 550 fill power, Ms. McMorrow said. "Fill power is a measurement of how lofty an ounce of down is and how high it comes up on a beaker after it's compressed," Ms. McMorrow said. After ascertaining a few facts about my husband - mostly sleeps on his side, switches back and forth between a flat pillow and a fluffier one as the night progresses - Ms. McMorrow mailed me four models to test. I did not feel it necessary to mention the experiment to him; he has enough on his mind. The first night, I discreetly slipped a PrimaLoft synthetic-fill Side Sleeper into his pillowcase. Gusseted to provide an even sleep surface and neck support for a side sleeper, it was similar to a $39 model from Bedbathandbeyond.com. Then I turned out the light and lay poised to take notes as he fell into an immediate deep sleep. Thirty minutes passed without a peep out of him. Then 60. Then I fell asleep. The next night I introduced the fluffier Insuloft down-filled Side Sleeper (very like a $99 version at [11]Realgoods.com). After 30 minutes he sat up and asked suspiciously, "Do I hear a raccoon?" From this I deduced that while both Side Sleepers provided neck support, he preferred the denser texture of synthetic fill. The third night he also slept well on a down-synthetic blend called the Lyocell (similar to a pillow sold at thecompanystore.com for $89). By the fourth night I was the one who had earned the right to sleep on the Face Saver with "aloe-soft fabric" to prevent wrinkles. The pillow is to go on sale this fall on the Home Shopping Network for about $35. The conclusion? I bought all the pillows, because all four were an improvement over our old ones. The dog thought so, too. E-mail: [12]slatalla at nytimes.com From checker at panix.com Sat Jul 9 00:07:59 2005 From: checker at panix.com (Premise Checker) Date: Fri, 8 Jul 2005 20:07:59 -0400 (EDT) Subject: [Paleopsych] Tullock Rules! Message-ID: Yesterday I circulated a 1997 New York Times article that estimated the value of the ecosystem as worth $33 trillion. Someone asked me in comparison to what. Well, annual World Domestic Product is about $30 trillion. The value of human and other capital is ten times annual WDP, or $300 trillion, and my guess was corroborated by googling. In this country at least, according to Robert Fogel, human capital is about two-thirds of the total. But humans value themselves more highly than capital markets do, as evidenced by various decisions people make on paying to avoid death at various probabilities. It turns out to be $2-4 million per person in the US. Call it $1 million for the generally much less rich world. So the value of people as estimated by themselves is 6 billion times $1 million per person, or $6 quadrillion. 
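That arithmetic is small enough to check in a few lines. A minimal sketch in Python, taking the post's own round figures (the $1 million world-average value per life and the five percent altruism share cited just below are the post's assumptions, not established constants):

    population = 6e9               # the post's round world population
    value_per_life = 1e6           # dollars; the post's rough world-average figure
    self_valued_total = population * value_per_life
    print(self_valued_total)       # 6e+15 dollars, i.e. $6 quadrillion

    altruistic_share = 0.05        # Tullock's five percent rule, discussed below
    print(altruistic_share * self_valued_total)  # 3e+14 dollars, i.e. $300 trillion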
Five percent of this is $300 trillion. Now Gordon Tullock says we are 95% selfish and 5% altruistic. He based his calculations on the fact that Americans allocate 5% of their incomes in the form of net downward redistribution. This has remained constant for 140 years and does not depend on whether the redistributing is done primarily privately or by local governments or primarily by state and national governments. Here's a different confirmation of the same. But the ancient Hebrews had already articulated the Tullock Five Percent Rule, when they said: The Earth from God we do but rent, And all he asks is ten percent. Related articles to follow. From checker at panix.com Sat Jul 9 00:08:07 2005 From: checker at panix.com (Premise Checker) Date: Fri, 8 Jul 2005 20:08:07 -0400 (EDT) Subject: [Paleopsych] Wiki: Value of Earth Message-ID: Value of Earth http://en.wikipedia.org/wiki/Value_of_Earth In [6]economics, value of [7]Earth is the ultimate in [8]ecosystem valuation, and important to [9]value of life calculations. It begins with the simple problem that if the Earth ceases to support life, and human life does not continue elsewhere, all economic activity will also cease. There are several ways to estimate the value of Earth: * Estimate the [10]value of life for everything that lives on it, and assign the Earth, as a necessary component and home for that life, the [11]natural capital on which [12]individual capital thrives, at least this much value. Since not all life is valued, and a very little is overvalued, there is high risk of under-estimation. One way to avoid this is to work continent by continent to see if there is systematic inflation of the price of life on some compared to the others. * Estimate the cost of [13]replacing the Earth, which may include finding and colonizing another planet, or creating one artificially in a compatible orbit. What if the natural capital of a nearby planet, e.g. [14]Mars, were to compete? What would be the cost of [15]terraforming it to make it as comfortable as Earth? Or even barely habitable? An issue is whether to count transport costs. + As a variation, estimate the cost of a smaller habitat, such as [16]Biosphere 2, and multiply its cost by the ratio between the population of Earth and of that smaller habitat. This however is to rely on below-minimum cost figures, since Biosphere 2, although brilliantly ambitious and expensive, was a flop. This method yields only a floor value which Earth itself would vastly exceed. See below for more details. + As another variation, figure out every disaster that might occur due to failure of the [17]biosphere, to lesser or greater degree, and calculate the price of [18]insurance against all of it. The averted insurance payments are effectively a yield, and, this is one way to calculate the value of what Earth is doing for us, for as long as these averted failures do not occur. * Calculate the yield of [19]natural capital, as [20]nature's services, and use the size and consistency of this yield to calculate how much capital there must be. This method was pioneered by [21]Robert Costanza and is promoted in [22]Natural Capitalism. As one might expect, these all produce quite high values for the entire Earth, usually at least in the hundreds of quadrillions of [23]US dollars. This seems appropriate. However, even with this sum in hand, it seems unlikely that even [24]experienced reconstruction subcontractors could complete the task of replacing Earth, certainly not without using Earth itself as a base. 
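The Biosphere 2 variation listed above is worked through in the "Replacement methods" section below; the scaling itself takes only a few lines. A minimal sketch in Python using the article's own figures (project cost, crew size, world population, and world GDP are all taken from the article, not independently checked), together with the article's rule that the chance of natural replacement discounts the cost of taking a resource unit:

    biosphere2_cost = 240e6        # dollars spent on Biosphere 2, per the article
    crew = 8                       # people it was built to support
    world_population = 6.5e9       # the article's figure
    world_gdp = 30e12              # dollars per year, the article's assumption

    floor_value = biosphere2_cost / crew * world_population
    print(floor_value)             # 1.95e+17 dollars, the article's floor estimate
    print(floor_value / world_gdp) # 6500.0, i.e. about 6.5 thousand times world GDP

    # Expected cost of taking one resource unit, discounted by the chance that
    # nature replaces it; the article's 50 percent example halves the cost.
    def expected_unit_cost(full_cost, replacement_probability):
        return full_cost * (1 - replacement_probability)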
Rent for use of Earth and its orbit might then also have to be included, and it would be hard to price this without calculating the price of the Earth, again. One way around this is to simply declare the Earth [25]priceless or to be exactly and only as valuable as all [26]financial capital in circulation. This may be equivalent to declaring it [27]worthless however, as [28]economics deals very poorly with assets that are too valuable to trade actively in markets. Replacement methods Returning to the calculation in terms of the replacement cost of Earth's biosystems: In Biosphere 2, over $240 Million was spent on developing the infrastructure to support 8 people for two years. The project failed and fresh air had to be pumped in to save the lives of the participants. So Earth is worth at least ($240 M / 8 people) × 6.5 Billion people on Earth = 1.95 × 10^17 dollars. This represents the minimum value of the Earth using today's technology. Because the project failed, the true value must be higher than this amount. To put this into perspective, assuming the total value of the world's [30]GDP is $30 Trillion, that sum divided into $1.95 × 10^17 = 6.5 thousand times the world's current GDP. From this we can estimate the cost of cutting a tree or taking a single fish from the ocean if there is evidence that that yielded resource unit may not be replaced. The probability that the resource will be replaced reduces the cost, so a 50% chance that it will be replaced implies cutting in half the cost, since two of them can be taken, on average, before it isn't replaced by [31]nature's services. These estimates can be done using a straight line method, for initial estimates, or using an exponential to place greater value on the remaining elements of a declining resource. Further calculation of the value of one tree, replaced or otherwise, a metric ton of fish, of soil carbon, depends on these probabilities. The curve for Replaced and Not Replaced biomass will be relatively equivalent as long as the total biomass is relatively large. As the total biomass in a specific area becomes depleted to the point where the entire sustainability of the biomass is threatened, then the exponential part of the curve comes into play. Ultimately, we are left with the question: how much are we prepared to pay in order to avert imminent death as individuals? That sum is relatively large. As the resources are depleted to the point where the conflict over what remains begins to dominate the risk of taking it, it becomes more obvious due to costs of protection and securing property. So, any calculation based on costs of replacing ecosystems tends to lead to a calculation based on costs of protecting ecosystems so that their yield can be controlled - but only at the tail end of the process, when it is too late to replace them. There are implications for costs of [32]national security and [33]climate change, both of which may have to be counted as full [34]factors of production in such an analysis, if not full [35]styles of capital - a factor which if not present in tight parameters prevents all gains from all investment in production. See Also * [37]Earth Day * [38]World Ocean Day * [39]World Water Day [41]Categories: [42]Free-market environmentalism | [43]Sustainability References 6. http://en.wikipedia.org/wiki/Economics 7. http://en.wikipedia.org/wiki/Earth 8. http://en.wikipedia.org/wiki/Ecosystem_valuation 9. http://en.wikipedia.org/wiki/Value_of_life 10. http://en.wikipedia.org/wiki/Value_of_life 11.
http://en.wikipedia.org/wiki/Natural_capital 12. http://en.wikipedia.org/wiki/Individual_capital 13. http://en.wikipedia.org/wiki/Replacing_the_Earth 14. http://en.wikipedia.org/wiki/Mars_%28planet%29 15. http://en.wikipedia.org/wiki/Terraforming 16. http://en.wikipedia.org/wiki/Biosphere_2 17. http://en.wikipedia.org/wiki/Biosphere 18. http://en.wikipedia.org/wiki/Insurance 19. http://en.wikipedia.org/wiki/Natural_capital 20. http://en.wikipedia.org/wiki/Nature%27s_services 21. http://en.wikipedia.org/w/index.php?title=Robert_Costanza&action=edit 22. http://en.wikipedia.org/wiki/Natural_Capitalism 23. http://en.wikipedia.org/wiki/US_dollar 24. http://en.wikipedia.org/wiki/Halliburton 25. http://en.wikipedia.org/wiki/Priceless 26. http://en.wikipedia.org/wiki/Financial_capital 28. http://en.wikipedia.org/wiki/Economics 30. http://en.wikipedia.org/wiki/Gross_domestic_product 31. http://en.wikipedia.org/wiki/Nature%27s_services 32. http://en.wikipedia.org/wiki/National_security 33. http://en.wikipedia.org/wiki/Climate_change 34. http://en.wikipedia.org/wiki/Factors_of_production 35. http://en.wikipedia.org/wiki/Capital_%28economics%29 37. http://en.wikipedia.org/wiki/Earth_Day 38. http://en.wikipedia.org/wiki/World_Ocean_Day 39. http://en.wikipedia.org/wiki/World_Water_Day 40. http://en.wikipedia.org/wiki/Value_of_Earth 41. http://en.wikipedia.org/w/index.php?title=Special:Categories&article=Value_of_Earth 42. http://en.wikipedia.org/wiki/Category:Free-market_environmentalism 43. http://en.wikipedia.org/wiki/Category:Sustainability 44. http://en.wikipedia.org/wiki/Value_of_Earth 45. http://en.wikipedia.org/wiki/Talk:Value_of_Earth 47. http://en.wikipedia.org/w/index.php?title=Value_of_Earth&action=history 48. http://en.wikipedia.org/w/index.php?title=Special:Userlogin&returnto=Value_of_Earth From checker at panix.com Sat Jul 9 00:08:11 2005 From: checker at panix.com (Premise Checker) Date: Fri, 8 Jul 2005 20:08:11 -0400 (EDT) Subject: [Paleopsych] Progressive Policy Institute: Trade Is an Increasing Share of the New Economy Message-ID: Trade Is an Increasing Share of the New Economy http://www.neweconomyindex.org/section1_page03.html WHY IS THIS IMPORTANT? The dramatic expansion of trade means more robust competition, which makes constant innovation more critical to success. For that reason, globalization has accelerated industrial and occupational restructuring, leading to the decline of some industries and jobs, and the growth of others. One indicator of the extent of the trend toward globalization is the growing value of exports and imports as a share of the economy. THE TREND: Trade has become an integral part of the United States' and world economies. U.S. exports and imports have increased from 11 percent of GDP in 1970 to 25 percent in 1997. Moreover, the United States is increasingly specializing in more complex, higher value-added goods and services, as reflected in the fact that the average weight of a dollar's worth of American exports is less than half of what it was in 1970. World exports increased from $1.3 trillion in 1970 to $4.3 trillion in 1995, in constant dollars. And globalization may be about to move up to a new level. 
Jane Fraser and Jeremy Oppenheim, of the consulting firm McKinsey & Company, have estimated that the value of the world economy that is "globally contestable," which is to say open to global competitors in product, service, or asset ownership markets, will rise from about $4 trillion in 1995 (approximately a seventh of the world's output) to more than $21 trillion by 2000 (about half of world output). According to Fraser and Oppenheim, "We are on the brink of a major long-term transformation of the world economy from a series of local industries locked in closed national economies to a system of integrated global markets contested by global players."^[28]11 This growth will be driven by global capital markets, reduced economic and trade barriers, and perhaps most importantly, technological change, which makes it easier to locate enterprises and sell products and services almost anywhere. For example, online brokerages like E-Trade or Charles Schwab are just as accessible from Singapore or New Zealand as they are from the United States.

THE DATA:^[29]12

References
 28. http://www.neweconomyindex.org/endnotes.html#11
 29. http://www.neweconomyindex.org/endnotes.html#12

From checker at panix.com Sat Jul 9 15:48:48 2005 From: checker at panix.com (Premise Checker) Date: Sat, 9 Jul 2005 11:48:48 -0400 (EDT) Subject: [Paleopsych] Scientific American: Natural-Born Liars Message-ID: 

Natural-Born Liars
http://www.sciam.com/print_version.cfm?articleID=0007B7A0-49D6-128A-89D683414B7F0000
May 18, 2005

Thanks to Alice Andrews for this. What I want to know is why mechanisms for detecting *self*-deception haven't evolved as well. Maybe they have, to a certain extent. But the deception can go many layers deep. I might grow up in a culture that believes that Mahomet is Allah's only prophet. If I belong to the vast majority that never really question this statement, then I'm not really all that self-deceiving. No doubt I should be a skeptic, but there are too many things to be skeptical about. And no brain mechanism has direct access to the truth. What might be detected is that I am a little bit *too* sincere in my protestation of faith. In sum, mechanisms for detecting self-deception in others are too costly to develop in most cases. Also, it takes time for such mechanisms to evolve successfully, for there's a race between ever more subtle means of (both other- and self-) deception and the detection of that deception.

--------------------

Why do we lie, and why are we so good at it? Because it works

By David Livingstone Smith

Deception runs like a red thread throughout all of human history. It sustains literature, from Homer's wily Odysseus to the biggest pop novels of today. Go to a movie, and odds are that the plot will revolve around deceit in some shape or form. Perhaps we find such stories so enthralling because lying pervades human life. Lying is a skill that wells up from deep within us, and we use it with abandon. As the great American observer Mark Twain wrote more than a century ago: "Everybody lies ... every day, every hour, awake, asleep, in his dreams, in his joy, in his mourning. If he keeps his tongue still his hands, his feet, his eyes, his attitude will convey deception." Deceit is fundamental to the human condition. Research supports Twain's conviction. One good example was a study conducted in 2002 by psychologist Robert S. Feldman of the University of Massachusetts Amherst. Feldman secretly videotaped students who were asked to talk with a stranger.
He later had the students analyze their tapes and tally the number of lies they had told. A whopping 60 percent admitted to lying at least once during 10 minutes of conversation, and the group averaged 2.9 untruths in that time period. The transgressions ranged from intentional exaggeration to flat-out fibs. Interestingly, men and women lied with equal frequency; however, Feldman found that women were more likely to lie to make the stranger feel good, whereas men lied most often to make themselves look better. In another study a decade earlier by David Knox and Caroline Schacht, both now at East Carolina University, 92 percent of college students confessed that they had lied to a current or previous sexual partner, which left the husband-and-wife research team wondering whether the remaining 8 percent were lying. And whereas it has long been known that men are prone to lie about the number of their sexual conquests, recent research shows that women tend to underrepresent their degree of sexual experience. When asked to fill out questionnaires on personal sexual behavior and attitudes, women wired to a dummy polygraph machine reported having had twice as many lovers as those who were not, showing that the women who were not wired were less honest. It's all too ironic that the investigators had to deceive subjects to get them to tell the truth about their lies. These references are just a few of the many examples of lying that pepper the scientific record. And yet research on deception is almost always focused on lying in the narrowest sense-literally saying things that aren't true. But our fetish extends far beyond verbal falsification. We lie by omission and through the subtleties of spin. We engage in myriad forms of nonverbal deception, too: we use makeup, hairpieces, cosmetic surgery, clothing and other forms of adornment to disguise our true appearance, and we apply artificial fragrances to misrepresent our body odors. We cry crocodile tears, fake orgasms and flash phony "have a nice day" smiles. Out-and-out verbal lies are just a small part of the vast tapestry of human deceit. The obvious question raised by all of this accounting is: Why do we lie so readily? The answer: because it works. The Homo sapiens who are best able to lie have an edge over their counterparts in a relentless struggle for the reproductive success that drives the engine of evolution. As humans, we must fit into a close-knit social system to succeed, yet our primary aim is still to look out for ourselves above all others. Lying helps. And lying to ourselves--a talent built into our brains--helps us accept our fraudulent behavior. Passport to Success If this bald truth makes any one of us feel uncomfortable, we can take some solace in knowing we are not the only species to exploit the lie. Plants and animals communicate with one another by sounds, ritualistic displays, colors, airborne chemicals and other methods, and biologists once naively assumed that the sole function of these communication systems was to transmit accurate information. But the more we have learned, the more obvious it has become that nonhuman species put a lot of effort into sending inaccurate messages. The mirror orchid, for example, displays beautiful blue blossoms that are dead ringers for female wasps. The flower also manufactures a chemical cocktail that simulates the pheromones released by females to attract mates. 
These visual and olfactory cues keep hapless male wasps on the flower long enough to ensure that a hefty load of pollen is clinging to their bodies by the time they fly off to try their luck with another orchid in disguise. Of course, the orchid does not "intend" to deceive the wasp. Its fakery is built into its physical design, because over the course of history plants that had this capability were more readily able to pass on their genes than those that did not. Other creatures deploy equally deceptive strategies. When approached by an erstwhile predator, the harmless hog-nosed snake flattens its head, spreads out a cobralike hood and, hissing menacingly, pretends to strike with maniacal aggression, all the while keeping its mouth discreetly closed. These cases and others show that nature favors deception because it provides survival advantages. The tricks become increasingly sophisticated the closer we get to Homo sapiens on the evolutionary chain. Consider an incident between Mel and Paul: Mel dug furiously with her bare hands to extract the large succulent corm from the rock-hard Ethiopian ground. It was the dry season and food was scarce. Corms are edible bulbs somewhat like onions and are a staple during these long, hard months. Little Paul sat nearby and surreptitiously observed Mel's labors. Paul's mother was out of sight; she had left him to play in the grass, but he knew she would remain within earshot in case he needed her. Just as Mel managed, with a final pull, to yank her prize out of the earth, Paul let out an ear-splitting cry that shattered the peace of the savannah. His mother rushed to him. Heart pounding and adrenaline pumping, she burst upon the scene and quickly sized up the situation: Mel had obviously harassed her darling child. Shrieking, she stormed after the bewildered Mel, who dropped the corm and fled. Paul's scheme was complete. After a furtive glance to make sure nobody was looking, he scurried over to the corm, picked up his prize and began to eat. The trick worked so well that he used it several more times before anyone wised up. The actors in this real-life drama were not people. They were Chacma baboons, described in a 1987 article by primatologists Richard W. Byrne and Andrew Whiten of the University of St. Andrews in Scotland for i magazine and later recounted in Byrne's 1995 book The Thinking Ape (Oxford University Press). In 1983 Byrne and Whiten began noticing deceptive tactics among the mountain baboons in Drakensberg, South Africa. Catarrhine primates, the group that includes the Old World monkeys, apes and ourselves, are all able to tactically dupe members of their own species. The deceptiveness is not built into their appearance, as with the mirror orchid, nor is it encapsulated in rigid behavioral routines like those of the hog-nosed snake. The primates' repertoires are calculated, flexible and exquisitely sensitive to shifting social contexts. Byrne and Whiten catalogued many such observations, and these became the basis for their celebrated Machiavellian intelligence hypothesis, which states that the extraordinary explosion of intelligence in primate evolution was prompted by the need to master ever more sophisticated forms of social trickery and manipulation. Primates had to get smart to keep up with the snowballing development of social gamesmanship. 
The Machiavellian intelligence hypothesis suggests that social complexity propelled our ancestors to become progressively more intelligent and increasingly adept at wheeling, dealing, bluffing and conniving. That means human beings are natural-born liars. And in line with other evolutionary trends, our talent for dissembling dwarfs that of our nearest relatives by several orders of magnitude. The complex choreography of social gamesmanship remains central to our lives today. The best deceivers continue to reap advantages denied to their more honest or less competent peers. Lying helps us facilitate social interactions, manipulate others and make friends. There is even a correlation between social popularity and deceptive skill. We falsify our résumés to get jobs, plagiarize essays to boost grade-point averages and pull the wool over the eyes of potential sexual partners to lure them into bed. Research shows that liars are often better able to get jobs and attract members of the opposite sex into relationships. Several years later Feldman demonstrated that the adolescents who are most popular in their schools are also better at fooling their peers. Lying continues to work. Although it would be self-defeating to lie all the time (remember the fate of the boy who cried, "Wolf!"), lying often and well remains a passport to social, professional and economic success.

Fooling Ourselves

Ironically, the primary reason we are so good at lying to others is that we are good at lying to ourselves. There is a strange asymmetry in how we apportion dishonesty. Although we are often ready to accuse others of deceiving us, we are astonishingly oblivious to our own duplicity. Experiences of being a victim of deception are burned indelibly into our memories, but our own prevarications slip off our tongues so easily that we often do not notice them for what they are. The strange phenomenon of self-deception has perplexed philosophers and psychologists for more than 2,000 years. On the face of it, the idea that a person can con oneself seems as nonsensical as cheating at solitaire or embezzling money from one's own bank account. But the paradoxical character of self-deception flows from the idea, formalized by French polymath René Descartes in the 17th century, that human minds are transparent to their owners and that introspection yields an accurate understanding of our own mental life. As natural as this perspective is to most of us, it turns out to be deeply misguided. If we hope to understand self-deception, we need to draw on a more scientifically sound conception of how the mind works. The brain comprises a number of functional systems. The system responsible for cognition--the thinking part of the brain--is somewhat distinct from the system that produces conscious experiences. The relation between the two systems can be thought of as similar to the relation between the processor and monitor of a personal computer. The work takes place in the processor; the monitor does nothing but display information the processor transfers to it. By the same token, the brain's cognitive systems do the thinking, whereas consciousness displays the information that it has received. Consciousness plays a less important role in cognition than previously expected. This general picture is supported by a great deal of experimental evidence.
Some of the most remarkable and widely discussed studies were conducted several decades ago by neuroscientist Benjamin Libet, now professor emeritus at the University of California at San Diego. In one experiment, Libet placed subjects in front of a button and a rapidly moving clock and asked them to press the button whenever they wished and to note the time, as displayed on the clock, the moment they felt an impulse to press the button. Libet also attached electrodes over the motor cortex, which controls movement, in each of his subjects to monitor the electrical tension that mounts as the brain prepares to initiate an action. He found that our brains begin to prepare for action just over a third of a second before we consciously decide to act. In other words, despite appearances, it is not the conscious mind that decides to perform an action: the decision is made unconsciously. Although our consciousness likes to take the credit (so to speak), it is merely informed of unconscious decisions after the fact. This study and others like it suggest that we are systematically deluded about the role consciousness plays in our lives. Strange as it may seem, consciousness may not do any-thing except display the results of unconscious cognition. This general model of the mind, supported by various experiments beyond Libet's, gives us exactly what we need to resolve the paradox of self-deception--at least in theory. We are able to deceive ourselves by invoking the equivalent of a cognitive filter between unconscious cognition and conscious awareness. The filter preempts information before it reaches consciousness, preventing selected thoughts from proliferating along the neural pathways to awareness. Solving the Pinocchio Problem But why would we filter information? Considered from a biological perspective, this notion presents a problem. The idea that we have an evolved tendency to deprive ourselves of information sounds wildly implausible, self-defeating and biologically disadvantageous. But once again we can find a clue from Mark Twain, who bequeathed to us an amazingly insightful explanation. "When a person cannot deceive himself," he wrote, "the chances are against his being able to deceive other people." Self-deception is advantageous because it helps us lie to others more convincingly. Concealing the truth from ourselves conceals it from others. In the early 1970s biologist Robert L. Trivers, now at Rutgers University, put scientific flesh on Twain's insight. Trivers made the case that our flair for self-deception might be a solution to an adaptive problem that repeatedly faced ancestral humans when they attempted to deceive one another. Deception can be a risky business. In the tribal, hunter-gatherer bands that were presumably the standard social environment in which our hominid ancestors lived, being caught red-handed in an act of deception could result in social ostracism or banishment from the community, to become hyena bait. Because our ancestors were socially savvy, highly intelligent primates, there came a point when they became aware of these dangers and learned to be self-conscious liars. This awareness created a brand-new problem. Uncomfortable, jittery liars are bad liars. Like Pinocchio, they give themselves away by involuntary, nonverbal behaviors. A good deal of experimental evidence indicates that humans are remarkably adept at making inferences about one another's mental states on the basis of even minimal exposure to nonverbal information. 
As Freud once commented, "No mortal can keep a secret. If his lips are silent, he chatters with his fingertips; betrayal oozes out of him at every pore." In an effort to quell our rising anxiety, we may automatically raise the pitch of our voice, blush, break out into the proverbial cold sweat, scratch our nose or make small movements with our feet as though barely squelching an impulse to flee. Alternatively, we may attempt to rigidly control the tone of our voice and, in an effort to suppress telltale stray movements, raise suspicion by our stiff, wooden bearing. In any case, we sabotage our own efforts to deceive. Nowadays a used-car salesman can hide his shifty eyes behind dark sunglasses, but this cover was not available during the Pleistocene epoch. Some other solution was required. Natural selection appears to have cracked the Pinocchio problem by endowing us with the ability to lie to ourselves. Fooling ourselves allows us to selfishly manipulate others around us while remaining conveniently innocent of our own shady agendas. If this is right, self-deception took root in the human mind as a tool for social manipulation. As Trivers noted, biologists propose that the overriding function of self-deception is the more fluid deception of others. Self-deception helps us ensnare other people more effectively. It enables us to lie sincerely, to lie without knowing that we are lying. There is no longer any need to put on an act, to pretend that we are telling the truth. Indeed, a self-deceived person is actually telling the truth to the best of his or her knowledge, and believing one's own story makes it all the more persuasive. Although Trivers's thesis is difficult to test, it has gained wide currency as the only biologically realistic explanation of self-deception as an adaptive feature of the human mind. The view also fits very well with a good deal of work on the evolutionary roots of social behavior that has been supported empirically. Of course, self-deception is not always so absolute. We are sometimes aware that we are willing dupes in our own con game, stubbornly refusing to explicitly articulate to ourselves just what we are up to. We know that the stories we tell ourselves do not jibe with our behavior, or they fail to mesh with physical signs such as a thumping heart or sweaty palms that betray our emotional states. For example, the students described earlier, who admitted their lies when watching themselves on videotape, knew they were lying at times, and most likely they did not stop themselves because they were not disturbed by this behavior. At other times, however, we are happily unaware that we are pulling the wool over our own eyes. A biological perspective helps us understand why the cognitive gears of self-deception engage so smoothly and silently. They cleverly and imperceptibly embroil us in performances that are so skillfully crafted that the act gives every indication of complete sincerity, even to the actors themselves.

From checker at panix.com Sat Jul 9 15:49:01 2005 From: checker at panix.com (Premise Checker) Date: Sat, 9 Jul 2005 11:49:01 -0400 (EDT) Subject: [Paleopsych] NYTBR: 'Garbage Land': Trash Talk Message-ID: 

'Garbage Land': Trash Talk
http://www.nytimes.com/2005/07/10/books/review/10GENZLIN.html

[First chapter appended.]

GARBAGE LAND On the Secret Trail of Trash. By Elizabeth Royte. 311 pp. Little, Brown & Company. $24.95.
By NEIL GENZLINGER Imagine a type of obsessive-compulsive disorder that leaves you unable to throw or flush something away without tracking precisely where it goes. Not just from your indoor container to the curb or trunk line; this affliction makes you unable to put your mind at rest unless you follow your castoff into the truck, the transfer station, the landfill, the scrap-metal shredder, the treatment tank. Elizabeth Royte apparently has such a disorder, but rather than (or perhaps in addition to) letting it ruin her life, she has turned it into a likable chronicle of rubbish-realization, ''Garbage Land: On the Secret Trail of Trash.'' Hers is a journey that everyone should take but few will. Put it in a class with how and where we get our gasoline, our food, our bluejeans and sneakers: best not to know the details, because not knowing allows you to not take responsibility. Royte, whose previous book, ''The Tapir's Morning Bath,'' followed researchers in the tropical rain forest, here follows an assortment of garbage collectors, recyclers and sewage treaters, beginning with the men who pick up the stuff she leaves at her curb in Brooklyn on trash day. The idea is to see how much damage she is personally doing in the grand scheme of things and how she might minimize it; to get beyond the easy plateau of environmental awareness (don't eat endangered fish) and look at, well, the outflow. ''It wasn't fair, I reasoned, to feel connected to the rest of the world only on the front end, to the waving fields of grain and the sparkling mountain streams,'' she writes. ''We needed to cop to a downstream connection as well.'' The resulting journey introduces her to a colorful collection of characters: rabid composters, paranoid dump owners, starry-eyed crusaders, even some levelheaded businessmen and -women. She encounters a fair amount of colorful vernacular as well: ''Coney Island whitefish'' (used condoms in the Gowanus Canal), ''disco rice'' (maggots), ''mongo'' (''trash'' that curbside collectors deem worth saving -- televisions, microwaves, silk blouses, designer skirts). Royte's quest to see where her discards end up hits a number of human obstacles: in parts of the waste underworld, people don't want to talk to her or let her view their landfills or plants. ''Why was it so hard to look at garbage?'' she laments at one point. ''To me, the secrecy of waste managers -- which was surely based on an aversion to accountability -- was only feeding the culture of shame that had come to surround an ordinary fact of life: throwing things away.'' Royte may have subconsciously let this stonewalling affect her: when a site does let her in for a look, she often seems to give it a free pass; her writing loses its skeptical edge and begins to sound like a report from a school field trip. Still, for the vast millions whose knowledge of waste disposal ends at the trash can and the recycling bin, any glimpse at all into this world is illuminating. Royte's lively description of a beast called the Prolerizer, a giant metal-crushing machine, makes you want to pay it a visit yourself: ''The Prolerizer has a 6,000-horsepower synchronous motor and enormous blades that can convert whole cars to fist-sized chunks of scrap in 30 to 60 seconds. . . . Cars plummeted onto the shredder's spinning rotor, which bristled with 32 bow-tie-shaped blades that weighed 300 pounds each. . . .
They were 30 inches long, and though made of a steel-manganese alloy, they lasted a mere 24 hours, such was the ferocity of their labors.'' The deeper into trash and sewage Royte gets, the more discouraging the picture becomes. Landfilled trash does not biodegrade into the ''rich, moist brown humus'' of our guilt-free fantasies; it stews for centuries, generating poisonous leachate. The whole problem of junked computers and cellphones has barely reached public consciousness, even though we're already knee-deep in electronic waste. And as for recycling, some parts of the system seem to work, but the vagaries of markets and the ever-changing array of plastics and mixed-material containers make it hit-or-miss at best; it is in large part something we do for our conscience, not our planet. Some recycling is merely a delaying tactic (mixed plastic, for instance, can be reused only once, as plastic wood or some such), and some is downright harmful (with plastic again the main culprit) because of the toxic substances the process produces. Hard-core enviro-types actually oppose plastic-recycling programs, Royte says, because they foster the belief, held even among those who fancy themselves eco-conscious, that it's all right to swig that all-natural spring water out of a plastic bottle. The true ideal, in this formulation, should be ''closed-loop recycling,'' where no new materials are coming into the system and no waste is being generated. NONE of this is news to those versed in garbology and environmental advocacy, but Royte is not writing for them. She is aiming for a more general public, and a strength of ''Garbage Land'' is that it doesn't get too preachy and is full of humor and self-deprecation. Here, for instance, is what Royte says about finding a mouse in her home composter: ''The E.P.A. has a regulation, called 40 CFR, Part 503.33, concerning 'vector attraction reduction' in soil enhancements. Obviously, I was out of compliance.'' And here is how she describes her encounter with a fertilizer made from septic sludge: ''I shook some Granulite onto my hand, just to see what holding someone else's highly processed feces felt like. It was no worse than handling raw meat, in the sense that it was so recently part of a living organism.'' She remains casual and scold-free even when she works her way around to the notion that the main thing any of us can do to reduce the waste stream is to buy less stuff. ''Garbage Land,'' though, does have a fundamental bias, one that Royte never confronts: her jumping-off point seems to be the idea that our best, highest use as human beings is to keep our ''garbage footprint'' to a minimum. That is a value judgment, because minimizing waste -- sorting trash, composting, cooking from scratch rather than relying on dinners in microwaveable dishes -- takes time, and time is a currency. Royte sounds smart; it's hard for the reader not to wonder what else she might have done with all those hours she spent washing out her used yogurt containers. Neil Genzlinger is a staff editor at The New York Times.

---------------

First chapter of 'Garbage Land'
http://www.nytimes.com/2005/07/10/books/chapters/0710-1st-royte.html

By ELIZABETH ROYTE

The Dream of Zero Waste

I had been touring San Francisco's garbage infrastructure for two days now - prowling around the city's transfer station, poking into its curbside bins, and following its garbage trucks.
My hosts were Bob Besso, who worked for Norcal, the private company with which the city contracted to pick up refuse, and Robert Haley, from the Department of the Environment. Dressed in blue jeans and sneakers, Besso had the lankiness of a marathon runner. He was in his fifties, and he'd worked in recycling for decades. His and Haley's easy-going attitude, and their penchant for plain speaking, were diametrically opposed to the formal inscrutability of New York's sanitation operatives. The best part of hanging around Besso was his competitive streak: both he and Haley were walking poster children for Zero Waste. Who could throw out less? Who had more radically altered their lifestyle to leave a smaller human stain? The Zero Waste concept was a growing global phenomenon. Much of Australia had committed to achieving the goal in 2010, and resolutions had been passed in New Zealand, Toronto, twelve Asia-Pacific nations, Ireland, Scotland, the Haut-Rhin Department in the Alsace region of France, and several California counties. So far, no community had reached this nirvana, a condition perfected only by nature. For humans to achieve zero waste, went the rhetoric, would require not only maximizing recycling and composting, but also minimizing waste, reducing consumption, ending subsidies for waste, and ensuring that products were designed to be either reused, repaired, or recycled back into nature or the marketplace. Zero Waste, said Peter Montague, director of the Environmental Research Foundation, had the potential to "motivate people to change their life styles, demand new products, and insist that corporations and governments behave in new ways." I didn't take Zero Waste literally. I considered it a guiding principle, a rallying cry for green idealists. I understood its intensive recycling component, but what about goods that simply could not be recycled? Over lunch in a Vietnamese restaurant, I learned that Zero Waste wasn't just rhetoric to Haley. "I don't have a trash can at work," he said. On his desk sat a grapefruit sized ball of used staples - ferrous scrap that he couldn't bear to throw out. "If I'm going to be a leader in Zero Waste I have to live the life," he said. I asked what affect this had on domestic harmony. "My partner is 99.9 percent with me," he said, nodding enthusiastically. "What's the one-tenth-of-a-percent problem?" "She draws the line at twist ties." "Well you know you could strip the paper from the wires and -" I interrupted myself. Haley already knew how to recycle a twist-tie. At home, he was diverting 95 percent of his waste from the landfill. The 5 percent he threw out was "manufactured goods" - recently some beyond-repair leather shoes. Worn out sneakers, of course, were mailed to Nike, which shreds rubber and foam into flooring for gyms. The company accepts non-Nike footwear too, and is also trying to tan leather without questionable toxins and developing shoes made of a new rubber compound that doubled as a biological nutrient - something that could be harmlessly returned to nature. This would be quite an improvement, since according to designer William McDonough conventional rubber soles are stabilized with lead that degrades into the atmosphere and soil as the shoe is worn. Rain sluices this lead dust into sewers, and thence into sludge bound for agricultural fields. 
According to the National Park Service, which has more than a passing interest in manmade stuff that lies around on the ground, leather shoes abandoned in the backcountry last up to fifty years (if they aren't eaten, one presumes), and rubber boot soles go another thirty. McDonough's 206-page book, Cradle to Cradle, was printed on "paper" made of plastic resins and inorganic fillers. The pages are smooth, and waterproof, and the whole thing is theoretically recyclable into other "paper" products. The book weighs one pound, four ounces. A book of comparable length printed on paper made from trees weighs an entire pound less. "What do you think of that?" I asked Haley. He nearly spit out his mouthful of curried vegetables. "McDonough's book will be landfilled! I'd rather cut down a tree!" To Haley and Bob Besso, landfilling was the ultimate evidence of failure. Avoiding the hole in the ground-which in San Francisco's case was owned by Waste Management, Norcal's archenemy-had become a game to them, albeit a game with serious consequences. Haley didn't use his paper napkin at the restaurant, and he scraped the last bit of curry from his plate. But we all knew there was waste behind his meal - in the kitchen, on the farm, in the factory that made the boxes in which his bok choy had been carted to San Francisco. I wondered if Zero Waste really meant anything, considering the limits of our recycling capability and our reluctance to alter our lifestyles. It was as dreamy an idea as cars that ran on water. And just as appealing to industry, too. "Zero Waste is a sexy way to talk about garbarage," Haley said. "It gets people excited." I considered that for a moment. Could we solve our garbarage problems by making garbarage sexy? Seeing how little I could throw out was fun for me, if not exactly sexy. I'd gotten caught up in the game, back home with my kitchen scale and Lucy's blue toboggan. I recorded my weights in a little book, I crunched my numbers, and I measured my success by how many days it took to fill a plastic grocery sack. In the months to come, I'd find people who neither lived nor worked in the Bay Area who were having fun (if not sexy fun) with garbarage reduction. Shaun Stenshol, president of Maui Recycling Service, had toyed with the idea of decreeing a Plastic Free Month, but ultimately deemed such a test too easy. Instead, he issued a Zero Waste Challenge. Over the course of four weeks, Maui residents and biodiesel users Bob and Camille Armantrout produced eighty-six pounds of waste, of which all but four (mostly dairy containers and Styrofoam from a new scanner) was recyclable. Alarmed to note that 35 percent of their weight was beer bottles, which they recycled, the Armantrouts vowed to improve. Bob ordered beer-making equipment to help reduce the amount of glass they generated, and Camille promised to start making her own yogurt. Despite these efforts, the Armantrouts didn't win the Challenge. The winner of the contest, as so often happens, was its inventor. All on his own, Stenshol had produced an even one hundred pounds of waste, of which he recycled ninety-nine pounds. Fresh Kills Landfill Paddling One of my favorite expeditions while researching Garbarage Land, though this part of the story didn't make it into the book, was kayaking around the Fresh Kills landfill, on Staten Island, with Carl Alderson, a coastal restoration specialist who works for the National Oceanic and Atmospheric Administration. 
After a delightful paddle around the dump, Alderson and I narrowly escaped arrest by a sanitation cop only to end up in very shallow water with the tide going out. ... For several yards we poled and pried, but soon the kayak was stuck for good. Our car was parked a half mile up Main Creek, but the creek had turned into a mere trickle of brown water. Alderson seemed strangely optimistic. He checked the time on his cell phone and started muttering to himself about the tide. "Okay," he said. "We can wait four hours till it turns, or try again to get upstream, or we can roll over the mud to the edge." The edge, a field of waving Spartina patens, was about 60 feet away. "How deep is the mud?" "Over your waist." I thought about that. "Have you done it before?" "Oh yeah. You've just got to keep from panicking. It's like quicksand." Alderson was standing in the stern, wind-milling his arms to generate warmth. My feet were ice blocks. "Is that the wind?" he said, his voice rising, hair fluttering heroically. "It's pushing the water back in!" It was the wind, but it wasn't delivering any more water. The afternoon was just getting colder and more dismal. At least the snow had stopped. Opening his cell phone, Alderson dialed the office of the William T. Davis Wildlife Refuge, where we'd picked up the kayak. "Hey, Linda. Could you do me a favor and check today's tide chart?" He paused. "Uh-huh, you sure of that? Okay, thanks." With a look of resignation, Alderson snapped the phone shut. He had another plan. "There's a bunch of pallets in the refuge greenhouse, maybe we can get Sam and Nate to bring them down and make a path over the mud." It seemed a little hare-brained to me - we'd need about fifty pallets - but I liked the idea of involving others. Alderson slapped the mudflat with his paddle. It quaked. The mud didn't look particularly ominous in the fading light, but I knew it was roiling with life, with the stuff that feeds the marsh's birds, fish, and mammals. There were marine worms down there, some of them voracious predators more than five inches long, and lugworms and clamworms that ate algae or detritus extracted from the sand. These organisms were tough, able to withstand a half-day of submersion, a half-day of drought, baths of incoming salt water and rinses of sewage- and leachate-tainted fresh. Alderson advised his assistants to avoid touching the mud or water. A woman planting cord grass for him once fell in up to her baseball cap and emerged with a mysterious skin condition he called "full-body pink eye." "How deep did you say the mud is?" I asked Alderson for the second time in twenty minutes. "You can't tell," he said. "It seems bottomless. The silt and organic layering have been going on for millennia. I've watched a few people go down in chest waders. It's scary to watch someone sink deeper in muck and further in panic. I've dragged a few frightened folks out in my day." That shut me up. As we waited for Sam and Nate, I thought about how this landscape had changed. In the Paleo-Indian period, between 10,000 BC and 8,000 BC, the western side of Staten Island was a much higher and dryer place. We know that Lenape Indians occupied the area because they left their tools and high middens of clam and oyster shells behind. Sometime between 8,000 and 1,000 BC, rising sea levels created vast swamps on the western side of the island, at which time Lenape settlements became larger and more permanent. 
Eventually, Europeans would grow salt hay in these marshes, and it would become Staten Island's largest cash crop. Just two hundred years ago, before the hydrology of the swamps had been altered, both Richmond and Main Creeks were navigable for more than a mile. Today, the island's biggest export was garbaragey. With a low whine, a golf cart kitted out with a forklift emerged from the dun-colored reeds. While Sam and Nate - vague figures in dark clothes-struggled with the pallets, Alderson lounged like a beer drinker in a lawn chair and offered encouraging suggestions. "Not too far apart, boys." They grunted. "So did you know we all passed the navigation course?" "Yeah, Carl," said Sam, with no affect. "But when are they teaching the course about tides?" Alderson laughed, his eyes crinkling. "I guess that's next," he said. Sam dropped the pallets onto the mud, then went back to the greenhouse for more. When the makeshift dock stretched twenty feet, Nate, a burly young man in chest waders, went to the end and strapped on a pair of mud shoes. These resembled snowshoes but were made of webbed rubber that collapses when the foot is lifted and spreads out, like a heron's foot, when it's plopped down. With his thick beard and rubber clothing, Nate looked like a vulcanized hero from the underworld. He trudged toward us in a hulking manner. In his hand was a length of frayed rope. If he had a plan, no one knew it. I watched with growing fascination as he drew nearer-slop, slop, slop. Alderson sat still. I sat still. Nate reached the boat, still silent. Now he tied his line to our bow cleat, turned around, and heaved the boat forward and up the sloping mudflat. "Wow," I said. Alderson nodded at me and smiled. Barehanded and coatless, Nate hauled on the line again and again. "Shouldn't we get out?" I asked Alderson. "Nope," he answered. Apparently, there was just enough water in the mud to lubricate our passage. It dawned on me that Alderson and the boys had been through this routine before, in exactly these positions. . . . From checker at panix.com Sat Jul 9 15:49:08 2005 From: checker at panix.com (Premise Checker) Date: Sat, 9 Jul 2005 11:49:08 -0400 (EDT) Subject: [Paleopsych] Space.com: Teleportation: Express Lane Space Travel Message-ID: Teleportation: Express Lane Space Travel http://space.com/businesstechnology/050708_teleportation.html Leonard David Senior Space Writer 5.7.8 Think Star Trek: You are here. You want to go there. It's just a matter of teleportation. Thanks to lab experiments, there is growth in the number of "beam me up" believers, but there is an equal amount of disbelief, too. Over the last few years, however, researchers have successfully teleported beams of light across a laboratory bench. Also, the quantum state of a trapped calcium ion to another calcium ion has been teleported in a controlled way. These and other experiments all make for heady and heavy reading in scientific journals. The reports would have surely found a spot on Einstein's night table. For the most part, it's an exotic amalgam of things like quantum this and quantum that, wave function, qubits and polarization, as well as uncertainty principle, excited states and entanglement. Seemingly, milking all this highbrow physics to flesh out point-to-point human teleportation is a long, long way off. Well, maybe...maybe not. 
A trillion trillion atoms

In his new book, Teleportation - The Impossible Leap, published by John Wiley & Sons, Inc., writer David Darling contends that "One way or another, teleportation is going to play a major role in all our futures. It will be a fundamental process at the heart of quantum computers, which will themselves radically change the world." Darling suggests that some form of classical teleportation and replication for inanimate objects also seems inevitable. But whether humans can make the leap, well, that remains to be seen. Teleporting a person would require a machine that isolates, appraises, and keeps track of over a trillion trillion atoms that constitute the human body, then sends that data to another locale for reassembly--and hopefully without mussing up your physical and mental makeup. "One thing is certain: if that impossible leap turns out to be merely difficult--a question of simply overcoming technical challenges--it will someday be accomplished," Darling predicts. In this regard, Darling writes that the quantum computer "is the joker in the deck, the factor that changes the rules of what is and isn't possible." Just last month, in fact, scientists at Hewlett Packard announced that they've hammered out a new tactic for creating a quantum computer - using switches of light beams rather than today's run-of-the-mill, transistor-laden devices. What's in the offing is hardware capable of making calculations billions of times faster than any silicon-based computer. Given quantum computers and the networking of these devices, Darling senses the day may not be far off for routine teleportation of individual atoms and molecules. That would lead to teleportation of macromolecules and microbes - with, perhaps, human teleportation to follow.

Space teleportation

What could teleportation do for future space endeavors? "We can see the first glimmerings of teleportation in space exploration today," said Darling, responding to questions sent via e-mail by SPACE.com to his home office near Dundee, Scotland. "Strictly speaking, teleportation is about getting from A to B without passing through the points between A and B. In other words, something dematerializes in one place, then simply rematerializes somewhere else," Darling said. Darling pointed out that the Spirit and Opportunity rovers had to get to Mars by conventional means. However, their mission and actions are controlled by commands sent from Earth. "So by beaming up instructions, we effectively complete the configuration of the spacecraft. Also, the camera eyes and other equipment of the rovers serve as vicarious extensions of our own senses. So you might say the effect is as if we had personally teleported to the Martian surface," Darling said.

Spooky action at a distance

In the future it might be possible to assemble spacecraft "on-the-spot" using local materials. "That would be a further step along the road to true teleportation," Darling added. To take this idea to its logical endpoint, Darling continued, that's when nanotechnology enters the scene. When nanotechnology is mature, an automated assembly unit could be sent to a destination. On arrival, it would build the required robot explorer from the molecular level up.
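Darling's "trillion trillion atoms" figure earlier in the piece invites a quick order-of-magnitude check of the classical data problem before the quantum alternative is considered. A minimal sketch follows; the bits-per-atom and link-speed values are our own illustrative assumptions, not numbers from the article.

    # Rough order-of-magnitude check on the data needed to describe a body.
    ATOMS = 1e24                 # "a trillion trillion" atoms, as in the text
    BITS_PER_ATOM = 100          # assumed: species, position, bonding state
    LINK_BPS = 1e12              # assumed: a 1 terabit-per-second link

    total_bits = ATOMS * BITS_PER_ATOM
    seconds = total_bits / LINK_BPS
    years = seconds / (3600 * 24 * 365)

    print(f"Data to describe one body: {total_bits:.1e} bits")   # ~1.0e+26 bits
    print(f"Transmission time at 1 Tb/s: {years:.1e} years")     # ~3.2e+06 years

On assumptions anywhere near these, simply moving the classical description dwarfs every other step of the process.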
"Bona fide quantum teleportation, as applied to space travel, would mean sending a supply of entangled particles to the target world then use what Einstein called 'spooky action at a distance' to make these particles assume the exact state of another collection of entangled particles back on Earth," Darling speculated. Doing so opens the prospect for genuinely teleporting a robot vehicle--or even an entire human crew--across interplanetary or, in the long run, across interstellar distances, Darling said. "Certainly, if it becomes possible to teleport humans," Darling said, "you can envisage people hopping to the Moon or to other parts of the solar system, as quickly and as easily as we move data around the Internet today." UFO connection? If indeed we are to become a space teleporting civilization, what about other advanced civilizations circling distant stars? Perhaps they have already mastered mass transportation via teleportation? One might even be drawn to consider that mode of travel in connection with purported UFO visitation of Earth. "Any strange comings and goings are candidates for teleportation, although you would obviously have to eliminate all mundane explanations first," Darling responded. "According to reports, some UFOs do appear and disappear quite abruptly, which would fit in with the basic idea of teleportation," he said. Darling said that interstellar teleportation would be one way to circumvent the light barrier, "although, as we understand the process now, you would need to make a sub-light trip first to set up the teleportation receiver and assembler at the destination." Quantum teleportation, Darling pointed out is the kind we can do at the subatomic level in the lab today. And that requires equipment at both ends to be able to work. "Extraterrestrial intelligence that is thousands or millions of years ahead of us will certainly be teleportation experts," Darling advised, "if the technology can be implemented at the macroscopic biological level." What possible outcome, then, from ET successfully tinkering with teleportation? "We might expect advanced aliens to be occasionally beaming in to check on our progress as a species," Darling concluded. From checker at panix.com Sat Jul 9 15:50:20 2005 From: checker at panix.com (Premise Checker) Date: Sat, 9 Jul 2005 11:50:20 -0400 (EDT) Subject: [Paleopsych] The Australian: Christopher Pearson: No future in eternity Message-ID: Christopher Pearson: No future in eternity http://www.theaustralian.news.com.au/printpage/0,5942,15863203,00.html 5.7.9 I SUPPOSE most people have sometime or other toyed with the fantasy of eternal youth and health. Damien Broderick, a science contributor with The Australian, has turned it into a magnificent obsession. In his futurological books The Spike: Accelerating into the Unimaginable Future and The Last Mortal Generation: How Science Will Alter Our Lives in the 21st Century, he has seriously canvassed the chances that immortality is at hand. In last week's The Weekend Australian Review he was at it again, reviewing a new work by Ray Kurzweil and Terry Grossman entitled Fantastic Voyage: Live Long Enough to Live Forever. Surveying the latest evidence, Broderick is up-beat. "It seems likely that powerful research programs will let us first slow, then halt, the leading causes of death - heart disease, cancer, stroke, infections - then, perhaps, reverse ageing, that slow terrible corrosion of our youthful flesh and lively minds." How can this be? 
"Knowledge is doubling and deepening at a prodigious rate, and even that rate is accelerating ... some of those alive now may thrive indefinitely, kept youthful by the same recuperative processes that build brand-new babies from ageing sperm and ova." Fine and dandy for the fortunate young, you may be thinking, but what about the rest of us? Are we the last to feed the worms or crematorial fires? "Perhaps not, if a kind of maintenance engineering can be applied to our ailing bodies. The remedy may be complicated: genomic profiling, pills, supplements, stringent diet, more exercise than we care for ... In the slightly longer term, our bodies may be infused with swarms of machines not much larger than viruses, nanobots designed to scavenge wastes and repair tissue damage at the scale of cells." Broderick envisages a future in which "every human will have the choice of staying healthily young indefinitely or of stepping aside, if they choose, to make room for a new life, assuming, of course that we linger on this planet and that we remain strictly human". > From a futurologist's perspective, inter-planetary emigration is probably neither here nor there. However, an attenuated relationship with the strictly human does raise philosophical problems. Broderick is a techno-triumphalist; tomorrow belongs to him. "No doubt the arguments will continue for generations until all those opposed to endless life have died." If his confidence is warranted, it's surprising that there hasn't been more of a fuss made about such startling developments. Admittedly you can go to the Immortality Institute's website or log on to the World Transhumanist Association, but so far not a peep out of the federal Government. Are they just trying, yet again, "to underpromise and deliver in spades" as John Howard is wont to say? Usually voluble sources were tight-lipped, so I decided to try thinking like a futurologist. Supposing immortality were technically feasible, how would people avail themselves of the opportunity? First World economics suggests that they'd have to pay for it and that, like any scarce resource, it would be rationed by price. Initially the capital cost would be astronomical and keep eternal youth as the preserve of the very rich and, no doubt, their pets. If electoral pressure -- and occasional riots -- obliged the G8 governments to pour endless public funding into nanobot research, cryogenics and cloning, the unit cost would fall. But even if immortality became a national health service item, there would still be tricky distributional issues. For example, someone would have to make decisions about who was least likely to benefit from treatment and explain why they'd, as it were, missed the bus. Then again, think of the recriminations from the Third World, unless the elixir of life were made freely available and as UN cant puts it: "Within a socially acceptable time frame." Or forget about the recriminations and think instead about a rogue state or a terrorist organisation getting a nuclear weapon. How easy to hold the life-enhanced (but by no means indestructible) populations of the developed world to ransom: the slogan would be immortality for all or for none. Even if enlightened self-interest triumphed, in an orderly transition to a post-mortal world, there would still be pesky economic issues to sort out. What, for example, happens to countries where huge amounts of capital are diverted from other kinds of productive investment into a bottomless pit of human resource development? 
In a society where those entering into immortality spend most of their time at the gym or taking (on Broderick's reckoning) 250 pills a day, who does the work and prepares the food? After time and tide have borne away the last mortal cohort, there'd be an end to the transfers of inherited capital that previously helped keep the wheels of industry and speculative enterprise turning. For fear of running short, business and investors would become highly risk-averse. While some optimists might reckon that there's always time to make more money, most of us would be playing it safe and hoarding or saving up for planetary migration and to fund the next generation of life-enhancers. Talking of the next generation, reproduction as we have known it would lose any sense of urgency. The notion of immortality through progeny and the survival of one's genes would fade away. Indeed, given the amount of time that would have to be devoted to personal regeneration, it would be surprising if people had any left over to devote to parenting. Besides, the zero population growth lobby and the greens would doubtless be arguing that there's no more room, at least on this over-crowded continent. Presumably, in the transition period, adopting Third World babies would be permitted. It might also be possible - borrowing the model of carbon emissions trading - to buy the reproductive entitlements of adults who'd been talked into renouncing their access to immortality. Forward-thinking regimes such as China's might well set up a market in the reproductive rights of long-term prisoners and those condemned to death, to cover administrative costs and so forth and to complement the existing trade in body parts. Futurologists seldom take much notice of scarcity economics and they're apt to assume technological progress means abundance for all. It hasn't so far, of course, and -- if scarce resources meant rationing the right to reproduce -- we would all be in terrible trouble. For it is the experience of parenthood that most effectively teaches us, men especially, the lessons of selflessness. That hard-wired capacity for unconditional love of helpless offspring turns self-preoccupied adolescents into adults almost overnight. Without parenthood, the race would become spoiled and go to rack and ruin. It is, I suppose, just conceivable that Broderick may be right about the theoretical possibility of indefinitely prolonged life. However, human nature is less malleable than human physiology and ill-adapted to immortality's challenges. I also have my doubts about whether, if offered the everlasting option, all that many of us would take it. After all, well-adjusted people tend to develop a serene acceptance of finitude. Then again, the sense of an ending is all that makes some lives, especially very long ones, bearable in the meantime. Robert Louis Stevenson's popular Requiem captures the sense of a welcome end: Under the wide and starry sky Dig the grave and let me lie. Glad did I live and gladly die, And I laid me down with a will. This be the verse you grave for me: Here he lies where he longed to be, Home is the sailor, home from sea, And the hunter home from the hill. From checker at panix.com Sat Jul 9 15:51:28 2005 From: checker at panix.com (Premise Checker) Date: Sat, 9 Jul 2005 11:51:28 -0400 (EDT) Subject: [Paleopsych] NYTBR: 'Hot Property': Freebooters of Industry Message-ID: 'Hot Property': Freebooters of Industry http://www.nytimes.com/2005/07/10/books/review/10LINDL.html [First chapter appended.] 
HOT PROPERTY The Stealing of Ideas in an Age of Globalization. By Pat Choate. 352 pp. Alfred A. Knopf. $26.95. By MICHAEL LIND IN recent years a series of reports have provided evidence about the erosion of America's scientific and industrial base. But ''strikingly,'' as Pat Choate observes in ''Hot Property: The Stealing of Ideas in an Age of Globalization,'' ''the massive theft of U.S.-owned intellectual properties as a contributing cause to America's technological decline has been almost totally overlooked in these reports.'' In this timely and important book, Choate sounds the alarm about the threat posed by such piracy. Choate, who is best known as Ross Perot's vice-presidential candidate in 1996, has sounded alarms before. When he published ''Agents of Influence,'' his study of Washington lobbyists funded by the Japanese government and Japanese corporations, he was denounced as a Japan-basher. Today the assertion that Japan has practiced result-oriented mercantilism rather than free trade for decades is rarely disputed. When Choate, like Perot, warned that as a result of the North American Free Trade Agreement, American corporations would move their factories to Mexico to take advantage of low labor costs, he was portrayed as a Mexico-basher. Bill Clinton and Al Gore argued that Mexico would be a market for American manufactured goods. Instead, according to the economist Charles McMillion: ''The large U.S. net export losses to Mexico since Nafta are concentrated in autos, machinery, electronics, apparel and furniture. U.S. net export gains are largely in agribusiness and bulk commodities such as cereals and organic chemicals.'' Score two out of two for Choate. In his latest campaign, Choate is likely to find allies in the business community -- and opponents among some champions of developing nations, as well as some libertarians who argue for weakening or eliminating intellectual property rights. Choate quotes the definition of ''copyright industries'' used by the International Intellectual Property Alliance: ''music and book publishing, radio and television broadcasting, cable television, newspapers and periodicals, records and tapes, motion pictures, theatrical productions, advertising, computer software and data processing'' -- to which others, like pharmaceuticals, can be added. According to the intellectual property alliance, ''worldwide digital piracy costs America's copyright industry $20 billion to $22 billion annually, and that approximation excludes illegal Internet downloads.'' Patents, copyrights, trademarks and other intellectual property rights are useless if governments do not enforce them against thieves. However, officials in many developing nations are more concerned about promoting national economic growth by disseminating know-how than with protecting the rights of foreigners. Choate tells the story of Haima, a Chinese corporation that is the largest woven-carpet manufacturer in Asia. After Milliken & Company, the largest private textile company in the United States, lost a Chinese contract to Haima, ''Milliken personnel obtained a copy of Haima's 1999 carpet catalog. It featured 16 copyrighted Milliken designs.'' Although an American court ordered Haima to pay Milliken more than $4 million, the American company has been unable to collect. Choate acknowledges that as an industrializing nation in the 19th century the United States engaged in many of the practices that it condemns today, including industrial espionage. 
He recounts the tales of Samuel Slater, who brought secret British spinning machine technology to the United States, and Francis Cabot Lowell, a Boston patrician who used his photographic memory to steal the trade secrets of British textile manufacturers. ''The most important feature of the Patent Act of 1793,'' Choate writes, ''was what it did not provide: protections for foreign inventors. Only American citizens were eligible for a U.S. patent. Thus, any American could bring a foreign innovation to the United States and commercialize the idea, all with total legal immunity.'' In later generations, Germany and Japan similarly manipulated intellectual property rights. In the 1990's, concern about the theft of intellectual property inspired the United States to promote the Trade Related Intellectual Property System. ''Ironically,'' Choate observes, ''after leading the long, historic fight to put these global protections into place, Washington is now strangely unwilling to use them.'' He notes that ''since June 2000, the U.S. has not filed a single intellectual property case at the World Trade Organization.'' Choate contrasts the attention that the Clinton administration paid to these issues with the lack of interest the Bush administration has displayed. The difference may be one of constituencies. ''Copyright industries'' like the movie and technology sectors provide much of the financial support for the Democratic Party, and they are far more threatened by intellectual property violations than are the commodity producers of the Republican red states. This interpretation finds support in Choate's data on the World Trade Organization: ''Overall, the Bush administration filed only 12 cases with the W.T.O. during its first four years in office. Six of those dealt with foreign impediments to U.S. agricultural exports -- beef, rice, genetically enhanced foods, corn, wheat, cheeses, dairy products and apples.'' Choate says that while industrializing countries may benefit from piracy, the world as a whole loses. ''Piracy and counterfeiting impede innovation: thieves do not invest in research, design, production, development or advertising. . . . The result is fewer new medicines, fewer advances in science, fewer new products, fewer new music CD's, fewer new movies, less new software and higher prices for whatever is created.'' Everyone is harmed, either directly or indirectly, ''when thieves steal from Microsoft and Disney.'' And, he concludes, ''What is missing is the will of U.S. political leaders to confront those who are stealing U.S.-owned intellectual properties and with them the future of the American people.'' Michael Lind is the Whitehead senior fellow at the New America Foundation in Washington. ---------------- First chapter of 'Hot Property' http://www.nytimes.com/2005/07/10/books/chapters/0710-1st-choate.html By PAT CHOATE The Golden Covenant Manhattan's 30,000 citizens were awakened on the morning of April 30, 1789, by the roar of cannons. But this day the gunfire was not for war, but to celebrate George Washington's inauguration as the first president of the United States. Soon after 10:30 a.m., the president-elect, led by a joint congressional committee, appeared in Lower Manhattan at Federal Hall (formerly City Hall), which was serving as the new nation's temporary capitol. Washington was dressed in a brown suit of homespun broadcloth-a gift from the Hartford Woolen Manufactory, a small mill in Connecticut. 
Before the Revolutionary War, this wealthy Virginia planter had had his suits made of silk and velvet by London's finest tailors. But now he wore a simple American-made suit -- his personal gesture of support for domestic manufacturing. Yet the new president's appearance was far from drab. His suit was adorned with brass buttons embossed with the new national symbol, the bald eagle, and his cuffs had a row of studs, each marked with thirteen stars, symbolizing the founding states. Washington's overture was widely noted in the nation's newspapers, which reported that everything he wore that day had been made in the United States. George Washington's support of domestic manufacturing was not some passing political sop to a special interest group. Rather, his position had been forged by eight hard years of Revolutionary War experiences and huge debts to European suppliers and financiers. Several times, Washington's army almost lost the war because ammunition was in short supply. In the first year, soldiers often went into battle with no more than nine cartridges each. At the battle of Bunker Hill, the Americans quickly ran out of ammunition, finishing the fight by clubbing the English troops with the butt ends of their muskets. Thousands of Washington's troops spent the winter of 1777-78 at Valley Forge, Pennsylvania, with no shoes for their feet, few clothes, and not enough blankets to keep out the cold. In a letter dated December 23, 1777, a desperate Washington wrote to the Continental Congress that he had "no less than two thousand eight hundred and ninety-nine men in camp unfit for duty, because they are barefoot and otherwise naked." From the beginning of the war, Washington's army lacked guns, gunpowder, rope, sails, shoes, and clothes, among many other military necessities, largely because Great Britain had long prohibited most manufacturing in its American colonies. Instead, the mother country restricted colonial production to timber, furs, minerals, and agricultural goods. Thus, the U.S. economy was overwhelmingly agricultural when war came, with more than 94 percent of the population living on farms. After independence was declared, the new nation had to buy its war matériel from the Dutch, French, and other European suppliers, and do that largely on credit. Any nation that sold goods to the American colonials risked a conflict with Britain, then the world's foremost military power. And when British leaders said they would hang any of the revolutionary leaders they captured, the threat was real, making government service a bit riskier than it is today. In late 1776, a distressed Continental Congress sent Benjamin Franklin, the best-known American, to Paris to seek French support and goods. His list of purchases in 1777 illustrates just how little manufacturing capacity America had. He bought 80,000 shirts, 80,000 blankets, 100 tons of powder, 100 tons of saltpeter, 8 ships of the line, muskets, and 100 fieldpieces. Then Franklin arranged for smugglers to carry the goods across the Atlantic Ocean in a 4,000-mile, three-month journey to St. Eustatius, a Dutch island in the Caribbean, where smugglers received the supplies and slipped them through the British naval blockade and into the colonies, a 1,400-mile trip that consumed another five to six weeks. For eight years, Washington and the Continental Congress struggled to obtain enough materials for their troops. By war's end, the need for U.S. military and industrial self-sufficiency was seared into their consciousness.
For Washington, wearing a plain brown suit of American-made broadcloth on Inauguration Day was a small sacrifice that sent a large message to his fellow citizens. Before taking office, Washington informed Thomas Jefferson, the man who would soon be secretary of state, that the development of manufacturing and inland navigation would be his greatest concern as president. As the historian Doron S. Ben-Atar reveals in his 2004 book Trade Secrets, Washington was a strong proponent of importing European technicians, and in his first State of the Union message, he also encouraged the introduction of foreign technology. In his many speeches, Washington "voiced the widespread expectation that the federal government would devote its energies to industrial development." After assuming the presidency, Washington and the Congress moved quickly to reduce America's dependence on other nations for its national security needs. Action was imperative, because as the Revolution's leaders had seen, today's allies often become tomorrow's enemies. In that quest for self-sufficiency, Washington turned to Alexander Hamilton, a loyal, brave, and brilliant aide who had led a bayonet attack at Yorktown. Far more foresighted than most of his contemporaries, Hamilton envisioned an economic and political structure for a post-Revolution America. When Washington appointed him secretary of the Treasury, Hamilton was ready with recommendations. In January 1790, he presented Washington and Congress a white paper titled "Report on Public Credit," which outlined the actions necessary to make the new nation appear creditworthy to foreign investors, including a controversial recommendation to pay off all the state debts incurred during the Revolution. At almost the same time as it received Hamilton's credit report, Congress ordered him to prepare a report on manufactures that would "render the United States, independent on foreign nations, for military and other essential supplies." On December 5, 1791, Hamilton submitted to Congress his "Report on Manufactures," which outlined why and how the United States could achieve economic equality with Europe and an industrial self-sufficiency. Building a strong U.S. industrial base, he wrote, " 'tis the next great work to be accomplished." To become a true equal of Europe, Hamilton proposed that the United States follow Europe's lead and erect a tariff wall behind which the American market could develop and American manufactures could prosper. This, he argued, was the only way to confront Europe's manufacturing subsidies, its high tariffs on U.S. imports, and its repeated pattern of dumping goods at artificially low prices in the U.S. market to kill America's infant industries. Without his proposed actions, American manufacturers could never compete fairly, either in Europe or in their own domestic market, Hamilton reasoned. Behind this tariff wall, the government could provide the protections of a strong patent system, giving inventors and investors a government-guaranteed right to the exclusive use of their innovations for a fixed period. To accelerate national development, Hamilton also wanted to encourage the migration of skilled foreign workers to America. They would bring badly needed abilities and state-of-the-art technology to the new nation. 
In his report, Hamilton commented favorably on the actions of Samuel Slater, a twenty-one-year-old mechanic who in 1789 had slipped out of England with one of the British textile industry's crown jewels: the secret of how to build and operate a machine that could spin cotton and wool into thread. Hamilton's message to potential immigrants was loud and clear: bring your nation's industrial secrets to America, gain citizenship, get a patent, be honored, and become wealthy. One irony of the American Revolution is that most of its leaders were Anglophiles. In the French and Indian Wars, Washington sought a regular commission in the British army but was rejected because of his colonial status. Franklin was the delight of London society until he defended the colonists' rights. And in the years leading up to the Declaration of Independence, Jefferson, Madison, and Monroe, among other revolutionary leaders, thought of themselves as loyal British citizens and sought a course that would allow the colonies to remain a part of Britain. Even after the Revolutionary War, with all the bitterness it generated, many English traditions and assumptions remained embedded in the hearts and minds of Americans. One of those fundamental notions was that patent and copyright protections encouraged innovation and national development. The appeal of those ideas is understandable, in part because they had an extended history. By the late 1700s, Britain had the longest continuous patent tradition in the world, one whose origins traced back to 1449, when Henry VI issued John of Utynam a letter patent (an open letter with the king's seal) granting the Flemish glassmaker a twenty-year monopoly on the process that produced the windows at Eton College. In exchange, the foreign glassmaker was required to teach English artisans his process. As former subjects of the English king, the newly minted Americans were familiar with the doctrine of the public interest, as incorporated into Britain's Statute of Monopolies (1624). It gave a fourteen-year monopoly to "the true and first inventor" of new manufactures-a law in effect for more than 150 years before the American Revolution. Likewise, the colonists were familiar with Britain's copyright law, the Statute of Anne, which was enacted in 1710. Under that act, the monopoly power of publishers was weakened and the rights of authors of new works were strengthened with copyright protection for fourteen years, with the possibility of a fourteen-year renewal. And while the Statute of Monopolies did not apply in the colonies, the various colonial governments enacted patent laws that imitated it. After independence and before the ratification of the U.S. Constitution, twelve of the thirteen colonies enacted copyright laws based on the Statute of Anne. For the leaders of the new nation, the basic concept was simple: patents and copyrights encouraged inventors and authors to produce more new and useful creations. These innovations could help the U.S. progress. And as the details of these creations became public, the general knowledge of the nation would be expanded. The process as a whole could only make life better for most Americans and would help the new nation grow richer and stronger faster. The concept was so fundamental that the Founding Fathers integrated it into the Constitution, believing that the public good fully coincided with the claims of individual authors and inventors. 
When the "authors and inventors clause" (sometimes called the "progress clause"), drafted by James Madison and Charles Pinckney, was presented for consideration at the Constitutional Convention on September 5, 1787, there was no debate and not a single dissenting vote. Creating a working system of patents and copyrights was a top priority for George Washington. In his first State of the Union message (January 8, 1790), he recommended that Congress enact legislation to encourage the introduction of new inventions from abroad and foster their creation domestically. Congress acted quickly, and the president signed the first Patent Act into law on April 10, 1790, and the first Copyright Act less than two months later, on May 31, 1790. The Patent Act made the issuance of a patent a matter of the highest importance-a function administered by the president and three senior cabinet officers. There was no patent office. Rather, a patent petition was submitted directly to Secretary of State Thomas Jefferson. Then Secretary of War Henry Knox and Attorney General Edmund Randolph reviewed it. These three constituted a patent board. They established strict rules for obtaining a patent, and on the last Saturday of every month, they met to review applications. If two of the three approved, a patent letter was prepared for the personal signature of President Washington, who then sent it back to Jefferson who, as secretary of state, also signed the letter and then had the Great Seal of the United States affixed. The patentee then had a fourteen-year period during which to exclude others from using the creation. The total cost was roughly $5, which went not to the Treasury but to the clerks who copied and processed the paperwork. Those early patent grants are greatly valued today for their historic signatures. Jefferson was surprised by the number of innovations inspired by the prospect of a patent. Soon after passage of the 1790 act, more applications and models of inventions were appearing at his office than he and his two colleagues could handle. As often happens with something new in government, the first patent act was a false start, and Jefferson knew it. He urged Congress to alter the "whole train of business and put it on a more easy footing." To that end, he drafted legislation and sent it to his congressional allies in February 1791. Jefferson's escape from the patent board, however, was delayed for more than a year as Congress repeatedly postponed any vote on his or any other patent reform proposal. Meanwhile, the board was obligated to carry out its duties. In 1792 Jefferson wrote his old friend Congressman Hugh Williamson of North Carolina that of all the duties ever imposed on him, reviewing patent applications consumed his time the most and gave him the most "poignant mortification." By early 1793, only 57 patents had been issued and 114 applications were pending, while dozens of others had been denied. Inventors hated the system; it delayed consideration of their applications and imposed such scrutiny that for every one approved, another was denied. The board abhorred the process because it had neither the time nor the resources to meet its obligations. Eventually, Congress enacted the Patent Act of 1793, without most of Jefferson's recommendations. What emerged was legislation that sharply changed the patent system from one with strict rules to one with virtually no rules. Congress allowed inventors to register their inventions with the State Department without an examination. 
The courts were assigned the responsibility of sorting out which patents were legitimate and which were not. Not surprisingly, with such lax rules the number of applications and issuances rose. Between 1793 and 1836, when the patent laws were next altered, more than 9,500 patents were issued. In such a lenient environment, piracy flourished. Many applicants went to the State Department, where models of inventions were found, bought a copy of a patent, duplicated it, and then filed an application for the same invention. Often, the same idea was patented multiple times. The owners of the later grants would enter business, telling others they had the exclusive use of an innovation, or take the official documents to unsuspecting licensees and investors for money. In other situations, an inventor would create an innovation, unaware of the advances of others, secure a patent, and sincerely believe that the conception was his alone. The result was a patent holder's nightmare and a lawyer's dream. The courts were soon clogged with lawsuits. In the end, the most important feature of the Patent Act of 1793 was what it did not provide: protections for foreign inventors. Only American citizens were eligible for a U.S. patent. Thus, any American could bring a foreign innovation to the United States and commercialize the idea, all with total legal immunity. . . . From checker at panix.com Sat Jul 9 15:51:33 2005 From: checker at panix.com (Premise Checker) Date: Sat, 9 Jul 2005 11:51:33 -0400 (EDT) Subject: [Paleopsych] AP: Gene hunters flock to Amish country Message-ID: Gene hunters flock to Amish country http://www.globetechnology.com/servlet/story/RTGAM.20050627.gtamishjun27/BNPrint/Technology/target=?mainhub=GT 5.7.6 By PAUL ELIAS Associated Press STRASBURG, Pa. -- Smack dab in the middle of a central Pennsylvanian cornfield, in the heart of an Amish culture that typically shuns technology, sits a marvel of genetic medicine and science. The building itself, a tidy clapboard structure, was raised by hand, rope and horse in the Amish way 16 years ago. Upstairs, is the Clinic for Special Children. Downstairs houses the Amish Research Clinic. The clinic has played a role in numerous significant discoveries by expert gene hunters, from diabetes breakthroughs to unlocking some of the mysteries behind sudden infant death syndrome. The gene hunters, who come from far and wide, spend countless hours rooting through a rich genetic trove that only an insular genetic pool like the Amish can offer. To the Amish, many of whom travel the few dozen miles or so from their homes by horse and buggy, the clinic has been heaven sent. It very often saves their children, who are disproportionately afflicted by rare and sometimes fatal genetic-based diseases because of 200 years of inbreeding. "It's weird and it's wonderful," said Terry Sharrer, medical curator of the Smithsonian Institution in Washington D.C. "I have never seen anything like this." The children's clinic is the creation and life's work of Dr. Holmes Morton and his wife Caroline. The Harvard-educated couple surprised colleagues and friends in 1987 when they announced they were giving up prestigious urban posts in Philadelphia, packing up the family and starting a new life among the Amish and Mennonite religious sects. It's a place where the laundry of plain clothes flaps in the breeze and barefoot children in smocks and straw hats run around homes shared and passed down by multiple generations. 
Road signs warn drivers to share the road with the horses and buggies. Morton hasn't regretted the move. "We discover a new gene almost weekly," he said. Isolated populations with homogenous genes such as the Amish in central Pennsylvania, the Ashkenazi Jews and Indian tribes offer genetic researchers unparalleled insight into disease and genetics. These closed populations, whether by geography or religion, were created by just a few families -- called the "founder effect" -- and built on generations of inbreeding. The Amish have higher rates of inherited disease caused by bad, recessive genes that are diluted in the general population but remain captive in closed societies. That increases the odds that distant relatives that are each carriers of a rare disorder will marry and produce afflicted children. Since the human genome was mapped five years ago, the genetic discoveries are coming fast and furious in Strasburg. The advent of the increasingly powerful gene chips, which enable researchers to experiment with thousands of genes simultaneously, have also advanced Morton's work. Morton estimates that he's uncovered about 150 genes implicated in various diseases, most of them found in the last few years. Last year he found a gene implicated in sudden infant death syndrome. He's also uncovered a genetic cause for the malady maple syrup urine disease, so-called because the victim's urine is sweet smelling. It's a rare enzyme deficiency that if left untreated, as it was for many years in the Amish community, will lead to mental retardation. Through a severe diet that excludes meat, eggs and milk -- and constant vigilance -- Morton can keep the disease in check. Much of Morton's funding is raised by community auctions that sell quilts, furniture and baked goods made and donated by the Amish. Downstairs, Dr. Alan Shuldiner and his colleagues at the Amish Research Clinic are armed with $10-million (U.S.) in National Institutes of Health grants to conduct a dozen different large-scale studies of the Amish, including diabetes, heart and longevity studies. Shuldiner, also a researcher at the University of Maryland School of Medicine, says his lab has drawn the blood of 3,000 of the 30,000 Amish who live in the area. Shuldiner opened his lab in 1995 after spending a year working out of his car. He initially befriended an Amish woman who had children with diabetes. She served as his liaison to a community skittish of outsiders. When he moved into the special children's building, he said his credibility among the Amish was cemented. "This building is really a pillar of the Amish community," he said. Mary Morrisey, a nurse in Shuldiner's lab, spends most days whipping around the back roads of Lancaster County in her minivan on a mission to enroll 1,000 Amish. The aim is to uncover genetic causes of heart disease. In two years, the lab has enrolled nearly 600 volunteers -- a testament to how massive the undertaking is. On Wednesday, Morrisey spent two hours at the kitchen table of one family's house, drawing blood and explaining the intricacies of the study to the pair, who are in their mid-60s who have nine children and 54 grandchildren. The screen door was constantly slamming as barefoot kids frolicked about the house, the younger ones fretting about needles being stuck into their grandparents' arms. Grandma soothingly reassured them in the Pennsylvania Dutch they use with each other. 
For a Luddite community that by and large quits school after the eighth grade, the Amish are well-informed about the technological breakthroughs their blood contains. They view their participation with the "English" scientists as in keeping with the tenets of their branch of Christianity, which demands they help their fellow man. "I wouldn't know why not," the woman responded when Morrisey asked her to join the study. "It could help our family -- and help others." The couple had participated in Shuldiner's initial diabetes study several years ago. "I think we're considered vampires," Morrisey joked. "All we want is their blood. They instinctively roll up their sleeves every time they see me." From anonymous_animus at yahoo.com Sat Jul 9 18:41:35 2005 From: anonymous_animus at yahoo.com (Michael Christopher) Date: Sat, 9 Jul 2005 11:41:35 -0700 (PDT) Subject: [Paleopsych] Iraq war In-Reply-To: <200507091800.j69I08R16594@tick.javien.com> Message-ID: <20050709184135.73133.qmail@web30810.mail.mud.yahoo.com> >>If these bombers are Iraqi nationalists, there is hope that our departure from Iraq will cool the conflict. The thought of regional terrorists in the Middle East is indeed terrifying.<< --It's obviously a mixture, which brings up the question: will foreign terrorists transform Iraqi nationalists who would otherwise fight conventionally into terrorists who believe killing civilians is legitimate warfare? In the current situation, anyone who proves himself to be capable of killing US troops is likely to gain respect, but every bombing that targets Iraqis will most likely decrease respect for foreign terrorists. Has the number of Iraqi suicide bombers (targeting civilians rather than soldiers) increased since Zarqawi set up shop? Conventional fighters who target soldiers would likely taper off their efforts when US troops leave and Iraqis take up the job of security, allowing Sunnis and borderline resistance members to take part in government without losing face (many would feel that cooperating with a US-sponsored Iraqi government would be humiliating and an admission of defeat... removing US troops would remove that motive as well). But foreign terrorists aren't going to leave unless Iraqis as a people stand against them. That becomes more likely every time Iraqis are targeted by terrorists, so the logical strategy for the US is to stay until Iraqis are known to be solidly against the presence of foreign terrorists and Iraqi nationalists begin to separate themselves from foreign terrorists by denouncing attacks on civilians. Any Iraqi resistance fighter who feels it's dishonorable to kill civilians is going to be less and less comfortable being associated with Zarqawi's terrorists, more likely to develop rifts with foreign organizers and more likely to leak information leading to the capture of Zarqawi and other terrorist leaders. That pretty much sets up a series of events leading naturally to the war's end. Along with a global denunciation of terrorism by mainstream Muslims and renewed focus on the Israeli-Palestinian partition process, the general trend should be positive, as long as the US and Iran don't provoke one another into another war in the meantime. Bush will watch to see if Iranians stand up to their hardliners or bide their time thinking the US will intervene. With nobody in either party willing to impose a draft, an invasion of Iran is unlikely, leaving air strikes by the US or Israel as the only option. 
Would an air strike against Iran's government targets or nuclear facilities produce a larger pool of global terrorists seeking nuclear or biological weapons? Or would Iranian moderates take over immediately? Or both? One assumption that should be eliminated is that there is some fixed number of terrorists, and that it's a good thing to draw them all into Iraq to fight them on their own ground. That logic makes sense at first glance, but it's based on an assumption that's not safe to make. But regardless of how one analyzes the overall situation, the immediate solution to Iraq is for the US to stay until there is good reason to believe Iraqi moderates can establish security and prevent foreign terrorists from gaining influence and using Iraq as a training ground. Once Zarqawi is captured or killed and Iraqi resistance members begin shunning foreign terrorists, it will be a lot easier for the US to leave, and attention can be focused on Iran. I don't really trust the current administration to handle it gracefully, but we have what we have. Hopefully enough systems thinkers will focus on geopolitics to provide a counterweight to the gung-ho mentality that will want to rely on forceful moves that may backfire in the long term. We've tended to rely on those moves in the past, doing whatever seemed strongest in the short term, like a beginner chess player who takes every piece that's offered. The end result is messy. Strong moves made hastily can add up to a weak foundation. Michael
From checker at panix.com Sun Jul 10 15:59:07 2005 From: checker at panix.com (Premise Checker) Date: Sun, 10 Jul 2005 11:59:07 -0400 (EDT) Subject: [Paleopsych] NYT Mag: Will Any Organ Do? Message-ID: Will Any Organ Do? New York Times Magazine, 5.7/10 http://www.nytimes.com/2005/07/10/magazine/10ORGANS.html By GRETCHEN REYNOLDS Last summer at one hospital in Dallas, four people died from rabies, an unheard-of level of incidence of this rare disease. As it turned out, each patient was infected by an organ or tissue -- a kidney, a liver, an artery -- that he or she received in a transplant several weeks earlier. Their shared donor, William Beed Jr., a young brain-dead man, had rabies, caught apparently through a bite from a rabid bat, something the surgeons never suspected. They all thought he had suffered a fatal crack-cocaine overdose, which can produce symptoms similar to those of rabies. ''We had an explanation for his condition,'' says Dr. Goran Klintmalm, a surgeon who oversees transplantation at Baylor University Medical Center, where the transplants occurred. ''He'd recently smoked crack cocaine. He'd hemorrhaged around the brain. He'd died. That was all we needed to know.'' Since the rabies deaths, recriminations have flown, procedural reviews have begun and sorrow and regret have dogged the families of the organ recipients. But the outbreak also exposed a controversy that until then was roiling only the rarefied world of transplant specialists. The issue, although freighted with monetary and bio-ethical complexities, can be boiled down to one deceptively simple question. Should transplant surgeons be using organs from nearly anyone? Organ transplanting has become, in fundamental ways, a victim of its own success. Not long ago, transplant surgery was a dodgy, last-ditch response to end-stage kidney failure.
But with the advent of better antirejection drugs and surgical techniques, transplantation has become the treatment of choice for a growing range of conditions, including chronic kidney failure, end-stage lung or liver disease and some congestive heart failure. Kidneys are implanted routinely, as are increasing numbers of livers, hearts and pancreases. Fifteen years ago, about 20,000 people in the United States were on waiting lists for organs. Today, about 88,000 are. The number of donors has not come close to keeping pace. There were about 15,000 transplants completed with organs from cadavers in 1993 and about 20,000 last year. Patients used to wait weeks for an organ. Now they wait years. On average, 18 people on organ waiting lists die every day. Doctors, patients and politicians concerned about transplantation have responded with proposals for increasing donations. In 2002, the American Medical Association voted to endorse pilot projects to give families financial incentives, like cash payments to help cover the costs of funerals, for donating their deceased loved ones' organs. The next year, Congress held hearings on the topic. Representative James Greenwood, Republican of Pennsylvania, introduced a bill that would have authorized demonstration projects to determine whether offering financial incentives to families of brain-dead patients would increase donation rates. There was a public outcry against ''buying'' organs and the bill died. (A few states offer tax incentives to families who donate relatives' organs.) Increasingly desperate people in need of transplants have turned to highway billboards and Internet sites to solicit donors. Donations from living people have helped. Today the number of living kidney donors is greater than the number of dead donors. But living donations of other organs are rare because they can be dangerous or are impossible. All of which has led transplant specialists to quietly begin to relax the standards of who can donate. As a result, according to surgeons I spoke with and reports in medical journals, the transplanting of what doctors refer to as ''marginal'' or ''extended criteria'' organs, organs that once would have been considered unusable, has increased considerably in the last several years. The definition of a marginal organ differs from transplant center to transplant center and also from one type of organ to another. This makes it difficult to quantify the increase in the use of these organs. But the expansion is undeniable and has become a much-discussed issue in the field, a topic of ethics papers, surgical conferences and soul-searching on the part of many of the surgeons involved. Fifteen years ago, William Beed Jr. would not have qualified as an organ donor. When he died in May 2004, he was 20, unemployed and had been living with his mother and sister in a bat-infested apartment building in Texarkana, Ark. Throughout his life, Beed had been in and out of trouble, his mother acknowledged when I spoke to her recently. Marijuana and cocaine were found in his urine at the time of his death, according to a report in The New England Journal of Medicine. Beed's drug use alone would have disqualified him as a donor. (It still would keep him from giving blood.) ''What people have to understand is that donors now, except for the 75-year-olds who die of intracranial bleeds, are not part of the church choir,'' Klintmalm told me when I met with him in Dallas earlier this year. 
''The ones who die are the ones you don't want your daughter or your son to socialize with. They drink. They drive too fast. They use crack cocaine. They get caught up in drive-bys.'' The donor pool was different in the early days of transplantation. Beginning in the 60's and through the 80's, a majority of donors were head-trauma victims, people who had been involved in car accidents, botched suicides or tumbles off horses or ladders. These donors were almost all young, between 15 and 45. (In the 80's, few transplant surgeons would take a 50-year-old organ.) They were of average weight, with no history of diabetes, cancer, infectious disease, imprisonment, high blood pressure, cigarette-smoking habits, tattoos (which have been associated with blood-borne illnesses) or unsafe sexual behaviors. The chosen organs, said Klintmalm, who has been in practice for about 25 years, ''were pristine.'' It was easy to adhere to those standards at first. ''We didn't perceive any shortage of organs back in the day,'' says Dr. Nicholas Tilney, the Francis D. Moore professor of surgery at Harvard Medical School and one of the nation's premier kidney-transplant surgeons. ''If a patient had to wait a few weeks for a kidney, that seemed long. We never foresaw the kind of situation we have today.'' Conditions began to change in the 90's. Seat-belt use was more common by then, and fewer Americans were dying of head injuries, depriving transplantation of its most reliable sources of pristine organs. At the same time, the demand for transplants was growing. Surgeons had little choice but to start looking to alternative sources for organs. On April 28, 2004, William Beed Jr. complained to his mother that he was feeling sick. ''He couldn't swallow,'' his mother, Judy, a practical nurse, recalled when I spoke with her earlier this year. They decided he should go to an emergency room, she said, and the doctors there examined him and sent him home with medication, saying he was dehydrated. By that evening, he was drooling, throwing up, shaking and still having difficulty swallowing. His fever was rising. He started vomiting blood. His father drove him to another E.R. Diagnosis is often a matter of context. Because of doctor-patient confidentiality rules, doctors involved with this case would not talk about it on the record, but a few did say that had Beed not had cocaine in his blood, the E.R. doctors might have investigated his symptoms more aggressively instead of assuming he had overdosed. (Because no autopsy was done, doctors have not been able to establish whether the rabies or the drugs actually killed him.) Soon after, Beed fell into a coma and was put on a ventilator. After a few days, his mother said, the doctors told her and her family that their son was brain-dead. Transplant surgeons use organs from brain-dead patients because they still have a heartbeat, and if the patients are placed on a ventilator, their organs continue to get oxygen. Without oxygen, the organs degrade within minutes. According to Judy Beed, a transplant coordinator approached her and asked whether she would be willing to donate her son's organs. She agreed, and in the middle of the night on May 4, the parents of Joshua Hightower received a phone call offering them William Beed's kidney. Joshua Hightower, who lived in Gilmer, Tex., had had kidney problems since he was 2. They had grown progressively worse over the years. 
''When he was 16, things got really bad,'' said his mother, Jennifer Hightower, a special education assistant in the public schools, when I met with her in February. ''He was pale and droopy. He weighed 112 pounds. He was sleeping all the time.'' His teachers at Gilmer High School walked him up and down the halls between classes to help him stay awake. A doctor urged his parents to get him on the waiting list for a kidney. In the meantime, Joshua began daily dialysis at home. The process, which purified his blood of toxins, required that he be home every evening by 10. Once there, he was tethered to the dialysis machine for between 9 and 16 hours. When the Hightowers received the call from the hospital, they jumped at the opportunity. It is impossible to know now when the first less-than-pristine organ was retrieved and transplanted. But over the course of the 90's, according to surgeons I spoke with, many barriers fell. Age was almost certainly the first to go. Instead of accepting donors 45 and younger, some transplant centers began, gradually, to take those who were 48, 49, 50 and then up from there. ''I wrote a paper for The Journal of the American Medical Association back in 1989,'' Dr. Lewis Teperman, director of transplantation at New York University Medical Center, told me when I talked to him earlier in the spring. ''It was looking at the outcomes of using older donors. By older donors, we meant someone over 60. That was considered really, really old.'' Recently, N.Y.U. transplanted a liver from a deceased 80-year-old. A couple of years ago, a Canadian hospital used a 93-year-old liver from a deceased donor. Almost imperceptibly, most of the other traditional prohibitions evaporated. Surgeons started accepting lungs from people who had smoked, sometimes for decades. They accepted hearts and kidneys from those who had had high blood pressure or had been obese. They took organs from alcoholics and drug users. (Because cocaine is flushed from the body relatively quickly, it is considered one of the least problematic drugs in donors.) Infectious disease was no longer an automatic disqualifier, either. Most surgeons would have once discarded organs from someone with hepatitis C, for instance, since it destroys the liver. But the virus, often spread by injected drug use, is now so common in urban areas that few transplant surgeons will immediately turn down an organ infected with it. Ideally the surgeons implant these infected organs into patients who already harbor hepatitis C. But lately there have been cases in which doctors, as a last resort, have transplanted infected livers into patients who don't have hepatitis C. There is little published data yet about the long-term outcomes for these patients. The expansion into ''marginal'' or ''extended criteria'' organs has not been systematic. One transplant surgeon will use a marginal organ from, say, a morbidly obese donor or a drug user. His patient survives. Then he will repeat it again and again. At the next big transplant conference, he will talk to his colleagues about his success, and they will go back to their own transplant centers and accept, for the first time, an obese donor or a crack-cocaine user. ''You sometimes have to experiment,'' Klintmalm says. Klintmalm and other surgeons I spoke with who work in urban areas say that marginal organs are well on their way to being the majority of organs they transplant. Klintmalm, though, takes issue with the very definition of marginal. 
''Older organs should not be called 'marginal,''' Klintmalm maintains, referring to donors over age 55. ''They're standard for us.'' But two years ago, when the United Network for Organ Sharing (UNOS), the private organization that oversees organ transplantation in the United States, published its first definition of extended-criteria organs, age was prominent. The UNOS classification, which applies only to kidneys, defines a marginal kidney as one that comes from a deceased person over 60 or one over 50 with two of three characteristics: stroke, hypertension or abnormal kidney function. The definition does not mention smoking, diabetes, hepatitis, alcoholism, obesity or drug use. No government agency sets standards for what makes an organ acceptable. The Department of Health and Human Services contracts with UNOS to handle the day-to-day logistics of the transplant system (getting organs to the next person on the list and so on). But the government's main concerns in policing transplants are that donors and recipients be matched for blood type and that organs be distributed primarily based on medical need, not the wealth, race or celebrity of the recipients. So decisions about whether organs are usable are made on the spot by individual surgeons. To date, not many peer-reviewed studies have been published that examine the long-term outcomes of using marginal organs. The research that has been done mostly looks at kidneys. Recent studies of older kidneys (usually defined as over 50), for instance, have shown that they can function almost as well as younger ones. They don't work for as long, however. In a report presented by UNOS, which adjusted for the health of the recipient, among other things, about a third of extended-criteria kidneys failed within three years. (About 20 percent of non-extended-criteria organs also failed within three years.) Transplantation, even under the best of circumstances, still involves risk. In assessing marginal organs, it is difficult to know whether a bad outcome -- the recipient's death or the organ's failure -- was caused by the organ, the surgery or the fragile health of the recipient. Except for age-related research, few large-scale studies have yet investigated the effects of other extended-criteria kidneys. Do kidneys from diabetics, the obese, alcoholics, smokers or drug users generally work over the long term? Surgeons and scientists can't say for sure. There is even less information about imperfect livers, hearts or lungs. Surgeons do know that livers, for some reason, don't age at the same rate as their original owners. Sixty- or 70-year-old livers can be in fine shape. Hearts and lungs aren't as durable and are more likely to fail as they get older. But surgeons are using them. A 2003 report by the UNOS-administered Organ Procurement and Transplantation Network stated: ''The need to more agressively utilize available organs for the candidate population as a whole competes with the expectation of each individual.'' And this is, ultimately, the crux of the matter. The marginality of any given organ is relative. It depends on how sick the waiting recipient is. There is a kind of mad, desperate arithmetic that goes into calculating whether to use a marginal organ and when. ''We're all trying to quantify the risks,'' Lewis Teperman, the N.Y.U. transplant director, says. 
''If we know that there's a 0.7 increase in relative risk of an extended-criteria organ failing, which is about what we've seen in kidneys so far, you take that number, look at your patient's chances for survival, which might be 90 percent with a perfect organ and 80 percent with an extended-criteria one and. . . . '' He trails off. ''It sounds very clinical when I put it like that, which isn't what I want.'' He starts again. ''It's easy enough to come up with these kinds of calculations. But it's difficult for any of us to apply them in practice, when we're dealing with very sick people's lives.'' Dr. Marlon Levy, a liver-transplant surgeon in Fort Worth and the medical director for the Southwest Transplant Alliance, the group that unwittingly collected and distributed the rabid organs last year, told me: ''You have this immensely complex weighing of benefits and risks in each of these cases. Is the recipient sick enough to justify using any organ, even a really marginal one, to try and save his life and give him a few more years? Or say you have a slightly healthier patient, and you think he's doing well enough to pass on a marginal organ and wait for a better one. Then, suddenly, he develops complications and dies before another organ becomes available. Were these decisions wrong?'' It is extremely difficult to predict outcomes. ''The best thought-out decision doesn't work out all the time,'' Teperman says. ''I have put in extended-criteria organs that worked perfectly, and the person walked out the door a week later. Other times, a patient has gotten an extended-criteria organ and remained hospitalized for months. I've also waited, thinking a better organ would come along, and the patient has died in the meantime.'' To some extent, surgeons' hands are tied. In general, the current system requires that the most desperately ill patient must get the next organ that comes in, whether it is the best organ for that patient or not. ''Things would work best if we could put the most extended-criteria organs into the less critically ill patients and the healthiest organs into the sickest patients,'' Teperman says. The calculus may be even more complex from the patient's perspective. Dr. Grant Campbell, an epidemiologist with the Centers for Disease Control and Prevention, had a liver transplant in 1990. At that time, he was chronically ill and knowingly accepted an organ infected with cytomegalovirus, a common and usually mild disease but one that can be serious in immunosuppressed transplant patients. Fortunately, he didn't become sick. Even the most rational attempts to weigh the risks and benefits of marginal organs tend to fall apart in the face of truly boundless human despair. ''We would have taken any lungs,'' said Harry Littlejohn, 59, of Lewisville, Tex., whose 28-year-old daughter, Carmen, died in 2001 of cystic fibrosis. She had been No. 1 on the state waiting list for new lungs for eight weeks by then. None became available. ''We would have done anything to save her,'' he said, ''anything. But there was nothing we could do.'' Joshua Hightower turned 18 on May 10, 2004, in the transplant recovery ward at Baylor University Medical Center. Photos from around that time show him propped up in bed, looking wan, but smiling. Joshua had been added to the lengthy transplant waiting list the year before. The doctors said they could not estimate how long the wait would be, Jennifer Hightower, his mother, told me. 
After the Hightowers received the call from the hospital, his mother recalled, she had wondered about the donor. Anonymity has been crucial to the workings of the organ-transplant system. Donation is supposed to be a blind act of altruism. Donor families aren't told at the time who will receive the organs, and recipients generally are told only the age and sex of the donor. ''You don't want people coming in and saying, 'I'll only donate to Italians.' Or 'I only want them to go to someone in the Ku Klux Klan,''' says Sheldon Zink, director of the program for transplant policy and ethics at the University of Pennsylvania. You also don't want recipients turning down organs because of their own biases. But how much should a surgeon tell a patient who is about to receive a compromised organ? Should he explain that the new kidney comes from a retiree, a drug user or an alcoholic, a chain smoker or a member of a motorcycle gang? Does he have to tell a patient that the organ he is about to receive is considered marginal? "I wish we had been told more,'' Jennifer Hightower says. Her son, she went on to say, would have declined the kidney had they known more about Beed's background and his death. Joshua, she says, was not so sick that he couldn't wait. ''I would have made him pass on it.'' Her attitude worries Zink, the ethicist. ''I would question anyone's motivation in refusing an organ from a drug user,'' she told me. ''They aren't responding to clinical information, because the available clinical data'' -- the anecdotal reports from doctors -- ''indicates that organs from crack-cocaine users are fine, in general. So they must be responding to preconceptions about that person's lifestyle. That's only one small step from declining an organ because the donor is black or Hispanic.'' At the moment, no formal national medical standards dictate what transplant surgeons should tell their patients about organs other than kidneys or what they can withhold. Each doctor makes that decision based on how he feels about the ethics of the situation. ''I believe in erring on the side of telling the patient as much as possible,'' Teperman says. ''We have a lengthy consent form here at N.Y.U., and it goes into the use of marginal organs. We ask patients if they will accept one. You don't want to be calling someone at 2 a.m. and saying: 'You can take this organ we just got in that may not be very good or you can wait and maybe die. What do you want to do?' That's an unrealistic burden to put on a patient. We try to have the conversation early on, when patients are a little more clearheaded. That's not always an easy conversation to have. Some patients would rather not think about it. They'd rather the doctor just make the decision for them.'' Some surgeons insist on making decisions about marginal organs unilaterally. ''There are transplant surgeons who think they absolutely know best,'' Zink says. ''They don't bother asking the patient if he wants a marginal organ because they don't want the patient having a choice. They make it for him.'' When Zink recently asked surgeons at a major transplant conference how many of them always tell their patients if they are about to implant a marginal organ, ''about half said they tell the patient,'' Zink told me. ''Half said they don't.'' Some surgeons withhold information because they are concerned about litigation (better to say nothing than to say that an organ might be compromised, have your judgment proved right and be sued for it). Others are prodded by compassion. 
''There are doctors out there who think that a patient will recover better if he isn't worrying about the quality of the organ inside of him,'' Zink says. Wry pragmatism also plays a role. ''At some large urban transplant centers, virtually all organs nowadays are extended-criteria organs,'' Zink points out. Why discuss the option of accepting or declining an imperfect organ? If a patient says he doesn't want one, he'll most likely never get an organ at all. ''I've had doctors tell me they don't even tell their patients that they're about to get an organ that might be infected with hepatitis C because so many of the donated organs may have it,'' Zink says. On Friday, May 28, 24 days after his transplant, Joshua Hightower, who had been released from the hospital, graduated from high school. He clutched his diploma, climbed up into the stands and threw up, Jennifer Hightower said. He didn't stop vomiting all through the celebrations that followed. The next day, he was stumbling, and by the evening, he was having convulsions. Spit dribbled down his face. Doctors at the nearest emergency room hurriedly transferred him to the E.R. at Baylor. Upstairs in the transplant wing, around the same time, three other patients who had received donations from William Beed Jr. lay dying, each with convulsions, delirium or pain. Within two weeks, all but Joshua were dead. Rabies was confirmed as the cause of death a few weeks later. There is no formal system that tracks the short-term fate of individual organs from a particular donor. Surgeons report raw data about deaths and severe surgical complications to UNOS. Had all of the people who received an organ from William Beed Jr. not come back to the same hospital and died, one after another, their rabies may not have come to light. In May, three people died who had received organs from the same donor in New England. As it turned out, the donor had passed along lymphocytic choriomeningitis virus, a rare illness transmitted to humans from rodents like hamsters. Two of the recipients, after getting ill, went to the same hospital, which helped doctors there determine that the transplant was the cause. ''I doubt very much that this is the only time'' that rabies has killed transplant patients, says Charles Rupprecht, the C.D.C.'s rabies expert about the Beed case. ''And I doubt that it will be the last.'' In February, doctors in Germany announced that four patients there had been infected with rabies after receiving organs from a rabid young woman who had died, they had thought, of a heart attack associated with an overdose of cocaine and Ecstasy. ''Rabies is a sentinel disease,'' argues Dr. Matthew Kuehnert, the assistant director for blood safety at the C.D.C., who has studied outbreaks of disease in transplant recipients. ''It tells us we should be paying attention, that something needs to change.'' What, though? ''We cannot start testing every donor for rabies or any of the other once-in-a-lifetime diseases that might crop up,'' Klintmalm says. ''We don't have time. It would cost too much. You might as well shut down every transplant center. If another case came in today exactly like that one, a young man who used crack cocaine and died, I would not demand more explanation. Why? We'll never get the risk of transplants down to zero. It's stupid to pretend we can. That young man appeared to be a perfect donor. I wish we had more like him.'' The broader question is what, if anything, should change in transplantation as marginal organs become everyday organs? 
''We at the C.D.C. wish that there were more formal disease surveillance and follow-up of transplant patients,'' Kuehnert said. ''We simply don't know the risks of using certain types of donors at this point.'' The C.D.C. has no authority to require such follow-up and study, though. Only other regulatory agencies within the Department of Health and Human Services or state agencies can set such mandates. In June 2004, the New York State Department of Health became the first regulatory agency in the country to start formally looking into the growing use of marginal organs and to formulate recommendations about what patients should be told and what kinds of organs should be allowed. Its report is due soon. In the meantime, the United Network for Organ Sharing has created a designation for patients who say they will accept a marginal kidney. At the end of February, 42 percent of the adults waiting for a kidney in the United States said they would take a marginal organ. A year ago, while Joshua Hightower lay unconscious but alive, the doctors decided to surgically remove his transplanted kidney. But by then, rabies (not yet identified as the culprit) was everywhere in him. His condition worsened. On June 18, a Friday, doctors tested for brain activity. They found none and declared him brain dead. Stung with grief, Jennifer Hightower and the rest of her family sat with the boy through a wrenching weekend while he remained on a ventilator. On that Monday, his parents agreed to end life support. That afternoon, with his family watching, doctors turned off the ventilator. His mother held him as his heart stopped. It will not be a simple matter in the years ahead to decide how best to save lives with transplants. At some point this year, the number of people on transplant waiting lists in the United States will very likely top 100,000. Unless there is an enormous effort, probably from the federal government, to increase organ donation, the shortage will only grow. ''All these kids we see with diabetes,'' Nicholas Tilney says, ''so many of them will need a new kidney in a few years. Where are those organs going to come from?'' Gretchen Reynolds frequently writes about medical topics. Her last article for the magazine was about epidemiologists tracking the avian flu. From checker at panix.com Sun Jul 10 15:59:16 2005 From: checker at panix.com (Premise Checker) Date: Sun, 10 Jul 2005 11:59:16 -0400 (EDT) Subject: [Paleopsych] NYT: The Half-Life of Anxiety Message-ID: The Half-Life of Anxiety http://www.nytimes.com/2005/07/10/weekinreview/10carey.html By BENEDICT CAREY FOR all their murderous power, the four terrorist bombs detonated in London on Thursday morning have not created anything close to mass panic. It's possible to imagine a scene straight out of the movie "War of the Worlds," an unraveling of society, with people disoriented, afraid for their lives, holing up in their basements or fleeing the city. Instead, on Friday morning, a day after the bombing, Londoners were beginning to return to daily routines, some even riding the buses and subway trains. Although real terrorism is life-shattering to those directly affected and may help attackers achieve political goals - last year's bombing in Madrid, for example, may have helped lead to the withdrawal of Spanish troops from Iraq - the attacks almost never sow the kind of lasting confusion and mass anxiety that the perpetrators presumably want. In Israel, the damage from cafe and bus bombings is typically cleared within hours.
In Lower Manhattan, real estate prices have only spiraled upward since the Sept. 11 attacks; the average sale in TriBeCa last year was almost $1.7 million, 16 percent higher than in 2003. And a recent report found that tourism had increased in Madrid since the bombings. "It says something that it is hard to think of any attack that truly caused a city to cease to function, except perhaps Dresden, Hiroshima or Nagasaki," said Dr. Lynn Eden, a senior research scholar at Stanford University's Institute for International Studies and author of the book, "Whole World on Fire," an analysis of military bombing and damage predictions. Are Western cities themselves so resourceful and structurally sound that they can absorb just about any blow? Are people adaptable enough that they can live with almost any threat? Or, can certain kinds of threats deeply unsettle a worldly population? Strangely enough, the answer to all three questions is yes. Certainly, an attack of the magnitude of last week's in London creates a climate of fear, and no one in England is likely to forget the carnage; July 7 is certain to carry in the English consciousness some of the same resonance as Sept. 11 does in this country. But terror groups like Al Qaeda are widely thought to be after bigger game - the psychological unraveling, or loss of confidence, in Western society. And high explosives have not done the trick. People understand bombs, for one thing; they know what the weapons can do, and why certain targets are chosen. This allows residents to feel that they have some control over the situation: They can decide not to take trains at rush hour, avoid buses or drive a car, psychologists say. "Unfortunately, and I think people sensed this in watching the coverage in London, bombings have become familiar," and, as such, less frightening to those not directly affected, said George Loewenstein, a professor of psychology and economics at Carnegie Mellon University in Pittsburgh. And for all their flaws, Western governments typically respond immediately to terror, which is far more psychologically soothing than many people admit. It's the reason Prime Minister Tony Blair flew back to London from the Group of 8 conference in Gleneagles, Scotland. And it's the reason both Rudolph Giuliani and Winston Churchill became national heroes. All the same, it's clear that people in much of the West believe that their societies are fragile, and capable of breaking down. Indeed, as the new millennium approached, there were fears that a large-scale computer meltdown would paralyze hospitals, police and other basic services. And the most unsettling thing about the current brand of extremist Muslim terror is the certainty that the enemy will try anything - including using weapons whose psychological effects are entirely unknown. Even small changes in weaponry can be deeply unnerving. In her history of London during World War II, "London 1945," Maureen Waller describes how Londoners, long accustomed to take cover from the roar of bombers overhead, plunged into confusion when first hit with Hitler's missiles, the V-1 buzz bomb and the V-2 rocket. "By some acoustic quirk, those in its direct path barely heard a V-2," she writes. "If you did hear it, it had missed you. But that knowledge did nothing to quell the primeval fear each time one exploded." The missiles were far more terrifying than the conventional bombardments, Ms. Waller adds. "Life was uncertain again." 
Bioterror scenarios are the most obvious modern-day example of such terrifying ambiguity. Despite only a handful of deaths, the anthrax poisonings in 2001 created a rip current of anxiety for millions anytime they opened their mailboxes. Studies find that this kind of free-floating concern, when written across neighbors' or colleagues' faces, is contagious, Dr. Loewenstein said. A similarly frightening mystique might surround the so-called dirty bomb, a conventional explosive containing some radioactive material. A dirty bomb is not a nuclear bomb, as many people assume, and can inflict nowhere near the amount of damage or radioactive contamination, said Dr. Irwin Redlener, director of the National Center for Disaster Preparedness and a professor at Columbia University. While a nuclear bomb could devastate much of the city with its blast and radiation wave, a dirty bomb is a local device - a car bomb, say, that could contaminate a specific area, like Times Square. "There is a whole lot of mythology associated with any nuclear device, and a tendency for people to confuse a dirty bomb with a nuclear bomb, and we just don't know how people will react," Dr. Redlener said. "For instance, would people decide to come back to work and live in an area hit by a dirty bomb?" The widespread revulsion to any hint of radiation, he said, lends the dirty bomb both an ominous novelty and mystery that are much more likely to induce life-altering psychological anxiety than a conventional bomb would. Although such sustained and uncertain threats may fall short of bringing a city to a standstill, they could shatter social networks and slow an economy, experts say. People may still ride the buses, take their children to school and go to work, but a community under continuous assault often turns on itself, with neighbors distrusting one another, research suggests. In studies of Alaskan communities that were affected by the oil spill from the tanker Exxon Valdez in 1989, and of towns dealing with water contamination in New Jersey and New York, sociologists have found what they call social corrosion. Sustained anxiety breaks down social groups and leads to an increase in mental health problems and potentially to economic downturn, said Lee Clarke, a sociology professor at Rutgers University and author of the forthcoming book, "Worst Cases," an analysis of responses to disaster. Beyond the unknown, many people wonder whether city residents would stick around if terrorists successfully staged not one bombing but a series of major attacks in a short period of time. Certainly after Sept. 11, many people openly wondered whether another big attack - a double or triple hit - might be just enough to cause a kind of collective mental breakdown, an exodus. Maybe. But in the absence of new species of horror, the histories of Jerusalem, Tel Aviv, Belfast and London still suggest otherwise. "Even if we hypothesize attacks like this for a week, what would happen?" said Dr. Eden. "They would shut down the subway, let's say, and my guess is that there would be a run on bicycles. There would be a difficult adjustment period, there would be some economic ramifications, but people would learn to function." From checker at panix.com Sun Jul 10 15:59:21 2005 From: checker at panix.com (Premise Checker) Date: Sun, 10 Jul 2005 11:59:21 -0400 (EDT) Subject: [Paleopsych] NYT Mag: Euthanasia for Babies? Message-ID: Euthanasia for Babies? 
New York Times Magazine, 5.7.10 http://www.nytimes.com/2005/07/10/magazine/10WWLN.html [I am enthusiastic about euthanasia myself and would not only place very few restrictions on it, but encourage the ending of lives without pleasure, engagement, or meaning (the three ingredients of happiness). Not that I'm eager to euthanise anyone myself. I would be reluctant to defy deeply-held attitudes, many of which I've internalized. It's that I'd prefer to live in a society where these attitudes have changed so as to be more supportive of euthanasia. Please understand this distinction.] By JIM HOLT One sure way to start a lively argument at a dinner party is to raise the question Are we humans getting more decent over time? Optimists about moral progress will point out that the last few centuries have seen, in the West at least, such welcome developments as the abolition of slavery and of legal segregation, the expansion of freedoms (of religion, speech and press), better treatment of women and a gradual reduction of violence, notably murder, in everyday life. Pessimists will respond by citing the epic evils of the 20th century -- the Holocaust, the Gulag. Depending on their religious convictions, some may call attention to the breakdown of the family and a supposed decline in sexual morality. Others will complain of backsliding in areas where moral progress had seemingly been secured, like the killing of civilians in war, the reintroduction of the death penalty or the use of torture. And it is quite possible, if your dinner guests are especially well informed, that someone will bring up infanticide. Infanticide -- the deliberate killing of newborns with the consent of the parents and the community -- has been common throughout most of human history. In some societies, like the Eskimos, the Kung in Africa and 18th-century Japan, it served as a form of birth control when food supplies were limited. In others, like the Greek city-states and ancient Rome, it was a way of getting rid of deformed babies. (Plato was an ardent advocate of infanticide for eugenic purposes.) But the three great monotheistic religions, Judaism, Christianity and Islam, all condemned infanticide as murder, holding that only God has the right to take innocent human life. Consequently, the practice has long been outlawed in every Western nation. This year, however, a new chapter may have begun in the history of infanticide. Two physicians practicing in the Netherlands, the very heart of civilized Europe, this spring published in The New England Journal of Medicine a set of guidelines for what they called infant ''euthanasia.'' The authors named their guidelines the Groningen protocol, after the city where they work. One of the physicians, Dr. Eduard Verhagen, has admitted to presiding over the killing of four babies in the last three years, by means of a lethal intravenous drip of morphine and midazolam (a sleeping agent). While Verhagen's actions were illegal under Dutch law, he hasn't been prosecuted for them; and if his guidelines were to be accepted, they could establish a legal basis for his death-administering work. At first blush, a call for open infanticide would seem to be the opposite of moral progress. It offends against the ''sanctity of life,'' a doctrine that has come to suffuse moral consciousness, especially in the United States. All human life is held to be of equal and inestimable value. 
A newborn baby, no matter how deformed or retarded, has a right to life -- a right that trumps all other moral considerations. Violating that right is always and everywhere murder. The sanctity-of-life doctrine has an impressively absolute ring to it. In practice, however, it has proved quite flexible. Take the case of a baby who is born missing most or all of its brain. This condition, known as anencephaly, occurs in about 1 in every 2,000 births. An anencephalic baby, while biologically human, will never develop a rudimentary consciousness, let alone an ability to relate to others or a sense of the future. Yet according to the sanctity-of-life doctrine, those deficiencies do not affect its moral status and hence its right to life. Anencephalic babies could be kept alive for years, given the necessary life support. Yet treatment is typically withheld from them on the grounds that it amounts to ''extraordinary means'' -- even though a baby with a normal brain in need of similar treatment would not be so deprived. Thus they are allowed to die. Are there any limits to such ''passive'' euthanasia? A famous test case occurred in 1982 in Indiana, when an infant known as Baby Doe was born with Down syndrome. Children with Down syndrome typically suffer some retardation and other difficulties; while presenting a great challenge to their parents and families, they often live joyful and relatively independent lives. As it happened, Baby Doe also had an improperly formed esophagus, which meant that food put into his mouth could not reach his stomach. Surgery might have remedied this problem, but his parents and physician decided against it, opting for painkillers instead. Within a few days, Baby Doe starved to death. The Reagan administration responded to the case by drafting the ''Baby Doe guidelines,'' which mandated life-sustaining care for such handicapped newborns. But the guidelines were opposed by the American Medical Association and were eventually struck down by the Supreme Court. The distinction between killing a baby and letting it die may be convenient. But is there any moral difference? Failing to save someone's life out of ignorance or laziness or cowardice is one thing. But when available lifesaving treatment is deliberately withheld from a baby, the intention is to cause that baby's death. And the result is just as sure -- if possibly more protracted and painful -- as it would have been through lethal injection. It is interesting to contrast the sort of passive euthanasia of infants that is deemed acceptable in our sanctity-of-life culture with the active form that has been advocated in the Netherlands. The Groningen protocol is concerned with an element not present in the above cases: unbearable and unrelievable suffering. Consider the case of Sanne, a Dutch baby girl who was born with a severe form of Hallopeau-Siemens syndrome, a rare skin disease. As reported earlier this year by Gregory Crouch in The Times, the baby Sanne's ''skin would literally come off if anyone touched her, leaving painful scar tissue in its place.'' With this condition, she was expected to live at most 9 or 10 years before dying of skin cancer. Her parents asked that an end be put to her ordeal, but hospital officials, fearing criminal prosecution, refused. After six months of agony, Sanne finally died of pneumonia. In a case like Sanne's, a new moral duty would seem to be germane: the duty to prevent suffering, especially futile suffering. That is what the Groningen protocol seeks to recognize. 
If the newborn's prognosis is hopeless and the pain both severe and unrelievable, it observes, the parents and physicians ''may concur that death would be more humane than continued life.'' The protocol aims to safeguard against ''unjustified'' euthanasia by offering a checklist of requirements, including informed consent of both parents, certain diagnosis, confirmation by at least one independent doctor and so on. The debate over infant euthanasia is usually framed as a collision between two values: sanctity of life and quality of life. Judgments about the latter, of course, are notoriously subjective and can lead you down a slippery slope. But shifting the emphasis to suffering changes the terms of the debate. To keep alive an infant whose short life expectancy will be dominated by pain -- pain that it can neither bear nor comprehend -- is, it might be argued, to do that infant a continuous injury. Our sense of what constitutes moral progress is a matter partly of reason and partly of sentiment. On the reason side, the Groningen protocol may seem progressive because it refuses to countenance the prolonging of an infant's suffering merely to satisfy a dubious distinction between ''killing'' and ''letting nature take its course.'' It insists on unflinching honesty about a practice that is often shrouded in casuistry in the United States. Moral sentiments, though, have an inertia that sometimes resists the force of moral reasons. Just quote Verhagen's description of the medically induced infant deaths over which he has presided -- ''it's beautiful in a way. . . . It is after they die that you see them relaxed for the first time'' -- and even the most spirited dinner-table debate over moral progress will, for a moment, fall silent. Jim Holt is a frequent contributor to the magazine. From checker at panix.com Sun Jul 10 16:01:18 2005 From: checker at panix.com (Premise Checker) Date: Sun, 10 Jul 2005 12:01:18 -0400 (EDT) Subject: [Paleopsych] Book World: Snake Oil Message-ID: Snake Oil http://www.washingtonpost.com/wp-dyn/content/article/2005/07/07/AR2005070701757_pf.html Reviewed by Chris Lehmann Sunday, July 10, 2005; BW06 SHAM How the Self-Help Movement Made America Helpless By Steve Salerno Crown. 273 pp. $24.95 The distinctly American phenomenon of self-help is an affront on many levels. It insults our sense of moral proportion, turning petty grievances into cosmically unappeasable plaints of the spirit, to be resolved only when an elaborate (and usually quite expensive) set of affirmations is unleashed or an inner child is at last quieted. It offends our intelligence with its vapid narcissism, hymning the claustral wonders of the self while spouting undigested tracts of pseudo-mystic wisdom from East, West, North and South. Not least, it aggrieves our ear for well-turned language, with its irritating catchphrases (this or that gender being from Mars or Venus, "chicken soup for the soul," "I'm OK, you're OK"), clunky coinages (Gestalt, transactional analysis, self-actualization) and neologisms (creative visualization, codependency and, for that matter, the very term self-help, which misleadingly suggests a can-do independent spirit in a market awash in gurus and hucksters preaching our dependence on them). The decades-old self-help industry is, in short, a plump, inviting target for a sharp takedown, detailing its origins, follies and suspect claims. 
Unfortunately, Steve Salerno's SHAM, which draws its title from a rather ponderous author-coined acronym for "Self-Help and Motivation," is not that book. More accurately, it is perhaps a third of that book, since Salerno, a former business reporter, is fixated on the notion that, as his sensational title suggests, self-help gurus rarely deliver on their claims to be healers of the wounded American body and soul. This is not a trivial charge, of course, but, intellectually speaking, it's the least interesting feature of the sprawling self-help industry. All sorts of things in contemporary culture don't work yet continue to draw millions of people, usually on a repeat-business model: fad diets, pyramid investment schemes, faith healing, the two-party system. P.T. Barnum's immortal dictum about the regular birthing of suckers is a keystone of the American consumer economy. Nevertheless, Salerno presses a single-minded brief against the practitioners of self-help on the grounds that they consistently fail to deliver the goods promised in their come-ons. To seal the indictment, he describes his own moment of clarity, which came to him during a stretch in a satellite wing of the self-help business, as an editor on a Men's Health-affiliated books program at Rodale Press, a "vast better-living empire." After he had looked over a few marketing surveys, Salerno reports, "one piece of information . . . stood out above all others and guided our entire approach: The most likely customer for a book on any given topic was someone who had bought a similar book within the preceding eighteen months." This was all well and good, Salerno reasoned, for Rodale's regular gardening or Civil War titles, but when it came to self-help, a different standard should apply: "Many of our books proposed to solve, or at least ameliorate, a problem. If what we sold worked, one would expect lives to improve," and repeat business to evaporate. Instead, Salerno writes, "failure and stagnation are central to all of SHAM. The self-help guru has a compelling interest in not helping people." Armed with this obvious truth, Salerno provides a series of uncomplimentary thumbnail profiles of self-help leaders, from the Oprah-branded disciplinarian "Dr. Phil" McGraw to the corporate cheerleader Anthony Robbins. Often enough, he yields damning, or simply entertaining, background material -- for example, radio scold Dr. Laura Schlessinger's penchant for poaching the mates of others (in addition to her well-documented dalliance as a nude photo subject). But just as often, Salerno overreaches and recites material that is either simply irrelevant -- as when he tells us that investment guru Suze Orman had "a serious speech impediment" as a child -- or nobody's business. Referring to Orman, for instance, he writes that she "has never married -- a bit odd for a woman who spends so much time talking about balance in life." As SHAM continues on its determined path, it becomes clear that the book is anything but "the first serious exposé" of the self-help movement touted in the publicity materials. It is, rather, a kitchen-sink broadside, in which Salerno pins all sorts of evils on the industry. For example, in the movement's well-documented rhetoric of guiltlessness, he sees the very foundations of Western morality giving way: "We have the Recovery movement to thank for the fact that nowadays the people who criticize wrongdoers are sinners, while the wrongdoers themselves are simply 'human'. . . .
Recovery's bedrock assumption -- that you're not evil or venal, you're simply exhibiting symptoms -- lays the groundwork for an amoral view of life. It explains why today's society goes to extraordinary semantic lengths to separate the criminal from the crime." This is all a bit much -- especially since the only proof Salerno offers for this grandiose claim concerns a sensational legal defense mounted by Rosemary Heinen, an embezzling executive at Starbucks, who said she suffered from "impulse control disorder." Salerno also fails to mention that the defense didn't work: Heinen was convicted and sentenced to a four-year jail term in 2002. That is the problem with much of SHAM: It is less a considered argument about the self-help world's many excesses than a long train of can-you-believe-this-crap anecdotage. The outrages are all real enough, but the reader wants some sustained explanation of why they keep occurring, and why this country, an alleged capital of Emersonian self-reliance, churns them out in such enormous quantities. But that would mean extending Salerno's argument beyond its self-imposed historical limitations. According to him, self-help kicks off with the emergence of Alcoholics Anonymous in 1935 and gathers real momentum with the publication of the transactional analysis bible I'm OK, You're OK in 1967, whereas most serious students of this strain of therapeutic belief, such as the cultural historian Donald Meyer, locate its roots in the late-19th century New Thought movement. Explaining the deeper sources of self-help's appeal also would involve hazarding some argument about the nature of the American self to begin with -- as Christopher Lasch did in his masterful critique of human potential (as it was then called) in The Culture of Narcissism (1978). That book, together with Meyer's landmark 1965 study, The Positive Thinkers, would be the best place to start reckoning with the bigger questions raised by our national romance with self-help. SHAM misses a great opportunity to follow up on those questions, and that is, indeed, a shame. Chris Lehmann is an editor at Congressional Quarterly. From checker at panix.com Sun Jul 10 16:01:32 2005 From: checker at panix.com (Premise Checker) Date: Sun, 10 Jul 2005 12:01:32 -0400 (EDT) Subject: [Paleopsych] SW: On Mental Disorders in the US 1990-2003 Message-ID: Public Health: On Mental Disorders in the US 1990-2003 http://scienceweek.com/2005/sw050715-6.htm The following points are made by R.C. Kessler et al (New Engl. J. Med. 2005 352:2515): 1) In the 1980s, the Epidemiologic Catchment Area (ECA) Study found that 29.4 percent of the adults interviewed had had a mental disorder at some time in the 12 months before the interview (referred to as a "12-month mental disorder"), according to the criteria of the American Psychiatric Association's Diagnostic and Statistical Manual of Mental Disorders, third edition (DSM-III).[3] A fifth of those with a 12-month disorder received treatment. Half of all who received treatment did not meet the criteria for a 12-month disorder according to the ECA Study or the DSM-III. A decade later, the National Comorbidity Survey (NCS) found that 30.5 percent of people 15 to 54 years of age had conditions that met the criteria for a 12-month mental disorder according to the criteria of the DSM-III, revised (DSM-III-R).[4] A fourth of these patients received treatment. Roughly half of all who received treatment did not meet the criteria for a 12-month mental disorder according to the NCS or the DSM-III-R.
2) The results of the ECA study and the NCS are no longer valid owing to changes in the delivery of mental health care. The Substance Abuse and Mental Health Services Administration found that annual visits to mental health specialists (i.e., psychiatrists and psychologists) increased by 50 percent between 1992 and 2000.[5] The National Ambulatory Medical Care Survey found that the number of people receiving treatment for depression tripled between 1987 and 1997. The Robert Wood Johnson Foundation Community Tracking Survey found that the number of people with a serious mental illness who were treated by a specialist increased by 20 percent between 1997 and 2001. 3) The authors examined trends in the prevalence and rate of treatment of mental disorders among people 18 to 54 years of age during roughly the past decade. The authors conclude: Despite an increase in the rate of treatment, most patients with a mental disorder did not receive treatment.[1.2] References (abridged): 1. Department of Health and Human Services. Mental health: a report of the Surgeon General. Bethesda, Md.: National Institute of Mental Health, 1999 2. President's New Freedom Commission on Mental Health. Achieving the promise: transforming mental health care in America 3. Robins LN, Regier DA, eds. Psychiatric disorders in America: The Epidemiologic Catchment Area Study. New York: Free Press, 1991 4. Kessler RC, McGonagle KA, Zhao S, et al. Lifetime and 12-month prevalence of DSM-III-R psychiatric disorders in the United States: results from the National Comorbidity Survey. Arch Gen Psychiatry 1994;51:8-19 5. Manderscheid RW, Atay JE, Hernandez-Cartagana MR, et al. Highlights of organized mental health services in 1998 and major national and state trends. In: Manderscheid RW, Henderson MJ, eds. Mental health, United States, 2000. Washington, D.C.: Government Printing Office, 2001:135-71 New Engl. J. Med. http://www.nejm.org From checker at panix.com Sun Jul 10 16:01:41 2005 From: checker at panix.com (Premise Checker) Date: Sun, 10 Jul 2005 12:01:41 -0400 (EDT) Subject: [Paleopsych] SW: On Disease in Marathon Runners Message-ID: Medical Biology: On Disease in Marathon Runners http://scienceweek.com/2005/sw050715-5.htm The following points are made by B.D. Levine and P.D. Thompson (New Engl. J. Med. 2005 352:1516): 1) As traditional as the marathon itself is the use of the event for research and of its runners as research subjects. In the second year of its existence, two physicians, Harold Williams and Horace D. Arnold, examined urine specimens from some of the runners and noted urinary casts and proteinuria -- findings that would later be known as "athletic pseudonephritis".[1] Clarence DeMar, a legendary Boston runner, won the marathon an incredible seven times. His total would probably have been higher had he not been advised against competing by a physician who detected what was undoubtedly an innocent flow murmur produced by DeMar's augmented cardiac stroke volume. DeMar was also a subject in studies performed by the noted Boston cardiologist Paul Dudley White, who had a lifelong interest in the marathon and had studied the heart rate of Boston participants in the 1915 and 1916 races. When DeMar died of colon cancer in 1958, White arranged for an autopsy on the already embalmed body. A report in 1961 [2] presented results from both White's earlier studies of DeMar and the autopsy, which showed that the diameter of DeMar's coronary arteries was approximately two to three times that in normal adults. 
White, a great advocate of exercise who often rode his bicycle to work, was a big fan of the marathon and, ironically, first recognized his own heart disease because of angina that developed as he jogged over to the race venue to watch David McKenzie of New Zealand win the 1967 race. 2) Research interest in marathon participants during the first decades of the 20th century was driven by concern for their health. Little was known about cardiac adaptations to endurance exercise, and what was known was determined by auscultation and the use of the "trained finger" for palpation and percussion. Hallmarks of an athlete's heart such as bradycardia, cardiac enlargement, and innocent flow murmurs, were, in the view of the clinicians of the day, possible signs of pathologic heart block, cardiomyopathy, and valvular obstruction. It was not until 1942 that White used electrocardiography to record markedly slow, but normal, sinus bradycardia in athletes. According to Tom Derderian, author of a history of the Boston Marathon,[3] marathoners were the test pilots and astronauts of their time, running where none had run before -- and possibly risking their health in the process. Concerns about the health of athletes ultimately abated with the growing understanding that these cardiac changes were normal physiological adaptations and that physical activity conferred multiple health benefits. 3) In actuality, marathoning is a reasonably safe sport, with less than one death per 50,000 participants. Deaths that occur during less extreme physical activity and in previously healthy persons are usually caused by cardiac disease -- predominantly, congenital problems such as hypertrophic cardiomyopathy or coronary anomalies in young athletes and atherosclerotic coronary artery disease in persons older than 35 years of age. 4) Nontraumatic causes of death among marathoners and ultramarathoners, military recruits, and persons who labor in hot and humid conditions are more varied; historically, they have included heat stroke and exertional rhabdomyolysis. These conditions are mitigated by adequate hydration, and preventive efforts have led to widespread recommendations for aggressive fluid consumption during endurance events such as marathons. These recommendations stemmed from the argument that because thirst may not be a precise indicator of the state of the plasma volume, fixed (and large) quantities of fluids should be consumed by athletes during endurance events, regardless of fitness level, body size, and known amount or composition of sweat loss. 5) However in 1981, during the 90-km Comrades Ultramarathon in South Africa, two cases of hyponatremia developed; they were later reported by Timothy Noakes in a runners' magazine called South African Runner. Although there has been vigorous debate about the relative importance of fluid overload as compared with sodium loss due to sweating in the development of hyponatremia in runners, an extensive literature has accumulated over the past 20 years documenting that the primary cause is water intake in excess of sodium loss. The relative importance of water loss and sodium loss depends on the type and duration of the race, weather conditions, and the rates of these losses (as well as the rate of replacement of water and sodium), which may vary widely among athletes.[3-5] 1. Williams H, Arnold HD. The effects of violent and prolonged muscular exercise upon the heart. Phila Med J 1899;3:1233-9 2. Currens JH, White PD. 
Half a century of running: clinical, physiologic and autopsy findings in the case of Clarence DeMar ("Mr. Marathon"). Nord Hyg Tidskr 1961;265:988-993 3. Derderian T. The Boston Marathon: the first century of the world's premier running event. Champaign, Ill.: Human Kinetics, 1996 4. Casa D. Proper hydration for distance running -- identifying individual fluid needs. Indianapolis: USA Track & Field, 2003. 5. Maughan RJ, Burke LM, Coyle EF, eds. Food, nutrition and sports performance II: the International Olympic Committee consensus on sports nutrition. New York: Taylor & Francis Group/Routledge, 2004 New Engl. J. Med. http://www.nejm.org -------------------------------- Related Material: ANTHROPOLOGY: ENDURANCE RUNNING AND HUMAN EVOLUTION The following points are made by D.M. Bramble and D.E. Lieberman (Nature 2004 432:345): 1) Most research on the evolution of human locomotion has focused on walking. There are a few indications that the earliest-known hominids were bipeds[1,2], and there is abundant fossil evidence that australopithecines habitually walked by at least 4.4 million years (Myr) ago[3,4]. Many researchers interpret the evolution of an essentially modern human-like body shape, first apparent in early Homo erectus, as evidence for improved walking performance in more open habitats that came at the expense of retained adaptations in the australopithecine postcranium for arboreal locomotion [5]. 2) Although the biomechanics of running, the other human gait, is well studied, only a few researchers have considered whether running was a mode of locomotion that influenced human evolution. This lack of attention is largely because humans are mediocre runners in several respects. Even elite human sprinters are comparatively slow, capable of sustaining maximum speeds of only 10.2 m/s for less than 15 s. In contrast, mammalian cursorial specialists such as horses, greyhounds, and pronghorn antelopes can maintain maximum galloping speeds of 15-20 m/s for several minutes. Moreover, running is more costly for humans than for most mammals, demanding roughly twice as much metabolic energy per distance travelled than is typical for a mammal of equal body mass. Finally, human runners are less manoeuvrable and lack many structural modifications characteristic of most quadrupedal cursors such as elongate digitigrade feet and short proximal limb segments. 3) However, although humans are comparatively poor sprinters, they also engage in a different type of running, endurance running (ER), defined as running many kilometers over extended time periods using aerobic metabolism. Although not extensively studied in non-humans, ER is unique to humans among primates, and uncommon among quadrupedal mammals other than social carnivores (such as dogs and hyenas) and migratory ungulates (such as wildebeest and horses). 4) In summary: Striding bipedalism is a key derived behavior of hominids that possibly originated soon after the divergence of the chimpanzee and human lineages. Although bipedal gaits include walking and running, running is generally considered to have played no major role in human evolution because humans, like apes, are poor sprinters compared to most quadrupeds. The authors assess how well humans perform at sustained long-distance running, and review the physiological and anatomical bases of endurance running capabilities in humans and other mammals. 
Judged by several criteria, humans perform remarkably well at endurance running, thanks to a diverse array of features, many of which leave traces in the skeleton. The fossil evidence of these features suggests that endurance running is a derived capability of the genus Homo, originating about 2 million years ago, and may have been instrumental in the evolution of the human body form. References (abridged): 1. Haile-Selassie, Y. Late Miocene hominids from the Middle Awash, Ethiopia. Nature 412, 178-181 (2001) 2. Galik, Y. et al. External and internal morphology of the BAR 1002'00 Orrorin tugenensis femur. Science 305, 1450-1453 (2004) 3. Ward, C. V. Interpreting the posture and locomotion of Australopithecus afarensis: where do we stand? Yb. Physical Anthropol. 35, 185-215 (2002) 4. Aiello, L. & Dean, M. C. An Introduction to Human Evolutionary Anatomy (Academic, London, 1990) 5. Rose, M. D. in Origine(s) de la Bipédie chez les Hominidés (eds Coppens, Y. & Senut, B.) 37-49 (CNRS, Paris, 1991) Nature http://www.nature.com/nature -------------------------------- Related Material: MEDICAL BIOLOGY: DOPING AND ATHLETIC PERFORMANCE The following points are made by Timothy D. Noakes (New Engl. J. Med. 2004 351:847): 1) Is it possible for the "natural" athlete who competes without chemical assistance to achieve record-breaking performances in sports requiring strength, power, speed, or endurance? Because doping tests are infrequently positive in international sports, it has been widely believed that the answer is yes -- and that few athletes competing in major sporting events, including the Olympic Games and the Tour de France, use performance-enhancing drugs. But multiple sources of evidence, including personal testimony(1,2) and an ever-increasing incidence of doping scandals, suggest the opposite: that widespread use of performance-enhancing drugs has fundamentally distorted the upper range of human athletic performance.(1,3-5) Unfortunately, a global code of silence has kept the problem hidden from public view.(4,5) 2) Drugs have been in sports for a long time. In the earliest modern Olympic Games, the drugs of choice included strychnine, heroin, cocaine, and morphine,(4) which were probably more harmful than helpful. The first "effective" performance-enhancing drugs, the amphetamines, which were used widely by soldiers in the Second World War, crossed over into sports in the early 1950s.(4) These drugs -- nicknamed "la bomba" by Italian cyclists and "atoom" by Dutch cyclists -- minimize the uncomfortable sensations of fatigue during exercise. By setting a safe upper limit to the body's performance at peak exertion, these unpleasant sensations prevent bodily harm. The artificial manipulation of this limit by drugs places athletes at risk for uncontrolled overexertion. 3) The first cases of fatal heatstroke in athletes using atoom were reported in the 1960s. In the 1967 Tour de France, elite British cyclist Tom Simpson died on the steep ascent of Mont Ventoux, allegedly because of amphetamine abuse. The precise extent to which amphetamines enhance athletic performance is unknown, since, as with all performance-enhancing drugs, there are few modern studies quantifying their effects. The convenient absence of such information represents further evidence of a hidden problem. A popular opinion is that la bomba can turn the usual Tour de France domestique, or support rider, into a stage winner.
4) Since amphetamines must be present in the body to be effective, the sole method of avoiding the detection of their use during competition is to substitute a clean urine sample for the doped specimen. A multitude of innovative techniques have been developed to accomplish this swap.(2) Cortisone, a potent but legal performance-enhancing drug used to dampen inflammation, also reduces the discomfort of heavy daily training and competition and lifts the mood. It is also widely abused by professional cyclists.(2) 5) Testosterone propionate (Testoviron), the prototype of the anabolic steroids, the second major group of potent performance-enhancing drugs, was synthesized in 1936 and appeared in sport sometime after the 1948 Olympic Games. The subsequent synthesis of methandrostenolone (Dianabol) in the USin 1958 and oral chlordehydromethyltestosterone (Turinabol) in East Germany after 1966 marked the beginning of the "virilization" of modern sport.(4) By increasing muscle size, these drugs increase strength, power, and sprinting speed; they also alter mood and speed the rate of recovery, permitting more intensive training and hence superior training adaptation. For maximal effect, anabolic steroids are used in combination with other hormones that have similar activity, including insulin, growth hormone, and insulin-like growth factor. They have multiple side effects, some of which are serious, including premature death. References: 1. Reiterer W. Positive -- an Australian Olympian reveals the inside story of drugs and sport. Sydney: Pan Macmillan Australia, 2000 2. Voet W. Breaking the chain: drugs and cycling; the true story. Fotheringham W, trans. London: Yellow Jersey, 2001 3. Franke WW, Berendonk B. Hormonal doping and androgenization of athletes: a secret program of the German Democratic Republic government. Clin Chem 1997;43:1262-1279 4. Hoberman JM. Mortal engines: the science of performance and the dehumanization of sport. New York: Free Press, 1992 5. Hoberman JM. How drug testing fails: the politics of doping control. In: Wilson W, Derse E, eds. Doping in elite sport: the politics of drugs in the Olympic movement. Champaign, Ill.: Human Kinetics, 2001:241-70 New Engl. J. Med. http://www.nejm.org From shovland at mindspring.com Sun Jul 10 19:16:12 2005 From: shovland at mindspring.com (shovland at mindspring.com) Date: Sun, 10 Jul 2005 21:16:12 +0200 (GMT+02:00) Subject: [Paleopsych] Iraq war Message-ID: <19428674.1121022972832.JavaMail.root@wamui-thinleaf.atl.sa.earthlink.net> Killing civilians may make perfect sense. After all, the politicians can't keep the war going if they know we are against them. Even if our elections are a sham, we still do them, and there is always the possibility of wild cards. -----Original Message----- From: Michael Christopher Sent: Jul 9, 2005 8:41 PM To: paleopsych at paleopsych.org Subject: [Paleopsych] Iraq war >>If these bombers are Iraqi nationalists, there is hope that our departure from Iraq will cool the conflict. The thought of regional terrorists in the Middle East is indeed terrifying.<< --It's obviously a mixture, which brings up the question: will foreign terrorists transform Iraqi nationalists who would otherwise fight conventionally into terrorists who believe killing civilians is legitimate warfare? In the current situation, anyone who proves himself to be capable of killing US troops is likely to gain respect, but every bombing that targets Iraqis will most likely decrease respect for foreign terrorists. 
Has the number of Iraqi suicide bombers (targeting civilians rather than soldiers) increased since Zarqawi set up shop? Conventional fighters who target soldiers would likely taper off their efforts when US troops leave and Iraqis take up the job of security, allowing Sunnis and borderline resistance members to take part in government without losing face (many would feel that cooperating with a US-sponsored Iraqi government would be humiliating and an admission of defeat... removing US troops would remove that motive as well). But foreign terrorists aren't going to leave unless Iraqis as a people stand against them. That becomes more likely every time Iraqis are targeted by terrorists, so the logical strategy for the US is to stay until Iraqis are known to be solidly against the presence of foreign terrorists and Iraqi nationalists begin to separate themselves from foreign terrorists by denouncing attacks on civilians. Any Iraqi resistance fighter who feels it's dishonorable to kill civilians is going to be less and less comfortable being associated with Zarqawi's terrorists, more likely to develop rifts with foreign organizers and more likely to leak information leading to the capture of Zarqawi and other terrorist leaders. That pretty much sets up a series of events leading naturally to the war's end. Along with a global denunciation of terrorism by mainstream Muslims and renewed focus on the Israeli-Palestinian partition process, the general trend should be positive, as long as the US and Iran don't provoke one another into another war in the meantime. Bush will watch to see if Iranians stand up to their hardliners or bide their time thinking the US will intervene. With nobody in either party willing to impose a draft, an invasion of Iran is unlikely, leaving air strikes by the US or Israel as the only option. Would an air strike against Iran's government targets or nuclear facilities produce a larger pool of global terrorists seeking nuclear or biological weapons? Or would Iranian moderates take over immediately? Or both? One assumption that should be eliminated is that there is some fixed number of terrorists, and that it's a good thing to draw them all into Iraq to fight them on their own ground. That logic makes sense at first glance, but it's based on an assumption that's not safe to make. But regardless of how one analyzes the overall situation, the immediate solution to Iraq is for the US to stay until there is good reason to believe Iraqi moderates can establish security and prevent foreign terrorists from gaining influence and using Iraq as a training ground. Once Zarqawi is captured or killed and Iraqi resistance members begin shunning foreign terrorists, it will be a lot easier for the US to leave, and attention can be focused on Iran. I don't really trust the current administration to handle it gracefully, but we have what we have. Hopefully enough systems thinkers will focus on geopolitics to provide a counterweight to the gung-ho mentality that will want to rely on forceful moves that may backfire in the long term. We've tended to rely on those moves in the past, doing whatever seemed strongest in the short term, like a beginner chess player who takes every piece that's offered. The end result is messy. Strong moves made hastily can add up to a weak foundation. Michael ____________________________________________________ Sell on Yahoo! Auctions ? no fees. Bid on great items. 
http://auctions.yahoo.com/ _______________________________________________ paleopsych mailing list paleopsych at paleopsych.org http://lists.paleopsych.org/mailman/listinfo/paleopsych From christian.rauh at uconn.edu Mon Jul 11 00:19:27 2005 From: christian.rauh at uconn.edu (Christian Rauh) Date: Sun, 10 Jul 2005 20:19:27 -0400 Subject: [Paleopsych] Iraq war In-Reply-To: <19428674.1121022972832.JavaMail.root@wamui-thinleaf.atl.sa.earthlink.net> References: <19428674.1121022972832.JavaMail.root@wamui-thinleaf.atl.sa.earthlink.net> Message-ID: <42D1BB0F.6070908@uconn.edu> I was trying not to write in this thread but couldn't help myself. Following Steve's comment "killing civilians may make perfect sense". I want to add that killing civilians is not an "evil" strategy that resistance fighters would not use had they not been infiltrated by terrorists. The strategy makes sense. Terrorism is a common strategy when confronting a much larger and more powerful adversary in war. It has always been used; the change today is the suicide component, apparently. What makes no sense is the Iraqi resistance trying to go against the US military or US-equipped Iraqi forces in "conventional" fighting. The resistance would be crushed easily. If civilians are the only targets they can reach then that's what they will hit. Of course, the threshold for that level of warfare is different for different groups but the rationale is always the same. In the past many terrorists were known as freedom fighters; oops, they still are called that. Yours, Christian shovland at mindspring.com wrote: > Killing civilians may make perfect sense. > > After all, the politicians can't keep the war > going if they know we are against them. > > Even if our elections are a sham, we still > do them, and there is always the possibility > of wild cards. > > > -----Original Message----- > From: Michael Christopher > Sent: Jul 9, 2005 8:41 PM > To: paleopsych at paleopsych.org > Subject: [Paleopsych] Iraq war > > > >>>If these bombers are Iraqi nationalists, there is > > hope that our departure from Iraq will cool the > conflict. The thought of regional terrorists in the > Middle East is indeed terrifying.<< > > --It's obviously a mixture, which brings up the > question: will foreign terrorists transform Iraqi > nationalists who would otherwise fight conventionally > into terrorists who believe killing civilians is > legitimate warfare? In the current situation, anyone > who proves himself to be capable of killing US troops > is likely to gain respect, but every bombing that > targets Iraqis will most likely decrease respect for > foreign terrorists. > > Has the number of Iraqi suicide bombers (targeting > civilians rather than soldiers) increased since > Zarqawi set up shop? Conventional fighters who target > soldiers would likely taper off their efforts when US > troops leave and Iraqis take up the job of security, > allowing Sunnis and borderline resistance members to > take part in government without losing face (many > would feel that cooperating with a US-sponsored Iraqi > government would be humiliating and an admission of > defeat... removing US troops would remove that motive > as well). But foreign terrorists aren't going to leave > unless Iraqis as a people stand against them.
That > becomes more likely every time Iraqis are targeted by > terrorists, so the logical strategy for the US is to > stay until Iraqis are known to be solidly against the > presence of foreign terrorists and Iraqi nationalists > begin to separate themselves from foreign terrorists > by denouncing attacks on civilians. Any Iraqi > resistance fighter who feels it's dishonorable to kill > civilians is going to be less and less comfortable > being associated with Zarqawi's terrorists, more > likely to develop rifts with foreign organizers and > more likely to leak information leading to the capture > of Zarqawi and other terrorist leaders. > > That pretty much sets up a series of events leading > naturally to the war's end. Along with a global > denunciation of terrorism by mainstream Muslims and > renewed focus on the Israeli-Palestinian partition > process, the general trend should be positive, as long > as the US and Iran don't provoke one another into > another war in the meantime. Bush will watch to see if > Iranians stand up to their hardliners or bide t