From shovland at mindspring.com Sun May 1 15:42:36 2005
From: shovland at mindspring.com (Steve Hovland)
Date: Sun, 1 May 2005 08:42:36 -0700
Subject: [Paleopsych] Finding may unlock secret to nerve growth factor
Message-ID: <01C54E29.BA756A90.shovland@mindspring.com>

http://news-service.stanford.edu/news/medical/2004/may26/nerve.html

Researchers' discovery could pave way for development of drugs that alter nerve growth

By MITZI BAKER

Cells communicate through an intricate system of locks and keys -- receptors on cell surfaces and ligand molecules -- that allows the transmission of very specific information across their membranes. Researchers at the School of Medicine have just discovered an unexpected new type of lock-and-key mechanism that provides a critical step in reproducing nerve growth factor, which is crucial to all aspects of nerve formation and function. The information revealed can now be applied directly to designing drugs to treat neurodegenerative conditions such as Alzheimer's disease or spinal cord injuries.

Nerve growth factor, or NGF, is one of the most important molecules in the nervous system, said Chris Garcia, PhD, assistant professor of microbiology and immunology and of structural biology. NGF and its family members, called neurotrophins, control not only the development of the nervous system in the embryo but also the maintenance of nervous tissue and neural transmission in the adult.

[Photo caption: Researchers Xiao-lin He (left) and Chris Garcia sit in front of a computer screen showing an electron density map, created by X-ray imaging, that helped derive the structure of nerve growth factor. Their research "unlocks" an important step in reproducing the growth factor, which plays a critical role in nerve formation and function. Photo: Mitzi Baker]

NGF plays a role in many nervous system problems, such as neural degeneration in aging and Alzheimer's disease, and in neural regeneration after spinal cord injuries and other damage to neural tissue.
It also may factor into mood and other psychological disorders.

NGF's fundamental importance in the nervous system, Garcia said, made it a compelling puzzle to try to solve in his lab, which focuses broadly on how information is communicated across membranes using receptors and ligands, the locks and keys of molecular biology. Interestingly, he added, one of the receptors for NGF is also used by the rabies virus to gain entry into cells, which stimulated interest in his lab, where one focus is on molecules involved in infection and immunity.

"A lot of companies have tried for many years to make a drug out of NGF and it just hasn't worked very well because basically no one has really known what the mechanisms are for receptor activation," he said. "I think the significance of our result is that now we have an atomic model of this system that begins to clarify a lot of the confusing functional data."

Garcia and a postdoctoral scholar in his lab, Xiao-lin He, PhD, published their findings on the three-dimensional structure of NGF bound to its receptor earlier this month in Science.

The main question that hadn't been answered until now is how a molecule with two symmetrical parts like NGF could simultaneously activate the two different cell-surface receptors -- called p75 and Trk -- required for its signal. The question that had been a conundrum for researchers in neurobiology for 15 years was: how does NGF specifically select one of each type of receptor instead of two of the same?

"No matter what we found, we knew that it was going to be new and unprecedented," said Garcia.

In a mechanism that could be right out of the world of "Harry Potter," the key inserted into one of the locks morphs such that the shape of the combined parts then fits another type of lock. Garcia and He discerned this unusual feature of the interaction by using X-ray imaging techniques confirmed by biochemical methods.
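The sequential binding described above -- symmetric NGF binds one p75 chain, changes shape so a second p75 cannot fit, and the deformed complex then accepts Trk -- can be sketched as a toy state model. This is purely my illustration of the article's narrative; the class and method names are hypothetical, not from the paper.

```python
# Toy model (illustrative only) of the NGF binding sequence described above.

class NGFDimer:
    """A symmetric NGF dimer that binds receptors sequentially."""

    def __init__(self):
        self.bound = []          # receptors currently attached
        self.deformed = False    # set after the first p75 binds

    def bind(self, receptor: str) -> bool:
        """Try to attach a receptor; return True on success."""
        if receptor == "p75" and not self.deformed:
            self.bound.append("p75")
            self.deformed = True          # asymmetric conformational change
            return True
        if receptor == "Trk" and self.deformed and "Trk" not in self.bound:
            self.bound.append("Trk")      # completes the three-way complex
            return True
        return False

ngf = NGFDimer()
print(ngf.bind("p75"))   # True  - first p75 attaches
print(ngf.bind("p75"))   # False - shape change excludes a second p75
print(ngf.bind("Trk"))   # True  - Trk binds on the other side
print(ngf.bound)         # ['p75', 'Trk']
```

The point of the sketch is the asymmetry: once one p75 is bound, the state change rules out a second identical receptor while enabling the second, different one.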
"The result was a complete surprise," said He, who has been studying the NGF signaling system for about a year. He explained that since NGF is composed of two identical chains of protein, it would be logical for it to bind the two identical chains of the p75 receptor. But it only attaches to one chain. The researchers found that after NGF connects with one of the p75 protein chains, it changes shape such that a second receptor of the same kind cannot fit. What that does, said Garcia, is allow the other NGF receptor, Trk, to bind on the other side and form a three-way signaling complex.

Garcia said neurobiology researchers are also surprised by the finding, which has caused controversy about its meaning. Garcia and He's detailed structural data can now be used by others in the field as a template for further experiments. "Our data is going to stimulate a lot of science to figure out what its significance is," Garcia said.

In terms of the straightforward goal of creating a drug that simulates or blocks the actions of NGF binding to its receptors, Garcia said, "It's all there. We've got it. What a drug company needs is in that structure right now and they don't need to know anything else."

This research is supported by a fellowship from the Paralyzed Veterans of America, Spinal Cord Research Foundation; the American Heart Association; the Christopher Reeve Paralysis Foundation; the Keck Foundation; and the National Institutes of Health.

From shovland at mindspring.com Sun May 1 16:37:08 2005
From: shovland at mindspring.com (Steve Hovland)
Date: Sun, 1 May 2005 09:37:08 -0700
Subject: [Paleopsych] Hormones of the Hypothalamus
Message-ID: <01C54E31.5899B6C0.shovland@mindspring.com>

http://users.rcn.com/jkimball.ma.ultranet/BiologyPages/H/Hypothalamus.html

The hypothalamus is a region of the brain. It secretes a number of hormones.
- Thyrotropin-releasing hormone (TRH)
- Gonadotropin-releasing hormone (GnRH)
- Growth hormone-releasing hormone (GHRH)
- Corticotropin-releasing hormone (CRH)
- Somatostatin
- Dopamine

All of these are released into the blood and travel immediately to the anterior lobe of the pituitary, where they exert their effects. All of them are released in periodic spurts. In fact, replacement hormone therapy with these hormones does not work unless the replacements are also given in spurts.

Two other hypothalamic hormones, antidiuretic hormone (ADH) and oxytocin, travel in neurons to the posterior lobe of the pituitary, where they are released into the circulation.

Thyrotropin-releasing hormone (TRH)

TRH is a tripeptide (Glu-His-Pro). When it reaches the anterior lobe of the pituitary, it stimulates the release there of:
- thyroid-stimulating hormone (TSH)
- prolactin (PRL)

Gonadotropin-releasing hormone (GnRH)

GnRH is a peptide of 10 amino acids. Its secretion at the onset of puberty triggers sexual development.

- Primary effects: FSH and LH up.
- Secondary effects: estrogen and progesterone up (in females); testosterone up (in males).

After puberty, a hyposecretion of GnRH may result from:
- intense physical training
- anorexia nervosa

Synthetic agonists of GnRH are used to treat:
- inherited or acquired deficiencies of GnRH secretion
- prostate cancer. In this case, high levels of the GnRH agonist reduce the number of GnRH receptors in the pituitary, which reduces its secretion of FSH and LH, which reduces the secretion of testosterone, which reduces the stimulation of the cells of the prostate.

Growth hormone-releasing hormone (GHRH)

GHRH is a mixture of two peptides, one containing 40 amino acids, the other 44.
As its name indicates, GHRH stimulates cells in the anterior lobe of the pituitary to secrete growth hormone (GH).

Corticotropin-releasing hormone (CRH)

CRH is a peptide of 41 amino acids. As its name indicates, it acts on cells in the anterior lobe of the pituitary to release adrenocorticotropic hormone (ACTH). CRH is also synthesized by the placenta and seems to determine the duration of pregnancy. It may also play a role in keeping the T cells of the mother from mounting an immune attack against the fetus.

Somatostatin

Somatostatin is a mixture of two peptides, one of 14 amino acids, the other of 28. Somatostatin acts on the anterior lobe of the pituitary to:
- inhibit the release of growth hormone (GH)
- inhibit the release of thyroid-stimulating hormone (TSH)

Somatostatin is also secreted by cells in the pancreas and in the intestine, where it inhibits the secretion of a variety of other hormones.

Dopamine

Dopamine is a derivative of the amino acid tyrosine. Its principal function in the hypothalamus is to inhibit the release of prolactin (PRL) from the anterior lobe of the pituitary.

Antidiuretic hormone (ADH) and Oxytocin

These peptides are released from the posterior lobe of the pituitary and are described in the page devoted to the pituitary.

From shovland at mindspring.com Sun May 1 16:41:06 2005
From: shovland at mindspring.com (Steve Hovland)
Date: Sun, 1 May 2005 09:41:06 -0700
Subject: [Paleopsych] peptide hormones
Message-ID: <01C54E31.E6883F60.shovland@mindspring.com>

Although it might be possible to control all the functions of the brain using only a handful of hormones and neurotransmitters, the body has instead developed a hierarchy of systems of considerably greater complexity.
This is an artifact rather than a necessity, i.e., "it is what it is."

Peptides are long chains, or polymers, of amino acids, which are small molecules with one positive end and one negative end. When they link, this electrostatic energy is converted into a chemical bond.

Peptides undergo secondary-structure transformations in their free state, twisting around themselves to minimize surface tension, an important term in the total surface free energy. This flexibility enables a much larger variety of forms than could be derived from nucleic acid polymers, whose double helix significantly limits their conformational variety. In addition, since they are composed of amino acids, peptides can contain and position highly polar or reactive residues. These reactive portions are normally hydrophilic and are therefore found on the outer portion of the coiled peptide, where they can act most effectively on other entities.

These two facts make proteins ideal as structures for enzymes. The body can fine-tune the structure, and therefore the chemical activity, of enzymes by changing the genetic coding that produces them; and proteins can alter genetic function by regulating DNA transcription, turning genes on and off, and by enzymatically inhibiting or promoting the synthesis of other peptides coded by the DNA.

Other peptides - the immunoglobulins - are responsible for recognizing non-self material (antigens), such as invading microbes, by protuberances on their outer surfaces, and as such are key to the function of the immune system. Finally, and most obviously, proteins comprise a large proportion of the physical structure of the body, as collagen, clathrin, etc.

Some diseases are directly related to mutated forms of proteins, resulting from mutated genes; still others, such as mad cow disease and Alzheimer's, result from improper folding of peptides into non-water-soluble protein deposits - amyloid - whose residues have their hydrophobic regions directed outward.
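The role of charged side chains discussed above -- residues whose polarity determines where they sit in a folded peptide and how the peptide interacts with other molecules -- can be illustrated with a small sketch. This is a deliberate simplification of my own (it ignores terminal groups, histidine, and real pKa values), meant only to show how residue composition sets a peptide's net charge at neutral pH.

```python
# Illustrative sketch: approximate net side-chain charge of a peptide
# at pH ~7 by counting charged residues. Not a real pKa calculation.

POSITIVE = set("KR")   # lysine, arginine carry positive side chains
NEGATIVE = set("DE")   # aspartate, glutamate carry negative side chains

def net_charge(sequence: str) -> int:
    """Count positive minus negative side chains in a one-letter sequence."""
    seq = sequence.upper()
    return sum(aa in POSITIVE for aa in seq) - sum(aa in NEGATIVE for aa in seq)

# Example with a hypothetical sequence: 4 positive (K, R, R, K)
# minus 2 negative (D, E) gives a net charge of +2.
print(net_charge("KDEARRK"))  # 2
```

In a real protein these charged, hydrophilic residues tend to end up on the outside of the fold, which is exactly the property the paragraph above attributes to enzymes' reactive portions.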
http://dwb.unl.edu/Teacher/NSF/C10/C10Links/www.pharmcentral.com/peptides.htm

From shovland at mindspring.com Mon May 2 00:23:29 2005
From: shovland at mindspring.com (Steve Hovland)
Date: Sun, 1 May 2005 17:23:29 -0700
Subject: [Paleopsych] A rich resource on cellular receptors
Message-ID: <01C54E72.7EBCD530.shovland@mindspring.com>

http://www.answers.com/main/ntquery;jsessionid=3ss6ltts54wlo?method=4&dsid=2222&dekey=Receptor+%28biochemistry%29&gwp=8&curtab=2222_1&sbid=lc03b

receptor (biochemistry)

In biochemistry, a receptor is a protein on the cell membrane or within the cytoplasm that binds to a specific factor (a ligand), such as a neurotransmitter, hormone, or other substance, and initiates the cellular response to the ligand. As all receptors are proteins, their structure is encoded in the DNA. Most receptor genes contain a short sequence that signals to the cell whether the protein needs to be transported to the cell membrane or is to remain in the cytoplasm.

Overview

Many genetic disorders involve hereditary defects in receptor genes. Often, it is hard to determine whether the receptor is nonfunctional or the hormone is produced at a decreased level; this gives rise to the "pseudo-hypo-" group of endocrine disorders, where there appears to be a decreased hormonal level while in fact it is the receptor that is not responding sufficiently to the hormone.

From checker at panix.com Mon May 2 16:20:37 2005
From: checker at panix.com (Premise Checker)
Date: Mon, 2 May 2005 12:20:37 -0400 (EDT)
Subject: [Paleopsych] BH: Happiness Is Good Biological Functioning
Message-ID:

Happiness Is Good Biological Functioning
http://www.betterhumans.com/Print/index.aspx?ArticleID=2005-04-18-3
Betterhumans Staff 4/18/2005 4:04 PM

[Photo caption: Smiling Girl. Credit: Wolfgang Lienbacher. Healthy smile: Happier people are healthier people, suggests a new study]

Happiness, apparently, is good biological functioning.
So suggests a study by researchers from [8]University College London, who found that happier people have several markers of a healthy body, such as those relating to the cardiovascular system and those controlling hormone levels.

"It has been suggested that positive affective states are protective, but the pathways through which such effects might be mediated are poorly understood," the researchers say. "Here we show that positive affect in middle-aged men and women is associated with reduced neuroendocrine, inflammatory, and cardiovascular activity."

Key systems

In 200 middle-aged Londoners, [9]Andrew Steptoe and colleagues found that participants who reported more everyday happiness had better biological function in a few key systems. Happier people, for example, had lower levels of the stress hormone [10]cortisol, which has been linked to such conditions as type 2 diabetes and hypertension. They also had lower stress responses in plasma [11]fibrinogen levels; in high concentrations, fibrinogen can indicate a risk of coronary heart disease. Happy men also had lower heart rates over the day and evening, suggesting that they had good cardiovascular health.

Steptoe and colleagues found that their results were independent of psychological distress, implying that good well-being is directly linked to health-related biological processes. The research is reported in the [12]Proceedings of the National Academy of Sciences ([13]read abstract).

References
8. http://www.ucl.ac.uk/
9. http://www.ucl.ac.uk/epidemiology/staff/steptoe.html
10. http://www.wikipedia.org/wiki/cortisol
11. http://www.wikipedia.org/wiki/fibrinogen
12. http://www.pnas.org/
13.
http://www.pnas.org/cgi/doi/10.1073/pnas.0409174102

From checker at panix.com Mon May 2 16:20:52 2005
From: checker at panix.com (Premise Checker)
Date: Mon, 2 May 2005 12:20:52 -0400 (EDT)
Subject: [Paleopsych] WSJ: Divorce increasing at 20%/year in China
Message-ID:

Divorce increasing at 20%/year in China
http://online.wsj.com/article/0,,SB111333549406604929,00.html?mod=todays_us_page_one
By KATHY CHEN
Staff Reporter of THE WALL STREET JOURNAL
April 13, 2005; Page A1

NANCHANG, China -- China's success is tearing the Fan family apart.

Qun, a successful 39-year-old entrepreneur in Beijing, bought his parents a new apartment and takes them sightseeing in other Chinese cities. But he feels he has little in common with them any more and less to say to them. His younger brother Jun, 37, is reeling from a divorce after his wife left him to pursue opportunities in southern China. He is unemployed after a failed business venture and has been living with his parents for more than a year.

At a loss over how to deal with his family's situation, patriarch Fan Hanlin often retreats to his bedroom, usurped in his role of respected elder. His older son's social standing outstrips his, and his younger son ignores his advice. Mr. Fan's wife escapes by playing mah-jongg each afternoon with friends. "Everyone is unhappy," says the elder Mr. Fan, 70.

For thousands of years, Chinese have made the family paramount, with generations often living together and younger members deferring to their elders. Fathers were the head of the household. But opportunities born of China's move to a market-based economy over the past two dozen years are creating new wealth, new hierarchies and new strains. The scramble to keep up with neighbors, or one's own relatives, is testing family ties, contributing to a rise in social problems.

Some 1.6 million couples divorced in China last year, a 21% jump over the year before, according to China's Ministry of Civil Affairs.
In Beijing, there were 800 reported cases of domestic violence in 2004, double the number the previous year, according to the city's Bureau of Justice.

Younger Chinese are opting for privacy over extended-family living and buying parents their own apartments. Others are putting their aging parents in nursing homes, as convenience trumps filial piety, an unheard-of violation of Confucian ethics. Over the past decade, the number of nursing-home residents has increased 40% to more than one million.

In their room at the Beijing Fifth Social Welfare Institution, a state-run nursing home, one elderly couple explained why they live there. The Wangs, who declined to be identified by their full names, say they moved to Beijing after retiring to be closer to their daughter. But they found it too lonely living in the high-rise apartment she bought them. "We didn't want to live with her because...in a market economy, competition is very fierce and children have no extra time or energy to take care of their parents," says Ms. Wang, 75.

The Wangs say they like it at the institution, because there is more socializing. They note their daughter, a securities-company executive, has cut her once weekly visits to holidays and phone calls. "She has no time," says Ms. Wang.

Parents of young children are leaving their offspring in the care of relatives for years, as they seek better jobs far from home. Millions of peasants have left their rural homes for work in cities, while some professionals are going abroad. The trend is spawning China's own generation of latchkey children, numbering in the tens of millions.

Last September, a 16-year-old, whose mother had left their village to work in the city, and two friends robbed several dozen students at knifepoint in the city of Daye, say local authorities. The teen was among several dozen youngsters from a nearby village with at least one parent working elsewhere. Many had dropped out of school.
"These kids are like land mines who could explode any moment," says village chief Hu Yunyan.

One of last year's most-viewed television series in China was "Chinese-Style Divorce." It focuses on a doctor whose marriage unravels after he goes to work at a higher-paying hospital, backed by foreigners, to satisfy his wife's demands for a better life. The show has spawned a best-selling book and pages of commentary and columns in newspapers. The show's producer, Zhu Zhibing, says he came up with the idea for the series after observing how the race to get ahead in China is eroding family relationships. "Everyone is focused on making money," he says. "It destabilizes society."

In the Fan household, life followed traditional guidelines when the children were growing up. Mr. Fan, head of the household, taught physics at a high school and his wife, Luo Shuzheng, was an engineer in a state-run factory. They had three children -- two boys and a girl -- who excelled at school, and tested into prestigious universities.

Like other Chinese children, the Fans were expected to obey their father without question. "We required [the children] to sit still and didn't let them fool around," Mr. Fan says. Usually, just raising his voice was enough, but Mr. Fan says sometimes he hit the boys. He still recollects with pride how, after he hit his younger son in an effort to improve his study habits, the boy scored so well on college-entrance exams that he ranked among the top in Nanchang County.

The Fans were a tight-knit clan. Qun, the oldest, looked after his two younger siblings while their parents were at work. Many nights, their mother stayed up mending clothes and making cloth shoes for the children. Sundays were a rush of shopping, cooking and housework. Neither Mr. Fan's nor his wife's parents lived with them, but the couple set aside part of their small income each month to give to their parents.

Like generations of Chinese, Mr. Fan and his wife, Ms.
Luo, envisioned a life driven by filial duties for their own children: study hard, find a stable job, get married, produce offspring (preferably male) and support their parents in old age.

But in the late 1980s and early 1990s, when the Fan children were graduating from college, China's economic reforms were opening up all sorts of new opportunities, in job choices, lifestyles and ways to get rich. Qun jumped at the chance to do something different. Bucking the trend among college graduates at the time to take a state-sector job, he opted for a marketing position in a joint-venture company of SmithKline Beecham, now GlaxoSmithKline PLC. He learned about the pharmaceutical business and Western marketing techniques, befriended American colleagues and helped the company successfully launch its Contac brand of cold medicine in China.

In 1996, he started his own consulting company, advising drug companies on doing business in China. Today, he says, his company employs a dozen people, including his wife, Li Chunhui. The business generates annual revenue of more than $1.2 million, he says, and aftertax profit of 15% to 20% of revenue.

As Qun prospered, the distance widened between him and his family in Nanchang, a city of 4.5 million nearly 800 miles from his new home in Beijing. He shares few details of his life in the city with his parents. They don't understand his business, he says, and wouldn't necessarily approve of his lifestyle. He and his wife each drive their own car, dine out frequently and retain two housekeepers, including one just to look after their pet Pekinese. This year, they moved to a two-story house in a wealthy suburb. His parents don't have a car or a housekeeper, rarely eat out and take pride in saving.

"If my parents saw me spending this kind of money, I'd be embarrassed," Qun says, as he and his wife grab a coffee at a Starbucks shop on the way home from the office.
He notes the few hundred dollars they spend each month on such luxuries as fresh beef, imported cookies, dog sweaters and a sitter for their dog is more than his parents' monthly income.

In the past, Chinese families revolved around fathers and sons. But like other younger-generation Chinese, Qun views his first allegiance as being to his "xiao jiating," or small family unit centered around his marriage. The couple have resisted suggestions by Qun's parents for them to have a baby. Instead, Qun's wife insists their Pekinese dog, "Wrong Wrong," should be recognized as a "grandson," proclaiming the dog's surname to be "Fan." Qun says this offended his mother, and she once complained that he should find a wife "who listens more."

In traditional China, a son would have quietly accepted such criticism. But Qun says he told his mother that if she had a problem with his wife, she should tell her directly. "My main family is with Linda," he says, using his wife's English name. Like many of today's middle-class Chinese, the couple also use Western names, which is seen in certain circles as a sign of being modern and sophisticated. Qun also goes by the name of James.

Qun's mother says she doesn't care how they raise their dog, but "you still need to have a child. How will you get by when you are old? Dogs can't take care of you."

These days, Qun sees his parents only a few times a year. Conversations tend to be about Nanchang friends or family, since his parents have different views on other subjects. "They always complain about how the government is unfair and society is unjust," says Qun. "I try to influence them...but I think they'll never catch up to my way of thinking."

Min, the youngest Fan sibling, also resisted her parents' traditional expectations. Ms. Luo says she hoped her daughter, after graduation from college, would return to Nanchang.
Instead, Min settled in the southern city of Guangzhou, where she is married, has a child and works at a major Chinese insurance company.

Jun, the middle sibling, is still figuring out his place in the new China. Growing up, he says, the constant message from his parents and school was: "If you just listen, you'll be successful and society will take care of you." After college, he accepted a job at the local subsidiary of the state-run China National Machinery & Equipment Import & Export Corp. Working as a trader, he exported engines to other Asian countries and the U.S. and earned more than $10,000 in bonuses, he says. But after more than a decade on the job, his salary had changed little, totaling about $200 a month.

In 1993, Jun married a fellow office worker, Zhu Yifang. Their employer provided them with a small apartment, and they had a son in 1995. After the baby was born, Jun's mother spent long stretches of time living with the couple, a traditional Chinese practice. But Ms. Zhu says she resented her mother-in-law's presence, which she regarded as interference. "It would have been better if the older generation didn't live with us," Ms. Zhu says. "But I couldn't refuse."

Unlike wives in pre-reform China, Ms. Zhu could walk away, with more freedom and job opportunities. In 1999, she went to southern China to work as the sales agent for a construction-material company, leaving her husband to care for their son, then 4 years old. In 2000, the couple divorced. Today, Ms. Zhu lives in Nanchang with a boyfriend and earns more than $500 a month, she says, teaching English at a university and running her own English class.

After the divorce, in 2001, Qun offered his younger brother a job at his company in Beijing marketing a vitamin supplement and overseeing a handful of employees. Jun quit his state-sector job and accepted. But he felt uncomfortable leaving his son in his parents' care, he says. He couldn't get used to Beijing or his new job.
He had a hard time persuading retailers to buy the vitamin supplement, and after a year the venture had lost more than $60,000. He says the company didn't spend enough to promote the product. Qun says his younger brother "approached the job like he was still at a state-run company...He got up in the morning, drank a cup of tea, and then did only what I told him to." He says his brother "often complains and finds excuses....We live in different worlds."

Jun says that by his older brother's standards, "I haven't succeeded...but the goals he chooses are different from mine." He thinks his brother "doesn't necessarily like what he does, but he wants to earn money." His own goal in life, Jun says, is first to be a good father.

Jun returned home and last year moved in with his parents. That is a reversal of Chinese tradition, in which grown offspring typically provide for their parents. Sitting on the apartment patio on a recent day, Jun sipped tea from a beer mug and pondered his future. "I haven't thought through a lot of things, like how to raise my kid, how to be a model parent and how to live with my parents," Jun says. "I'm just considering the question, 'How successful should I be?' People drive a [Mercedes] Benz; I don't have a car."

Mr. Fan and Jun often squabble over how to raise Jun's son, now 9 years old. "I tell [Jun] his son should go to sleep at 9 p.m. or he'll be tired at school. But I talk and no one listens," says Mr. Fan. Jun says that his father "has lived this long, but doesn't know what family is. You need to show love to your kid, but [the elder Mr. Fan] doesn't express his emotions."

After initially rejecting his brother's suggestion that he look for a job outside of Nanchang, Jun recently had a change of heart. He says he plans to visit Shanghai to explore an opportunity to work for a trading company there. Economic changes have given people in China more money, but they are also causing "more pressure," Jun says.
"Some contradictions always existed in our family," he says, "but when life was simple, we just lived with them."

----
Cui Rong and Qiu Haixu contributed to this article.
Write to Kathy Chen at kathy.chen at wsj.com

http://online.wsj.com/article/0,,SB111333549406604929,00.html

From checker at panix.com Mon May 2 16:21:13 2005
From: checker at panix.com (Premise Checker)
Date: Mon, 2 May 2005 12:21:13 -0400 (EDT)
Subject: [Paleopsych] NYT: 'The World Is Flat': Global Playing Field: More Level, but It Still Has Bumps
Message-ID:

'The World Is Flat': Global Playing Field: More Level, but It Still Has Bumps
http://www.nytimes.com/2005/04/30/books/30stig.html

[Sunday Book Review and first chapter appended. I already posted an NYT Magazine article by Friedman.]

April 30, 2005
BOOKS OF THE TIMES | 'THE WORLD IS FLAT'

Global Playing Field: More Level, but It Still Has Bumps
By JOSEPH E. STIGLITZ

THE WORLD IS FLAT: A Brief History of the 21st Century. By Thomas L. Friedman. 488 pp. Farrar, Straus & Giroux. $27.50.

The world is flat, or at least becoming flatter very quickly, Thomas L. Friedman says in his exciting and very readable account of globalization. In this flat new world, there is a level (or at least more level) playing field in which countries like India and China, long marginalized in the global economy, are able to compete. And while Mr. Friedman, a Pulitzer Prize-winning columnist for The New York Times, celebrates the new vistas opening up for these countries, he describes forcefully the challenges globalization presents for the older industrialized nations -- especially the United States.

America is still the global leader in science and technology, but its dominance is eroding. As Mr.
Friedman points out in "The World Is Flat," Asian countries now produce eight times as many bachelor's degrees in engineering as the United States; the proportion of foreign-born Ph.D.'s in the American science and engineering labor force has risen to 38 percent; and federal financing for research in physical and mathematical sciences and engineering, as a share of gross domestic product, declined by 37 percent from 1970 to 2004.

About a third of "The World Is Flat" is devoted to describing the forces of leveling -- from the fall of the Berlin Wall, which eliminated the ideological divide separating much of the world, to the rise of the Internet and technological changes that have led to new models of production and collaboration, including outsourcing and offshore manufacturing. The rest of the book is devoted to exploring the implications of this flattening, both for the advanced industrial countries and the developing world. In truth, Mr. Friedman's major points would come across more strongly if his 488 pages were edited more tightly. But he provides a compelling case that something big is going on.

I was in Bangalore, India, in January 2004 -- just a month before Mr. Friedman -- visiting Infosys, one of India's new leading high-technology companies. I, too, was bowled over by what I saw: "campuses" more modern than anything I had seen on the West Coast, and business leaders as dynamic and thoughtful as anywhere in the world.

It may be true that fears of outsourcing have been exaggerated: there are only a limited number of radiologists, software programmers and back-office people whose jobs can be performed at a distance. But I side with Mr. Friedman: the integration of some three billion people into the global economy is a big deal. Even if only a limited number of American jobs are lost, the new competition will have striking effects, particularly on the wages of unskilled workers.
While free trade may ultimately make every country better off, not every individual will be better off. There are winners and there are losers; and while, in principle, the winners could compensate the losers, that typically does not happen. Among other things, a flatter world means a less flat America - more inequality. The playing field may be getting more level, but not everyone is equipped to play on it. On that same trip to India, I spent more than half my time in the countryside surrounding Bangalore, where traveling 10 miles was like traveling back 2,000 years. Peasants were farming as their ancestors must have. What has enabled Bangalore to become a high-tech success story is that companies like Infosys have removed themselves from what is going on nearby. They communicate directly by satellite with the United States, and in a place where local newspapers list the number of brownouts the previous day, these companies can have their own sources of power. And while new technologies may close the gap between parts of India and China and the advanced industrial countries, they will also increase the gap between those countries and Africa. Mr. Friedman is right that there are forces flattening the world, but there are other forces making it less flat. At issue is the balance between them. So is the world really much flatter than before? For instance, the new technologies that Mr. Friedman praises as levelers have also given rise to new opportunities for monopolization. Mr. Friedman praises Netscape's leveling role: its browser has really helped to put a world of knowledge and information at each person's doorstep (or computer). But Microsoft was able to use its own market power through control of computer operating systems to effectively replace Netscape with its own browser, Internet Explorer. While Microsoft speaks eloquently of the need to reward innovation, the real rewards are often not reaped by the innovators. 
In addition, the underlying research for major developments like the Internet and Web browsers is expensive. Large, rich countries can pay for it; poor, small ones cannot. Mr. Friedman notes, but does not emphasize as much as he might, the important role played by government in financing such research before allowing private entrepreneurs to bring the actual products to market - and make the profits. American companies have a distinct advantage in benefiting from government-financed research, even though there are crumbs (some quite large) that those around the world can pick up. Meanwhile, the new "rules of the game" that were part of the last round of global trade negotiations - notably intellectual property regulations requiring all countries to adopt American-style patent and copyright laws - are almost surely making the playing field less level. They will make it easier for those who are ahead of the game to maintain their lead. One mark of a great book is that it makes you see things in a new way, and Mr. Friedman certainly succeeds in that goal. The world may not yet be flat, but there is no doubt that there are important forces - some leveling, some the opposite - that are changing its shape in critical ways. And in his provocative account, Mr. Friedman suggests what this brave new world will mean to all of us, in both the developed and the developing worlds. Joseph E. Stiglitz, university professor at Columbia University, won the Nobel Prize in Economics in 2001. ---------------- Sunday Book Review > 'The World Is Flat': The Wealth of Yet More Nations http://www.nytimes.com/2005/05/01/books/review/01ZAKARIA.html May 1, 2005 By FAREED ZAKARIA THE WORLD IS FLAT: A Brief History of the Twenty-First Century. By Thomas L. Friedman. 488 pp. Farrar, Straus & Giroux. $27.50. OVER the past few years, the United States has been obsessed with the Middle East. 
The administration, the news media and the American people have all been focused almost exclusively on the region, and it has seemed that dealing with its problems would define the early decades of the 21st century. ''The war on terror is a struggle that will last for generations,'' Donald Rumsfeld is reported to have said to his associates after 9/11. But could it be that we're focused on the wrong problem? The challenge of Islamic terrorism is real enough, but could it prove to be less durable than it once appeared? There are some signs to suggest this. The combined power of most governments of the world is proving to be a match for any terror group. In addition, several of the governments in the Middle East are inching toward modernizing and opening up their societies. This will be a long process but it is already draining some of the rage that undergirded Islamic extremism. This doesn't mean that the Middle East will disappear off the map. Far from it. Terrorism remains a threat, and we will all continue to be fascinated by upheavals in Lebanon, events in Iran and reforms in Egypt. But ultimately these trends are unlikely to shape the world's future. The countries of the Middle East have been losers in the age of globalization, out of step in an age of free markets, free trade and democratic politics. The world's future -- the big picture -- is more likely to be shaped by the winners of this era. And if the United States thought it was difficult to deal with the losers, the winners present an even thornier set of challenges. This is the implication of the New York Times columnist Thomas L. Friedman's excellent new book, ''The World Is Flat: A Brief History of the Twenty-First Century.'' The metaphor of a flat world, used by Friedman to describe the next phase of globalization, is ingenious. It came to him after hearing an Indian software executive explain how the world's economic playing field was being leveled. 
For a variety of reasons, what economists call ''barriers to entry'' are being destroyed; today an individual or company anywhere can collaborate or compete globally. Bill Gates explains the meaning of this transformation best. Thirty years ago, he tells Friedman, if you had to choose between being born a genius in Mumbai or Shanghai and an average person in Poughkeepsie, you would have chosen Poughkeepsie because your chances of living a prosperous and fulfilled life were much greater there. ''Now,'' Gates says, ''I would rather be a genius born in China than an average guy born in Poughkeepsie.'' The book is done in Friedman's trademark style. You travel with him, meet his wife and kids, learn about his friends and sit in on his interviews. Some find this irritating. I think it works in making complicated ideas accessible. Another Indian entrepreneur, Jerry Rao, explained to Friedman why his accounting firm in Bangalore was able to prepare tax returns for Americans. (In 2005, an estimated 400,000 American I.R.S. returns were prepared in India.) ''Any activity where we can digitize and decompose the value chain, and move the work around, will get moved around. Some people will say, 'Yes, but you can't serve me a steak.' True, but I can take the reservation for your table sitting anywhere in the world,'' Rao says. He ended the interview by describing his next plan, which is to link up with an Israeli company that can transmit CAT scans via the Internet so that Americans can get a second opinion from an Indian or Israeli doctor, quickly and cheaply. What created the flat world? Friedman stresses technological forces. Paradoxically, the dot-com bubble played a crucial role. 
Telecommunications companies like Global Crossing had hundreds of millions of dollars of cash -- given to them by gullible investors -- and they used it to pursue incredibly ambitious plans to ''wire the world,'' laying fiber-optic cable across the ocean floors, connecting Bangalore, Bangkok and Beijing to the advanced industrial countries. This excess supply of connectivity meant that the costs of phone calls, Internet connections and data transmission declined dramatically -- so dramatically that many of the companies that laid these cables went bankrupt. But the deed was done, the world was wired. Today it costs about as much to connect to Guangdong as it does New Jersey. The next blow in this one-two punch was the dot-com bust. The stock market crash made companies everywhere cut spending. That meant they needed to look for ways to do what they were doing for less money. The solution: outsourcing. General Electric had led the way a decade earlier and by the late 1990's many large American companies were recognizing that Indian engineers could handle most technical jobs they needed done, at a tenth the cost. The preparations for Y2K, the millennium bug, gave a huge impetus to this shift since most Western companies needed armies of cheap software workers to recode their computers. Welcome to Bangalore. A good bit of the book is taken up with a discussion of these technological forces and the way in which business has reacted and adapted to them. Friedman explains the importance of the development of ''work flow platforms,'' software that made it possible for all kinds of computer applications to connect and work together, which is what allowed seamless cooperation by people working anywhere. ''It is the creation of this platform, with these unique attributes, that is the truly important sustainable breakthrough that has made what you call the flattening of the world possible,'' Microsoft's chief technology officer, Craig J. Mundie, told Friedman. 
Friedman has a flair for business reporting and finds amusing stories about Wal-Mart, UPS, Dell and JetBlue, among others, that relate to his basic theme. Did you know that when you order a burger at the drive-through McDonald's on Interstate 55 near Cape Girardeau, Mo., the person taking your order is at a call center 900 miles away in Colorado Springs? (He or she then zaps it back to that McDonald's and the order is ready a few minutes later as you drive around to the pickup window.) Or that when you call JetBlue for a reservation, you're talking to a housewife in Utah, who does the job part time? Or that when you ship your Toshiba laptop for repairs via UPS, it's actually UPS's guys in the ''funny brown shorts'' who do the fixing? China and India loom large in Friedman's story because they are the two big countries benefiting most from the flat world. To take just one example, Wal-Mart alone last year imported $18 billion worth of goods from its 5,000 Chinese suppliers. (Friedman doesn't do the math, but this would mean that of Wal-Mart's 6,000 suppliers, 80 percent are in one country -- China.) The Indian case is less staggering and still mostly in services, though the trend is dramatically upward. But Friedman understands that China and India represent not just threats to the developed world, but also great opportunities. After all, the changes he is describing have the net effect of adding hundreds of millions of people -- consumers -- to the world economy. That is an unparalleled opportunity for every company and individual in the world. Friedman quotes a Morgan Stanley study estimating that since the mid-1990's cheap imports from China have saved American consumers over $600 billion and probably saved American companies even more than that since they use Chinese-sourced parts in their production. And this is not all about cheap labor. 
Between 1995 and 2002, China's private sector increased productivity at 17 percent annually -- a truly breathtaking pace. Friedman describes his honest reaction to this new world while he's at one of India's great outsourcing companies, Infosys. He was standing, he says, ''at the gate observing this river of educated young people flowing in and out. . . . They all looked as if they had scored 1600 on their SAT's. . . . My mind just kept telling me, 'Ricardo is right, Ricardo is right.' . . . These Indian techies were doing what was their comparative advantage and then turning around and using their income to buy all the products from America that are our comparative advantage. . . . Both our countries would benefit. . . . But my eye kept . . . telling me something else: 'Oh, my God, there are just so many of them, and they all look so serious, so eager for work. And they just keep coming, wave after wave. How in the world can it possibly be good for my daughters and millions of other young Americans that these Indians can do the same jobs as they can for a fraction of the wages?' '' He ends up, wisely, understanding that there's no way to stop the wave. You cannot switch off these forces except at great cost to your own economic well-being. Over the last century, those countries that tried to preserve their systems, jobs, culture or traditions by keeping the rest of the world out all stagnated. Those that opened themselves up to the world prospered. But that doesn't mean you can't do anything to prepare for this new competition and new world. Friedman spends a good chunk of the book outlining ways that America and Americans can place themselves in a position to do better. People in advanced countries have to find ways to move up the value chain, to have special skills that create superior products for which they can charge extra. The UPS story is a classic example of this. 
Delivering goods doesn't have high margins, but repairing computers (and in effect managing a supply chain) does. In one of Friedman's classic anecdote-as-explanation shticks, he recounts that one of his best friends is an illustrator. The friend saw his business beginning to dry up as computers made routine illustrations easy to do, and he moved on to something new. He became an illustration consultant, helping clients conceive of what they want rather than simply executing a drawing. Friedman explains this in Friedman metaphors: the friend's work began as a chocolate sauce, was turned into a vanilla commodity, through upgraded skills became a special chocolate sauce again, and then had a cherry put on top. All clear? Of course it won't be as easy as that, as Friedman knows. He points to the dramatic erosion of America's science and technology base, which has been masked in recent decades by another aspect of globalization. America now imports foreigners to do the scientific work that its citizens no longer want to do or even know how to do. Nearly one in five scientists and engineers in the United States is an immigrant, and 51 percent of doctorates in engineering go to foreigners. America's soaring health care costs are increasingly a burden in a global race, particularly since American industry is especially disadvantaged on this issue. An American carmaker pays about $6,000 per worker for health care. If it moves its factory up to Canada, where the government runs and pays for medical coverage, the company pays only $800. Most of Friedman's solutions to these kinds of problems are intelligent, neoliberal ways of using government in a market-friendly way to further the country's ability to compete in a flat world. There are difficulties with the book. Once Friedman gets through explicating his main point, he throws in too many extras -- perhaps trying to make that chocolate sundae -- making the book seem slightly padded. 
The process of flattening that he is describing is in its infancy. India is still a poor third-world country, but if you read this book you would assume it is on the verge of becoming a global superstar. (Though as an Indian-American, I read Friedman and whisper the old Jewish saying, ''From your lips to God's ears.'') And while this book is not as powerful as Friedman's earlier ones -- it is, as the publisher notes, an ''update'' of [1]''The Lexus and the Olive Tree'' -- its fundamental insight is true and deeply important. In explaining this insight and this new world, Friedman can sometimes sound like a technological determinist. And while he does acknowledge political factors, they get little space in the book, which gives it a lopsided feel. I would argue that one of the primary forces driving the flat world is actually the shifting attitudes and policies of governments around the world. From Brazil to South Africa to India, governments are becoming more market-friendly, accepting that the best way to cure poverty is to aim for high-growth policies. This change, more than any other, has unleashed the energy of the private sector. After all, India had hundreds of thousands of trained engineers in the 1970's, but they didn't produce growth. In the United States and Europe, deregulation policies spurred the competition that led to radical innovation. There is a chicken-and-egg problem, to be sure. Did government policies create the technological boom or vice versa? At least one can say that each furthered the other. The largest political factor is, of course, the structure of global politics. The flat economic world has been created by an extremely unflat political world. The United States dominates the globe like no country since ancient Rome. It has been at the forefront, pushing for open markets, open trade and open politics. But the consequence of these policies will be to create a more nearly equal world, economically and politically. 
If China grows economically, at some point it will also gain political ambitions. If Brazil continues to surge, it will want to have a larger voice on the international stage. If India gains economic muscle, history suggests that it will also want the security of a stronger military. Friedman tells us that the economic relations between states will be a powerful deterrent to war, which is true if nations act sensibly. But as we have seen over the last three years, pride, honor and rage play a large part in global politics. The ultimate challenge for America -- and for Americans -- is whether we are prepared for this flat world, economic and political. While hierarchies are being eroded and playing fields leveled as other countries and people rise in importance and ambition, are we conducting ourselves in a way that will succeed in this new atmosphere? Or will it turn out that, having globalized the world, the United States had forgotten to globalize itself? Fareed Zakaria, the editor of Newsweek International and author of ''The Future of Freedom,'' is the host of a new current affairs program on public television, Foreign Exchange. References 1. http://www.nytimes.com/books/99/04/25/reviews/990425.25joffet.html -------------- First Chapter: 'The World Is Flat' http://www.nytimes.com/2005/05/01/books/chapters/0501-1st-friedman.html By THOMAS L. FRIEDMAN No one ever gave me directions like this on a golf course before: "Aim at either Microsoft or IBM." I was standing on the first tee at the KGA Golf Club in downtown Bangalore, in southern India, when my playing partner pointed at two shiny glass-and-steel buildings off in the distance, just behind the first green. The Goldman Sachs building wasn't done yet; otherwise he could have pointed that out as well and made it a threesome. HP and Texas Instruments had their offices on the back nine, along the tenth hole. That wasn't all. 
The tee markers were from Epson, the printer company, and one of our caddies was wearing a hat from 3M. Outside, some of the traffic signs were also sponsored by Texas Instruments, and the Pizza Hut billboard on the way over showed a steaming pizza, under the headline "Gigabites of Taste!" No, this definitely wasn't Kansas. It didn't even seem like India. Was this the New World, the Old World, or the Next World? I had come to Bangalore, India's Silicon Valley, on my own Columbus-like journey of exploration. Columbus sailed with the Niña, the Pinta, and the Santa María in an effort to discover a shorter, more direct route to India by heading west, across the Atlantic, on what he presumed to be an open sea route to the East Indies-rather than going south and east around Africa, as Portuguese explorers of his day were trying to do. India and the magical Spice Islands of the East were famed at the time for their gold, pearls, gems, and silk-a source of untold riches. Finding this shortcut by sea to India, at a time when the Muslim powers of the day had blocked the overland routes from Europe, was a way for both Columbus and the Spanish monarchy to become wealthy and powerful. When Columbus set sail, he apparently assumed the Earth was round, which was why he was convinced that he could get to India by going west. He miscalculated the distance, though. He thought the Earth was a smaller sphere than it is. He also did not anticipate running into a landmass before he reached the East Indies. Nevertheless, he called the aboriginal peoples he encountered in the new world "Indians." Returning home, though, Columbus was able to tell his patrons, King Ferdinand and Queen Isabella, that although he never did find India, he could confirm that the world was indeed round. I set out for India by going due east, via Frankfurt. I had Lufthansa business class. 
I knew exactly which direction I was going thanks to the GPS map displayed on the screen that popped out of the armrest of my airline seat. I landed safely and on schedule. I too encountered people called Indians. I too was searching for the source of India's riches. Columbus was searching for hardware-precious metals, silk, and spices-the source of wealth in his day. I was searching for software, brainpower, complex algorithms, knowledge workers, call centers, transmission protocols, breakthroughs in optical engineering-the sources of wealth in our day. Columbus was happy to make the Indians he met his slaves, a pool of free manual labor. I just wanted to understand why the Indians I met were taking our work, why they had become such an important pool for the outsourcing of service and information technology work from America and other industrialized countries. Columbus had more than one hundred men on his three ships; I had a small crew from the Discovery Times channel that fit comfortably into two banged-up vans, with Indian drivers who drove barefoot. When I set sail, so to speak, I too assumed that the world was round, but what I encountered in the real India profoundly shook my faith in that notion. Columbus accidentally ran into America but thought he had discovered part of India. I actually found India and thought many of the people I met there were Americans. Some had actually taken American names, and others were doing great imitations of American accents at call centers and American business techniques at software labs. Columbus reported to his king and queen that the world was round, and he went down in history as the man who first made this discovery. I returned home and shared my discovery only with my wife, and only in a whisper. "Honey," I confided, "I think the world is flat." . . . 
From checker at panix.com Mon May 2 16:21:38 2005 From: checker at panix.com (Premise Checker) Date: Mon, 2 May 2005 12:21:38 -0400 (EDT) Subject: [Paleopsych] NYTBR: 'One Nation Under Therapy': They Don't Feel Your Pain Message-ID: 'One Nation Under Therapy': They Don't Feel Your Pain New York Times Book Review, 5.5.1 http://www.nytimes.com/2005/05/01/books/review/01QUARTL.html [First chapter appended.] By ALISSA QUART ONE NATION UNDER THERAPY How the Helping Culture Is Eroding Self-Reliance. By Christina Hoff Sommers and Sally Satel. 310 pp. St. Martin's Press. $23.95. THE alarmist nonfiction book is a staple of publishing. In fact, it is such a staple that it has its own backlash genre, the anti-alarmist alarmist book. Anti-alarmist alarmist books argue the counterintuitive points: that the kids are all right, that everything is getting better not worse, and that we have nothing to fear but therapy itself. Christina Hoff Sommers and Sally Satel's ''One Nation Under Therapy: How the Helping Culture Is Eroding Self-Reliance'' is one of the latest examples, joining a canon that includes ''The Myth of Self-Esteem,'' ''The Progress Paradox'' and ''The Culture of Fear.'' According to Sommers and Satel, our self-pity and self-concern are doing something far worse than simply annoying our friends. Self-absorption, they claim, is destroying America. ''The American Creed that has sustained the nation is now under powerful assault by the apostles of therapism,'' they write. 
''The fateful question is: Will Americans actively defend the traditional creed of stoicism and the ideology of achievement or will they continue to allow the nation to slide into therapeutic self-absorption and moral debility?'' In their fight against this moral debility, Sommers, the author of ''The War Against Boys'' and ''Who Stole Feminism?,'' and Satel, a psychiatrist and the author of ''P.C., M.D.: How Political Correctness Is Corrupting Medicine'' (both are resident scholars at the American Enterprise Institute in Washington), coin a word to describe their enemy: ''therapism,'' defined as the tendency to valorize ''openness, emotional self-absorption and the sharing of feelings.'' Therapism has many tentacles, exemplified by a huge collection of counselors so caricatured here that they resemble Al Franken's touchy-feely ''Saturday Night Live'' creation Stuart Smalley. Therapism is the force behind the ''brain disease'' explanations of drug addiction that the authors say let addicts off the hook. It is also implicit in ''the perils of overthinking.'' (I was concerned to discover my tendency to overthink could be hazardous to my health, although I consoled myself with the knowledge that most of the Upper West Side was overthinking as well.) Chapter titles, like ''The Myth of the Fragile Child'' and ''September 11, 2001: The Mental Health Crisis That Wasn't,'' convey a Grinch-like tone. And the book just gets frostier. Sommers and Satel are most nontherapeutic (read: coldest) when they go after educators for coddling children, in particular with the ''emotionally correct'' treatment of students after 9/11. 
In fact, the authors excoriate the National Education Association for seeing the attack ''mainly in terms of the threat it posed to children's mental health.'' And they reserve a Dickensian harshness for educators' attempts to build their students' self-esteem: ''Those who encourage children to 'feel good about themselves' may be cheating them, unwittingly, out of becoming the kind of conscientious, humane and enlightened people Mill had in mind'' -- referring to the rigorously educated John Stuart Mill, a scholar at 8. Sentences like these make one glad that neither author teaches kindergarten. After their toughen-up-the-children crusade, Sommers and Satel move on to people with cancer. They praise only sufferers who refrain from exploring their emotions, like the columnist Molly Ivins, extolled for her ''spirited refusal to open up'' after she learned she had breast cancer. Then it's survivors of war crimes; only those who refuse to see themselves as traumatized earn the authors' approval. They also present accounts of Ugandans and Cambodians who have suffered atrocities but nevertheless ''functioned well'' despite their post-traumatic stress disorder symptoms. There's a lot of information in here, typically newspaper anecdotes dolled up with quotations from Mill. But information doesn't add up to cogent analysis. It also doesn't add up to a solution for the problem of national self-absorption. The main remedy Sommers and Satel put forward is that people should take responsibility for themselves. The book imagines self-reliance to be an antidote to self-obsession, a bit of a problem since self-absorption and self-reliance are both forms of selfishness. Let's say, for instance, you quit therapy and at the same time stop ruminating and writing memoirs (self-absorption) to become a careerist professional or perhaps a world traveler. You may achieve equanimity but lose contact with others (self-reliance). 
But is one of two self-oriented life strategies really superior to the other? To be sure, not all of the book's arguments are so gratuitously grouchy. And to their credit, Sommers and Satel summon copious examples of the excesses of therapy and its related industries, from the tale of a research professor of psychology who had grief counseling foisted upon him by a funeral home to a school exercise that encourages children to share their fears about playing tag. They are also particularly astute when drawing the line between the idea of the self held by moral philosophers who ''attribute unacceptable conduct to flawed character, weakness of will, failure of conscience or bad faith'' and the therapeutic idea of self, where personal shortcomings are maladies, syndromes and disorders. This distinction is useful. We tend to forget that psychology is just another system of knowledge, like moral philosophy (or reality television). Like any other discipline or genre, it has limits, and it is worth remembering that there are other ways for us to think of what we are and what a self is. Indeed, therapy is so popular, in part because of its intimacy, that the therapeutic way of thinking can easily be exploited and degraded. Occasionally, debunking therapeutic culture is both good and necessary, which is why ''One Nation Under Therapy'' seems refreshing at first, perhaps even up to the task of shrinking therapy-induced panics and diagnostic trends down to size, in the tradition of books like Elaine Showalter's ''Hystories'' and Joan Acocella's ''Creating Hysteria.'' But soon it becomes all too clear that Sommers and Satel are interested not in exploring how psychoanalysis has degenerated into vulgar self-help but in starting an emotional temperance movement. 
They are also waging a dated war against an imagined army of censorious liberals, attacking ''sensitivity and bias committees'' in publishing houses, state governments, test-writing companies and in groups like the American Psychological Association. According to the authors, this Axis of Snivel is responsible for a therapized ''powerful censorship regime.'' This is a notion that seems to spring from the Bush I era, as does one of the book's section headings: ''How Therapism and Multiculturalism Circumvent Morality.'' ''One Nation Under Therapy'' is not just anti-alarmist alarmist nonfiction. It is culture-wars kitsch. Quaint cultural conservatism aside, what would happen if we took the advice of ''One Nation Under Therapy'' to heart? We might get more work done, although we'd think less. We'd show our children tough love and, presumably, foster a new generation of tough-love advocates. But we might also create a society of diminished passions and sensitivities. Even for Sommers and Satel, this might not be a welcome development. After all, once we were all self-reliant and free of anxiety, we would no longer need the reassurance of books like theirs. Alissa Quart is the author of ''Branded: The Buying and Selling of Teenagers.'' ---------------- First Chapter: 'One Nation Under Therapy' http://www.nytimes.com/2005/05/01/books/chapters/0501-1st-sommers.html By CHRISTINA HOFF SOMMERS and SALLY SATEL The Myth of the Fragile Child In 2001, the Girl Scouts of America introduced a "Stress Less Badge" for girls aged eight to eleven. It featured an embroidered hammock suspended from two green trees. According to the Junior Girl Scout Badge Book, girls earn the award by practicing "focused breathing," creating a personal "stress less kit," or keeping a "feelings diary." Burning ocean-scented candles, listening to "Sounds of the Rain Forest," even exchanging foot massages are also ways to garner points. 
Explaining the need for the Stress Less Badge to the New York Times, a psychologist from the Girl Scout Research Institute said that studies show "how stressed girls are today." Earning an antistress badge, however, can itself be stressful. The Times reported that tension increased in Brownie Troop 459 in Sunnyvale, California, when the girls attempted to make "anti-anxiety squeeze balls out of balloons and Play-Doh." According to Lindsay, one of the Brownies, "The Play-Doh was too oily and disintegrated the balloon. It was very stressful." The psychologist who worried about Lindsay and her fellow Girl Scouts is not alone. Anxiety over the mental equanimity of American children is at an all-time high. In May of 2002, the principal of Franklin Elementary School in Santa Monica, California, sent a newsletter to parents informing them that children could no longer play tag during the lunch recess. As she explained, "The running part of this activity is healthy and encouraged; however, in this game, there is a 'victim' or 'It,' which creates a self-esteem issue." School districts in Texas, Maryland, New York, and Virginia "have banned, limited, or discouraged" dodgeball. "Anytime you throw an object at somebody," said an elementary school coach in Cambridge, Massachusetts, "it creates an environment of retaliation and resentment." Coaches who permit children to play dodgeball "should be fired immediately," according to the physical education chairman at Central High School in Naperville, Illinois. In response to this attack on dodgeball, Rick Reilly, the Sports Illustrated columnist, chided parents who want "their Ambers and their Alexanders to grow up in a cozy womb of non-competition." Reilly responds to educators like the Naperville chairman of physical education by saying, "You mean there's weak in the world? There's strong? 
Of course there is, and dodgeball is one of the first opportunities in life to figure out which one you are and how you're going to deal with it." Reilly's words may resonate comfortably with many of his readers, and with most children as well; but progressive educators tend to dismiss his reaction as just another expression of a benighted opposition to the changes needed if education is to become truly caring and sensitive. This movement against stressful games gained momentum after the publication of an article by Neil Williams, professor of physical education at Eastern Connecticut State College, in a journal sponsored by the National Association for Sports and Physical Education, which represents nearly eighteen thousand gym teachers and physical education professors. In the article, Williams consigned games such as Red Rover, relay races, and musical chairs to "the Hall of Shame." Why? Because the games are based on removing the weakest links. Presumably, this undercuts children's emotional development and erodes their self-esteem. In a follow-up article, Williams also pointed to a sinister aspect of Simon Says. "The major problem," he wrote, "is that the teacher is doing his or her best to deceive and entrap students." He added that psychologically this game is the equivalent of teachers demonstrating the perils of electricity to students "by jolting them with an electric current if they touch the wrong button." The new therapeutic sensibility rejects almost all forms of competition in favor of a gentle and nurturing climate of cooperation. Which games, then, are safe and affirming? Some professionals in physical education advocate activities in which children compete only with themselves such as juggling, unicycling, pogo sticking, and even "learning to ... manipulate wheelchairs with ease." In a game like juggling there is no threat of elimination. But experts warn teachers to be judicious in their choice of juggling objects. 
A former member of The President's Council on Youth Fitness and Sports suggests using silken scarves rather than, say, uncooperative tennis balls that lead to frustration and anxiety. "Scarves," he told the Los Angeles Times, "are soft, nonthreatening, and float down slowly." As the head of a middle school physical education program in Van Nuys, California, points out, juggling scarves "lessens performance anxiety and boosts self-esteem." Writer John Leo, like Reilly, satirized the gentle-juggling culture by proposing a stress-free version of musical chairs: Why not make sure each child has a guaranteed seat for musical chairs? With proper seating, the source of tension is removed. Children can just relax, enjoy the music and talk about the positive feelings that come from being included. Leo was kidding. But the authors of a popular 1998 government-financed antibullying curriculum guide called Quit It! were not. One exercise intended for kindergarten through third grade instructs teachers on how to introduce children to a new way to play tag: Before going outside to play, talk about how students feel when playing a game of tag. Do they like to be chased? Do they like to do the chasing? How does it feel to be tagged out? Get their ideas about other ways the game might be played. After students share their fears and apprehensions about tag, teachers may introduce them to a nonthreatening alternative called "Circle of Friends" where "nobody is ever 'out.'" If students become overexcited or angry while playing Circle of Friends, the guide recommends using stress-reducing exercises to "help the transition from active play to focused work." Reading through Quit It!, you have to remind yourself that it is not satire, nor is it intended for emotionally disturbed children. It is intended for normal five- to seven-year-olds in our nation's schools.

Our Sensitive and Vulnerable Youth

But is overprotectiveness really such a bad thing? 
Sooner or later children will face stressful situations, disappointments, and threats to their self-esteem. Why not shield them from the inevitable as long as possible? The answer is that overprotected kids do not flourish. To treat them as combustible bundles of frayed nerves does them no favors. Instead it deprives them of what they need. Children must have independent, competitive rough-and-tumble play. Not only do they enjoy it, it is part of their normal development. Anthony Pellegrini, a professor of early childhood education at the University of Minnesota, defines rough-and-tumble play as behavior that includes "laughing, running, smiling, jumping ... wrestling, play fighting, chasing, and fleeing." Such play, he says, brings children together; it makes them happy and promotes healthy socialization. Children who are adept at rough play also "tend to be liked and to be good social problem solvers." Commenting on the recent moves to ban competitive zero-sum playground games like tag, Pellegrini told us, "It is ridiculous ... even squirrels play chase." The zealous protectiveness is not confined to the playground. In her eye-opening book The Language Police, Diane Ravitch shows how a once-commendable program aimed at making classroom materials less sexist and racist has morphed into a powerful censorship regime. "Sensitivity and bias" committees, residing in publishing houses, state governments, test-writing companies, and in groups like the American Psychological Association, now police textbooks and other classroom materials, scouring them for any reference or assertion that could possibly make some young reader feel upset, insecure, or shortchanged in life. In 1997, President Bill Clinton appointed Ravitch to an honorary education committee charged with developing national achievement tests. 
The Department of Education had awarded a multimillion-dollar contract to Riverside Publishing, a major testing company and a subsidiary of Houghton Mifflin, to compose the exam. Ravitch and her committee were there to provide oversight. As part of the process, the Riverside test developers sent Ravitch and her fellow committee members, mostly veteran teachers, several sample reading selections. The committee reviewed them carefully and selected the ones they considered the most lucid, engaging, and appropriate for fourth-grade test takers. Congress eventually abandoned the idea of national tests. However, Ravitch learned that several of the passages she and her colleagues had selected had not survived the scrutiny of the Riverside censors. For example, two of the selections that got high marks from Ravitch and her colleagues were about peanuts. Readers learned that they were a healthy snack and had first been cultivated by South American Indians and then, after the Spanish conquest, were imported into Europe. The passage explained how peanuts became important in the United States, where they were planted and cultivated by African slaves. It told of George Washington Carver, the black inventor and scientist, who found many new uses for peanuts. The Riverside sensitivity monitors had a field day. First of all, they said, peanuts are not a healthy snack for all children. Some are allergic. According to Ravitch, "The reviewers apparently assumed that a fourth-grade student who was allergic to peanuts might get distracted if he or she encountered a test question that did not acknowledge the dangers of peanuts." The panel was also unhappy that the reading spoke of the Spaniards having "defeated" the South American tribes. Its members did not question the accuracy of the claim, but Ravitch surmises, "They must have concluded that these facts would hurt someone's feelings." 
Perhaps they thought that some child of South American Indian descent who came upon this information would feel slighted, and so suffer a disadvantage in taking the test. Ravitch's group had especially liked a story about a decaying tree stump on the forest floor and how it becomes home to an immense variety of plants, insects, birds, and animals. The passage compared the stump to a bustling apartment complex. Ravitch and the other committee members enjoyed its charm and verve. It also taught children about a fascinating ecology. But the twenty sensitivity panelists at Riverside voted unanimously against it: "Youngsters who have grown up in a housing project may be distracted by similarities to their own living conditions. An emotional response may be triggered." Ravitch presents clear evidence that our schools are in the grip of powerful sensitivity censors who appear to be completely lacking in good judgment and are accountable to no one but themselves. She could find no evidence that sensitivity censorship of school materials helps children. On the contrary, the abridged texts are enervating. "How boring," she says, "for students to be restricted only to stories that flatter their self-esteem or that purge complexity and unpleasant reality from history and current events." The idea that kids can cope with only the blandest of stories is preposterous. Staples like "Little Red Riding Hood," "Jack and the Beanstalk," and "Hansel and Gretel" delight children despite (or because of) their ghoulish aspects. Kids love to hear ghost stories on Halloween and to ride roller coasters, screaming as they hurtle down the inclines. Therapeutic protectiveness is like putting blinders on children before taking them for a walk through a vibrant countryside. Excessive concern over imagined harms can hinder children's natural development. 
Moreover, in seeking to solve nonexistent problems, it distracts teachers from focusing on their true mission: to educate children and to prepare them to be effective adults. Commenting on Ravitch's findings, Jonathan Yardley, columnist and book critic at the Washington Post, wrote, "A child with a rare disease may have to be put in a bubble, but putting the entire American system of elementary and secondary education into one borders on insanity." Many American teachers seem to believe children must be spared even the mildest criticism. Kevin Miller, a professor of psychology at the University of Illinois at Urbana-Champaign, has studied differences between Chinese and American pedagogy. In one of his videotapes, a group of children in a math class in China are learning about place values. The teacher asks a boy to make the number 14 using one bundle of ten sticks along with some single sticks, and the child uses only the bundle. The teacher then asks the class, in a calm, noncensorious tone, "Who can tell me what is wrong with this?" When Miller shows the video to American teachers, they are taken aback. They find it surprising to see an instructor being so openly critical of a student's performance. "Most of the teachers in training we've shown this to express the worry that this could be damaging to children's self-esteem," Miller reports. Even the minority of American student teachers who don't disapprove agree that giving students explicit feedback in public "contravenes what we do in the U.S." Rossella Santagata, a research psychologist at LessonLab in Santa Monica, California, has studied how American and Italian teachers differ in their reactions to students' mistakes. Italian teachers are very direct: they have no qualms about telling students their answer is wrong. In so doing, they violate all of the sensitivity standards that prevail in the United States. 
Santagata has a videotape of a typical exchange between an American math teacher and a student. An eighth grader named Steve is supposed to give the prime factors of the number 34; instead he lists all the factors. It is not easy for the teacher to be affirmative about Steve's answer. But she finds a way: "Okay. Now Steve you're exactly right that those are all factors. Prime factorization means that you only list the numbers that are prime. So can you modify your answer to make it all only prime numbers?" (Emphasis in original.) Santagata told us that when she shows this exchange to audiences of Italian researchers, they find the teacher's strained response ("exactly right") hysterically funny. By contrast, American researchers see nothing unusual or amusing, because, as Santagata says, "Such reactions are normal." Even college students are not exempt from this new solicitude. . . .

From checker at panix.com Mon May 2 16:21:55 2005
From: checker at panix.com (Premise Checker)
Date: Mon, 2 May 2005 12:21:55 -0400 (EDT)
Subject: [Paleopsych] NYTBR: 'Incompleteness': Waiting for Gödel
Message-ID:

NYTBR: 'Incompleteness': Waiting for Gödel
New York Times Book Review, 5.5.1
http://www.nytimes.com/2005/05/01/books/review/01SCHULMA.html
By POLLY SHULMAN

INCOMPLETENESS The Proof and Paradox of Kurt Gödel. By Rebecca Goldstein. Illustrated. 296 pp. Atlas Books/ W. W. Norton & Company. $22.95.

REBECCA GOLDSTEIN, as anyone knows who has read her novels -- particularly ''The Mind-Body Problem'' -- understands that people are thinking beings, and the mind's loves matter at least as much as the heart's. After all, she's not just a novelist, but a philosophy professor. She casts ''Incompleteness,'' her brief life of the logician Kurt Gödel (1906-78), as a touching intellectual love story. 
Though Gödel was married, his wife barely appears here; as Goldstein tells it, his romance was with mathematical Platonism, the idea that the glories of mathematics exist eternally beyond our grasp. Gödel's Platonism inspired him to deeds as daring as any knight's: he proved his famous incompleteness theorem for its sake. His Platonism also set him apart from his intellectual contemporaries. Only Einstein shared it, and could solace Gödel's loneliness, Goldstein argues. A biography with two focuses -- a man and an idea -- ''Incompleteness'' unfolds its surprisingly accessible story with dignity, tenderness and awe. News of Gödel's Platonism, or Einstein's, might surprise readers familiar with popular interpretations of their work. For centuries, science seemed to be tidying the mess of the real world into an eternal order beautiful and pure -- a heavenly file cabinet labeled mathematics. Then, in the early 20th century, Einstein published his relativity theory, Werner Heisenberg his uncertainty principle and Gödel his incompleteness theorem. Many thinkers -- from the logical positivists with whom Gödel drank coffee in the Viennese cafes of the 1920's to existentialists, postmodernists and annoying people at cocktail parties -- have taken those three results as proof that reality is subjective and we can't see beyond our noses. You can hardly blame them. As Goldstein points out, the very names of the theories seem to mock the notion of objective truth. But she makes a persuasive case that Gödel and Einstein understood their work to prove the opposite: there is something greater than our little minds; reality exists, whether or not we can ever touch it. It's appropriate, though sad for Gödel, that his work has been interpreted to have simultaneously opposite meanings. The proofs of his famous theorems rely on just that sort of twisty thinking: statements like the famous Liar's Paradox, ''This statement is false,'' which flip their meanings back and forth. 
In the case of the Liar's Paradox, if the statement is true, then it's false -- but if it's false, then it's true. Like that paradox, an assertion that talks about itself, Gödel's theorems are meta-statements, which speak about themselves. Because Gödel made so much of self-reference and paradox, previous books about his work -- like Douglas Hofstadter's ''Gödel, Escher, Bach'' -- tend to emphasize the playfulness of his ideas. Not Goldstein's. She tells his story in a minor key, following Gödel into the paranoia that overtook him after Einstein's death, growing out of his loneliness and unrelenting rationality. After all, paranoia, like math, makes people dig deeper and deeper to find meaning. Gödel's work addresses the core of mathematics: finding proofs. Proofs are mathematicians' road to truth. To find them, mathematicians from the ancient Greeks on have set up systems consisting of three basic elements: axioms, true statements so intuitively obvious they are self-evident; rules of inference, logical principles indicating how to use axioms to prove new, less obviously true statements; and those new true statements, called theorems. (Many Americans met axioms and proofs for the first and last time in 10th-grade geometry.) A century ago, mathematicians began taking these systems to an extreme. Since mathematical intuition can be as unreliable as other kinds of intuitions -- often things that seem obvious turn out to be just plain wrong -- they tried to eliminate it from their axioms. They built new systems of arbitrary symbols and formal rules for manipulating them. Of course, they chose those particular symbols and rules because of their resemblance to mathematical systems we care about (such as arithmetic). But, by choosing rules and symbols that work whether or not there's any meaning behind them, the mathematicians kept the potential corruption of intuition at bay. 
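A feel for such arbitrary-symbol systems can be had from a toy example: the MIU system from Hofstadter's ''Gödel, Escher, Bach.'' It has a single axiom, the string "MI", and four string-rewriting rules; its "theorems" are whatever strings the rules can mechanically reach, with meaning never entering into it. A minimal Python sketch (an illustration of the idea, not anything from Goldstein's book):

```python
# Toy formal system: Hofstadter's MIU system.
# Axiom "MI"; four rewriting rules generate theorems mechanically,
# with no appeal to meaning -- the point of the formalist program.

def successors(s: str) -> set[str]:
    out = set()
    if s.endswith("I"):            # rule 1: xI -> xIU
        out.add(s + "U")
    if s.startswith("M"):          # rule 2: Mx -> Mxx
        out.add(s + s[1:])
    for i in range(len(s) - 2):    # rule 3: replace any III with U
        if s[i:i + 3] == "III":
            out.add(s[:i] + "U" + s[i + 3:])
    for i in range(len(s) - 1):    # rule 4: delete any UU
        if s[i:i + 2] == "UU":
            out.add(s[:i] + s[i + 2:])
    return out

# Derive every theorem of length <= 6 by breadth-first search from the axiom.
theorems, frontier = {"MI"}, {"MI"}
while frontier:
    frontier = {t for s in frontier for t in successors(s)
                if len(t) <= 6 and t not in theorems}
    theorems |= frontier
print(sorted(theorems))
```

The famous puzzle about this system is that "MU" is never derivable: every rule preserves the property that the number of I's is not a multiple of three, a fact you can only establish by stepping outside the system and reasoning about it -- the formalists' predicament in miniature.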
The dream of these formalists was that their systems contained a proof for every true statement. Then all mathematics would unfurl from the arbitrary symbols, without any need to appeal to an external mathematical truth accessible only to our often faulty intuition. Gödel proved exactly the opposite, however. He showed that in any formal system complicated enough to describe the numbers and operations of arithmetic, as long as the axioms don't lead to contradictions there will always be some statement that is not provable -- and its negation will not be provable either. He also showed that there's no way to prove from within the system that the system itself won't give rise to contradictions. So, any formal system worth bothering with will either sprout contradictions -- which is bad news, since once you have a contradiction, you can prove anything at all, including 2 + 2 = 5 -- or there will be perfectly ordinary statements that may well be true but can never be proved. You can see why this result rocked mathematics. You can also see why positivists, existentialists and postmodernists had a field day with it, particularly since, once you find one of those unprovable statements, you're free to add it to your system as an axiom, or else to add its complete opposite. Either way, you'll get a new system that works fine. That makes math sound pretty subjective, doesn't it? Well, Gödel didn't think so, and his reason grows beautifully from his spectacular proof itself, which Goldstein describes with lucid discipline. Though the proof relies on a meticulous, fiddly mechanism that took an entire semester to build up when I studied logic as a math major in college, its essence fits magically into a few pages of a book for laypeople. It can even, arguably, fit in a single paragraph of a book review -- though that may be stretching. 
To put it roughly, Gödel proved his theorem by taking the Liar's Paradox, that steed of mystery and contradiction, and harnessing it to his argument. He expressed his theorem and proof in mathematical formulas, of course, but the idea behind it is relatively simple. He built a representative system, and within it he constructed a proposition that essentially said, ''This statement is not provable within this system.'' If he could prove that that was true, he figured, he would have found a statement that was true but not provable within the system, thus proving his theorem. His trick was to consider the statement's exact opposite, which says, ''That first statement -- the one that boasted about not being provable within the system -- is lying; it really is provable.'' Well, is that true? Here's where the Liar's Paradox shows its paces. If the second statement is true, then the first one is provable -- and anything provable must be true. But remember what that statement said in the first place: that it can't be proved. It's true, and it's also false -- impossible! That's a contradiction, which means Gödel's initial assumption -- that the proposition was provable -- is wrong. Therefore, he found a true statement that can't be proved within the formal system. Thus Gödel showed not only that any consistent formal system complicated enough to describe the rules of grade-school arithmetic would have an unprovable statement, but that it would have an unprovable statement that was nonetheless true. Truth, he concluded, exists ''out yonder'' (as Einstein liked to put it), even if we can never put a finger on it. John von Neumann, the father of game theory, took up Gödel's cause in America; in England, Alan Turing provided an alternative proof of Gödel's theorem while inventing theoretical computer science. Whatever Gödel's work had to say about reality, it changed the course of mathematics forever. 
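A programming analogue may help with the self-reference at the heart of the proof. A quine -- a program whose output is its own source text -- is built by the same trick: a template is applied to a quoted copy of itself, much as the Gödel sentence contains a coded description of itself. A minimal Python sketch (illustrative only; Gödel's actual construction codes formulas as numbers, not strings):

```python
# A quine: this program prints exactly its own source code.
# The template s plays the role of the formula; s % s applies the
# template to a quoted copy of itself (%r inserts the quoted copy,
# %% becomes a literal %), yielding a self-describing whole.
s = 's = %r\nprint(s %% s)'
print(s % s)
```

Run it and the two lines it prints are the two lines of the program itself.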
Polly Shulman is a contributing editor for Science magazine and has written about mathematics for many other publications. From checker at panix.com Mon May 2 16:22:12 2005 From: checker at panix.com (Premise Checker) Date: Mon, 2 May 2005 12:22:12 -0400 (EDT) Subject: [Paleopsych] NS: Teenagers special Message-ID: Teenagers special: The original rebels http://www.newscientist.com/article.ns?id=mg18524891.100&print=true * 05 March 2005 * Lynn Dicks * Lynn Dicks is a writer based near Cambridge, UK EAST Africa, one-and-a-half million years ago: a group of women sit with their young children. They are heavy-browed with small skulls - not quite human, but almost. Some are checking their children for ticks, others teaching them how to dig tubers out of the ground. Not far off, a gaggle of teenage girls lounge under a tree, sniggering and pointing at some young men who are staging fights nearby. The older women beckon: "Come and help us dig out this root - it will make a great meal," they seem to say. But the girls reply with grunts and slouch off, sulkily. Could this really have happened? Our immediate ancestors, Homo erectus, may not have had large brains, high culture or even language, but could they have boasted the original teenage rebels? That question has been hotly contested in the past few years, with some anthropologists claiming to have found evidence of an adolescent phase in fossil hominids, and others seeing signs of a more ape-like pattern of development, with no adolescent growth spurt at all. This is not merely an academic debate. Humans today are the only animals on Earth to have a teenage phase, yet we have very little idea why. Establishing exactly when adolescence first evolved and finding out what sorts of changes in our bodies and lifestyles it was associated with could help us understand its purpose. We humans take twice as long to grow up as our nearest relatives, the great apes. 
Instead of developing gradually from birth to adulthood, our growth rate slows dramatically over the first three years of life, and we grow just a few centimetres a year for the next eight years or so. Then suddenly, at puberty, growth accelerates again to as much as 12 centimetres a year. Over the following three years adolescents grow an astonishing 15 per cent in both height and width. Though the teenage years are most commonly defined by raging hormones, the development of secondary sexual characteristics and attitude problems, what is unique in humans is this sudden and rapid increase in body size following a long period of very slow growth. No other primate has a skeletal growth spurt like this so late in life. Why do we? Until recently, the dominant explanation was that physical growth is delayed by our need to grow large brains and to learn all the complex behaviour patterns associated with humanity - speaking, social interaction and so on. While such behaviour is still developing, humans cannot easily fend for themselves, so it is best to stay small and look youthful. That way you do not eat too much, and your parents and other members of the social group are motivated to continue looking after you. What's more, studies of mammals show a strong relationship between brain size and the rate of development, with larger-brained animals taking longer to reach adulthood. Humans are at the far end of this spectrum. If this theory is correct, the earliest hominids, Australopithecus, with their ape-sized brains, should have grown up quickly, with no adolescent phase. So should H. erectus, whose brain, though twice the size of that of Australopithecus at around 850 cubic centimetres, was still relatively small. The great leap in brain capacity comes only with the evolution of our own species and Neanderthals, starting almost 200,000 years ago. Brains expanded to around 1350 cm³ in our direct ancestors and 1600 cm³ in Neanderthals. 
So if the development of large brains accounts for the teenage growth spurt, the origin of adolescence should be here. The trouble is, some of the fossil evidence seems to tell a different story. The human fossil record is extremely sparse, and the number of fossilised children minuscule. Nevertheless in the past few years anthropologists have begun to look at what can be learned of the lives of our ancestors from these youngsters. One of the most studied is the famous Turkana boy, an almost complete skeleton of H. erectus from 1.6 million years ago found in Kenya in 1984. The surprise discovery is that there are some indications that he was a young teenager when he died. Accurately assessing how old someone is from their skeleton is a tricky business. Even with a modern human, you can only make a rough estimate based on the developmental stage of teeth and bones and the skeleton's general size. For example, most people gain their first permanent set of molars at age 6 and the second at 12, but the variation is huge. Certain other features of the skull also develop chronologically, although the changes that occur in humans are not necessarily found in other hominids. In the middle teenage years, after the adolescent growth spurt, the long bones of the limbs cease to grow because the areas of cartilage at their ends, where growth has been taking place, turn into rigid bone. This change can easily be seen on an X-ray. You need as many of these developmental markers as possible to get an estimate of age. The Turkana boy did not have his adult canines, which normally erupt before the second set of molars, so his teeth make him 10 or 11 years old. The features of his skeleton put him at 13, but he was as tall as a modern 15-year-old. "By human standards, he was very tall for his dental age," says anthropologist Holly Smith from the University of Michigan at Ann Arbor. 
But you get a much more consistent picture if you look at Turkana boy in the context of chimpanzee patterns of growth and development. Then, his dental age, bone age and height all agree he was 7 or 8 years old. To Smith, this implies that the growth of H. erectus was primitive and the adolescent growth spurt had not yet evolved. Susan Anton of New York University disagrees. She points to research by Margaret Clegg, now at the University of Southampton in the UK, showing that even in modern humans the various age markers often do not match up. Clegg studied a collection of 18th and 19th-century skeletons of known ages from a churchyard in east London. When she tried to age the skeletons blind, she found the disparity between skeletal and dental age was often as great as that of the Turkana boy. One 10-year-old boy, for example, had a dental age of 9, the skeleton of a 6-year-old but was tall enough to be 11. "The Turkana kid still has a rounded skull, and needs a lot of growth to reach the adult shape," Anton adds. Like apes, the face and skull of H. erectus changed shape significantly between youth and adulthood. Anton thinks that H. erectus had already developed modern human patterns of growth, with a late, if not quite so extreme, adolescent spurt. She believes Turkana boy was just about to enter it. If she's right, and small-brained H. erectus went through a teenage phase, that scuppers the orthodox idea linking late growth with development of a large brain. Anthropologist Steven Leigh from the University of Illinois at Urbana-Champaign is among those who are happy to move on. He believes the idea of adolescence as catch-up growth is naive; it does not explain why the growth rate increases so dramatically. 
He points out that many primates have growth spurts in particular body regions that are associated with reaching maturity, and this makes sense because by timing the short but crucial spells of maturation to coincide with the seasons when food is plentiful, they minimise the risk of being without adequate food supplies while growing. What makes humans unique is that the whole skeleton is involved. For Leigh, this is the key. Coordinated widespread growth, he says, is about reaching the right proportions to walk long distances efficiently. "It's an adaptation for bipedalism," he says. According to Leigh's theory, adolescence evolved as an integral part of efficient upright locomotion, as well as to accommodate more complex brains. Fossil evidence suggests that our ancestors took their first steps on two legs as long as six million years ago. If proficient walking was important for survival, perhaps the teenage growth spurt has very ancient origins. Leigh will not be drawn, arguing that there are too few remains of young hominids to draw definite conclusions. While many anthropologists will consider Leigh's theory a step too far, he is not the only one with new ideas about the evolution of teenagers. A very different theory has been put forward by Barry Bogin from the University of Michigan-Dearborn. He believes adolescence in our species is precisely timed to improve the success of the first reproductive effort. In girls, notes Bogin, full adult shape and features are achieved several years before they reach full fertility at around the age of 18. "The time between looking fertile and being fertile allows women to practise social, sexual and cultural activities associated with adulthood, with a low risk of having their own children," says Bogin. When they finally do have children, they are better prepared to look after them. "As a result, firstborns of human mothers die much less often than firstborns of any other species." In boys, you see the opposite. 
They start producing viable sperm at 13 or 14 years of age, when they still look like boys. The final increase in muscle size that turns them into men does not happen until 17 or 18. In the interim boys, who feel like men, can practise male rivalries without being a threat to adult men or an attractive option to adult women. When boys do become sexually active, they have practised and are more likely to be successful without getting hurt. Bogin's theory makes totally different predictions to Leigh's. If the timing of adolescence is related to uniquely human cultural practices, our species should be the first and only one to have a teenage phase. "H. erectus definitely did not have an adolescence," he asserts. Such strong and opposing views make it all the more necessary to scour the fossil record for clues. One approach, which has produced a surprising result, relies on the minute analysis of tooth growth. Every nine days or so the growing teeth of both apes and humans acquire ridges on their enamel surface. These perikymata are like rings in a tree trunk: the number of them tells you how long the crown of a tooth took to form. Across mammals, the speed of tooth development is closely related to how fast the brain grows, the age you mature and the age you die. Teeth are good indicators of life history because their growth is less related to the environment and nutrition than is the growth of the skeleton. Slower tooth growth is an indication that the whole of life history was slowing down, including age at maturity. Back in the 1980s Christopher Dean, an anatomist at University College London, was the first to measure tooth growth in fossils using perikymata. He found that australopithecines dating from between 3 and 4 million years ago had tooth crowns that formed quickly. Like apes, their first molars erupted at 4 years old and the full set of teeth were in place by 12. Over the years, Dean's team has collected enough teeth to show that H. 
erectus also had faster tooth growth than modern man, but not so fast as earlier hominids. "Things had moved on a bit," he says. "They had their full set of teeth by about 15." Modern humans reach this stage by about age 20. The change in H. erectus seems to imply that the growth pattern of modern humans was beginning to develop, with an extended childhood and possibly an adolescent growth spurt. Dean cautions, though, that the link between dental and skeletal development in ancestral hominids remains uncertain. These findings could equally support Leigh's or Bogin's theories. A more decisive piece of evidence came last year, when researchers in France and Spain published their findings from an analysis of Neanderthal teeth. A previous study of a remarkably well-preserved skeleton of a Neanderthal youth, known as Le Moustier 1, from south-west France had suggested that, with a dental age of 15 and the frame of an 11-year-old, the kid was about to undergo an adolescent growth spurt. But the analysis of his perikymata reveals quite a different picture. Rather than continuing the trend towards slower development seen in H. erectus, Neanderthals had returned to much faster tooth growth (Nature, vol 428, p 936) and hence, possibly, a shorter childhood. Does this mean they didn't have an adolescence? Lead researcher Fernando Ramirez-Rozzi, of the French National Centre for Scientific Research (CNRS) in Paris, thinks Neanderthals died young - about 25 years old - primarily because of the cold, harsh conditions they had to endure in glacial Europe. Under pressure from this high mortality, they evolved to grow up quicker than their immediate ancestors. "They probably reached maturity at about 15," he says, "but it could have been even younger." They would have matured too fast to accommodate an adolescent burst of growth. 
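The ridge-counting arithmetic behind all of these estimates is simple enough to sketch. The roughly nine-day periodicity of perikymata is from the article; the ridge count and the function below are hypothetical illustrations, not data from any of the studies discussed.

```python
# Hypothetical sketch of the perikymata arithmetic described above:
# ridges form on the enamel surface roughly every nine days, so counting
# them gives an estimate of how long a tooth crown took to form.

PERIKYMATA_PERIOD_DAYS = 9  # approximate periodicity, from the article

def crown_formation_years(perikymata_count):
    """Estimate crown formation time in years from a ridge count."""
    return perikymata_count * PERIKYMATA_PERIOD_DAYS / 365.25

# An illustrative crown with 120 ridges would have taken about three
# years to form; a faster-growing crown shows fewer ridges.
print(round(crown_formation_years(120), 2))  # 2.96
```

Slower overall development, as in modern humans, shows up as more perikymata per crown and correspondingly later eruption ages.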
He points to research showing that populations of Atlantic cod have genetically changed to mature more quickly under the intense fishing pressure of the 1980s. Others contest Ramirez-Rozzi's position. "You can't assume, just because Neanderthals' teeth grew faster, that their entire body developed faster," says Jennifer Thompson of the University of Nevada, Las Vegas, one of the researchers involved in the Le Moustier 1 study. Controversy rages, but these latest findings at least highlight one aspect of adolescence that most scientists can agree on. Whatever the immediate purpose of the late growth spurt, it was made possible by an increase in life expectancy. And that being so, one way to work out when the first teenagers originated is to look at the lifespan of a species. This is exactly what Rachel Caspari of the University of Michigan at Ann Arbor has been doing. Her most recent study, published in July 2004, shows an astonishing increase in longevity that separates modern Homo sapiens from all other hominids, including Neanderthals (Proceedings of the National Academy of Sciences, vol 101, p 10895). She categorised adult fossils as old or young by assessing whether they had as much wear on their last molar, or wisdom tooth, as on other molars. "In modern humans we see a massive increase in the number of people surviving to be grandparents," she says. The watershed comes as recently as 30,000 years ago. On this evidence, Neanderthals and H. erectus probably had to reach adulthood quickly, without delaying for an adolescent growth spurt. So it looks as though Bogin is correct: we are the original teenagers. Whether he is right about the purpose of adolescence is another matter. He admits we will never know for sure. "Fossils will never give us growth curves," he says, "and we should not expect our ancestors to grow like we do." ----------- 
http://www.newscientist.com/popuparticle.ns?id=in61 Instant Expert: Teenagers The teenager is a [1]uniquely human phenomenon. Adolescents are known to be moody, insecure, argumentative, [2]angst-ridden, impulsive, [3]impressionable, reckless and rebellious. Teenagers are also characterised by [4]odd sleeping patterns, awkward [5]growth spurts, [6]bullying, [7]acne and [8]slobbish behaviour. So what could be the possible benefit of the teenage phase? Most other animals - apes and human ancestors included - skip that stage altogether, developing rapidly from infancy to full adulthood. Humans, in contrast, have a very puzzling four-year gap between sexual maturity and prime reproductive age. Anthropologists disagree on when the [9]teenage phase first evolved, but pinpointing that date could help define its purpose. There are a variety of current explanations for the existence of teenagers. Some believe that we need longer for our [10]large brains to develop. Other explanations suggest that a teenage phase allows kids to learn about [11]complex social behaviour and [12]other difficult skills, or that it is even required to develop coordinated bipedal bodies adapted to [13]travelling long distances. Raging hormones Scientists once thought that the brain's internal structure was fixed at the end of childhood, and teenage behaviour was blamed on raging hormones and a lack of experience. Then researchers discovered that the brain [14]undergoes significant changes during adolescence. According to many recent studies, teen brains really are unique. Though many brain areas mature during childhood, [15]others mature later - such as the frontal and parietal lobes, responsible for planning and self-control. Other studies have shown that teens [16]fail to see the consequences of their actions, and that [17]sudden increases in nerve connectivity in teen brains may make it difficult for teenagers to read social situations and other people's emotions. 
Risky behaviour One study in 2004 showed that teens have less brain activity in areas responsible for [18]motivation and risk assessment, perhaps explaining why they are more likely to take part in [19]risky activities such as [20]abusing drugs and alcohol, develop a [21]hard-to-kick smoking habit or indulge in [22]under-age sex. Teenage pregnancies and rising rates of sexually transmitted diseases among teens are big problems - especially because today's teen generation is the [23]biggest the world has seen: a 2003 UN report revealed that 1 in 5 people were between 10 and 19, a total of 1.2 billion people. But not everyone agrees on the best way to tackle the problem. Some believe that comprehensive [24]sex education is the key, while others argue for [25]abstinence-only education courses. John Pickrell, 3 March 2004 References 1. http://www.newscientist.com/channel/being-human/teenagers/dn1654 2. http://www.newscientist.com/channel/being-human/teenagers/dn2925 3. http://www.newscientist.com/channel/being-human/teenagers/dn3812 4. http://www.newscientist.com/channel/being-human/teenagers/mg18524811.700 5. http://www.newscientist.com/channel/being-human/teenagers/mg13818715.600 6. http://www.newscientist.com/channel/being-human/teenagers/mg18524891.100 7. http://www.newscientist.com/channel/being-human/teenagers/mg17623742.500 8. http://www.newscientist.com/channel/being-human/teenagers/mg14219223.600 9. http://www.newscientist.com/channel/being-human/teenagers/mg18524891.100 10. http://www.newscientist.com/channel/being-human/teenagers/mg13618505.000 11. http://www.newscientist.com/channel/being-human/teenagers/mg13718634.500 12. http://www.newscientist.com/channel/being-human/teenagers/mg17623725.300 13. http://www.newscientist.com/channel/being-human/teenagers/dn6681 14. http://www.newscientist.com/channel/being-human/teenagers/mg17623650.200 15. http://www.newscientist.com/channel/being-human/teenagers/mg16522224.200 16. 
http://www.newscientist.com/channel/being-human/teenagers/dn6738 17. http://www.newscientist.com/channel/being-human/teenagers/dn2925 18. http://www.newscientist.com/channel/being-human/teenagers/dn4718 19. http://www.newscientist.com/channel/being-human/teenagers/dn4460 20. http://www.newscientist.com/channel/being-human/teenagers/mg13718594.000 21. http://www.newscientist.com/channel/being-human/teenagers/dn4163 22. http://www.newscientist.com/channel/being-human/teenagers/dn6957 23. http://www.newscientist.com/channel/being-human/teenagers/dn4253 24. http://www.newscientist.com/channel/being-human/teenagers/mg18324580.800 25. http://www.newscientist.com/channel/being-human/teenagers/dn6957 -------------- Adolescence unique to modern humans http://www.newscientist.com/article.ns?id=dn1654&print=true * 12:25 06 December 2001 * Claire Ainsworth The uniquely human habit of taking 18 years or so to mature is a recent development in our evolutionary history. Growth patterns of fossil teeth have shown that a prolonged growing-up period evolved long after our ancestors started walking upright and making tools. Our great ape relatives, the chimpanzees and gorillas, take about 11 years to reach adulthood. Scientists speculate that delaying this process allows children to absorb our complex languages, culture and family relationships. What's more, we need extra time for our large brains to grow - they are half as big again as those of the earliest humans, Homo erectus, who appeared some 2 million years ago. Christopher Dean of University College London and his team studied teeth from H. erectus, our Australopithecus human-like ancestors such as the famous "Lucy", and Proconsul nyanzae, an ape ancestor. The rate of tooth development is tightly linked to how long it takes to become fully grown. Teeth rings Teeth grow by adding on enamel in small increments, leaving striations rather like shell ridges or tree rings. 
By studying these rings, the team could work out how fast the teeth grew. They found that H. erectus's teeth grew at almost the same rate as those of both modern and fossil apes and Australopithecus - suggesting a shorter growing-up period. This was surprising, as H. erectus walked upright, was about the same size as us and made simple tools - all traits associated with being human, says Dean. But it fits with the fact that H. erectus's brain was much smaller. By comparing the growth rate of the back and front teeth, the team estimated that H. erectus children produced their first permanent molars at around 4.5 years, and their second at 7.5 years. This compares with 6 and 12 years in modern humans and 3 and 5 years for modern apes, indicating that H. erectus was starting down the road of modern dental development. Journal reference: Nature (vol 414, p 628) Related Articles * [13]Old bones may be earliest human ancestor * 11 July 2001 * [14]A 3.5 million year-old skull unearthed in Kenya may force a re-examination of the evolution of modern humans * 21 March 2001 * [15]The most ancient human-like remains are unearthed in Kenya * 5 December 2000 Weblinks * [16]Human Origins * [17]Evolutionary Anatomy Unit, UCL * [18]Nature References 13. http://www.newscientist.com/article.ns?id=dn995 14. http://www.newscientist.com/article.ns?id=dn542 15. http://www.newscientist.com/article.ns?id=dn240 16. http://www.mnh.si.edu/anthro/humanorigins/ 17. http://evolution.anat.ucl.ac.uk/ 18. http://www.nature.com/ ------------------ Teen angst rooted in busy brain http://www.newscientist.com/article.ns?id=dn2925&print=true * 19:00 16 October 2002 * Duncan Graham-Rowe Scientists believe they have found a cause of adolescent angst. Nerve activity in the teenaged brain is so intense that teenagers find it hard to process basic information, researchers say, rendering them emotionally and socially inept. 
Robert McGivern and his team of neuroscientists at San Diego State University, US, found that as children enter puberty, their ability to quickly recognise other people's emotions plummets. What is more, this ability does not return to normal until they are around 18 years old. McGivern reckons this goes some way towards explaining why teenagers tend to find life so unfair, because they cannot read social situations as efficiently as others. Previous studies have shown that puberty is marked by sudden increases in the connectivity of nerves in parts of the brain. In particular, there is a lot of nerve activity in the prefrontal cortex. "This plays an important role in the assessment of social relationships, as well as planning and control of our social behaviour," says McGivern. Western turmoil He and his team devised a study specifically to see whether the prefrontal cortex's ability to function altered with age. Nearly 300 people aged between 10 and 22 were shown images containing faces or words, or a combination of the two. The researchers asked them to describe the emotion expressed, such as angry, happy, sad or neutral. The team found the speed at which people could identify emotions dropped by up to 20 per cent at the age of 11. Reaction time gradually improved for each subsequent year, but only returned to normal at 18. During adolescence, social interactions become the dominant influence on our behaviour, says McGivern. But at just the time teenagers are being exposed to a greater variety of social situations, their brains are going through a temporary "remodelling", he says. As a result, they can find emotional situations more confusing, leading to the petulant, huffy behaviour for which adolescents are notorious. But this may only be true for Western cultures. Adolescents often play a less significant role in these societies, and many have priorities very different from their parents', leading to antagonism between them. 
This creates more opportunity for confusion. "One would expect to observe a great deal more emotional turmoil in such kids," he says. Journal reference: Brain and Cognition (vol 50, p 173) Related Articles * [12]Brain expression response linked to personality * 20 June 2002 * [13]Angry outbursts linked to brain dysfunction * 27 May 2002 * [14]Physical changes may be responsible for 'feeling' emotions * 19 September 2000 Weblinks * [15]Psychology, San Diego State University * [16]Early Experience and Brain Development research * [17]Brain and Cognition References 12. http://www.newscientist.com/article.ns?id=dn2439 13. http://www.newscientist.com/article.ns?id=dn2331 14. http://www.newscientist.com/article.ns?id=dn3 15. http://www.psychology.sdsu.edu/ 16. http://www.macbrain.org/ 17. http://www.academicpress.com/b&c -------------- Movie smoking encourages kids to light up http://www.newscientist.com/article.ns?id=dn3812&print=true * 13:14 10 June 2003 * Shaoni Bhattacharya Watching movie stars light up on screen is the biggest single factor in influencing teenagers to smoke, suggests a new US study. Adolescents who had never smoked were almost three times more likely to then take up the habit if they had watched films packed with smoking scenes, compared to their peers who had seen films with the least amount of on-screen smoking. "There was a tremendous impact," says research leader Madeline Dalton, at Dartmouth Medical School in Hanover, New Hampshire. "Movies were the strongest predictor of who would go on to smoke - stronger than peers smoking, family smoking, or the personality of the child." "We know from past studies it's very rare for smoking to be portrayed in a negative light. Smokers [in movies] tend to be tough guys or sexy, rebellious women - which appeal to adolescents," she told New Scientist. 
Dalton's colleague Michael Beach adds: "Our data indicate that 52 per cent of smoking initiation among adolescents in this study can be attributed to movie smoking exposure." "The effect is stronger than the effect of traditional cigarette advertising and promotion, which accounts for 'only' 34 per cent of new experimentation," notes Stanton Glantz, at the Center for Tobacco Control Research and Education, in an editorial accompanying the study published online in The Lancet. Smoke screen The study began by recruiting over 2600 US schoolchildren aged 10 to 14 who had never smoked. Each child was then asked if they had watched any of 50 movies randomly selected from 601 box office hits released between 1988 and 1999. The number of occurrences of smoking in each film was recorded by trained coders. When followed up one to two years later, 10 per cent of the children had tried smoking. Those in the top quarter of exposure to movie smoking were 2.7 times more likely to have tried a cigarette than those in the lowest quarter of exposure. This effect was independent of other factors that might influence the child's smoking behaviour, such as friends or family smoking. "It's more evidence that movies have a strong impact on adolescents," says Dalton. "Previous studies have suggested that smoking in movies influences adolescent smoking behaviour, but this is the first study to show that viewing smoking in movies predicts who will start smoking in the future." Dalton, an expert in cancer risk behaviour in children, says a previous study by the team showed that children were more likely to smoke if their favourite actor smoked. Movies which depict smoking should be given an adult rating or "R rating" in the US, suggests Glantz, which would mean that children under 17 could not see the film without a parent. "An R rating for smoking in movies would prevent about 330 adolescents [in the US] from starting to smoke and ultimately extend 170 lives every day," he writes. 
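For readers wondering how an exposure can "account for" 52 per cent of smoking initiation, the standard tool is the population attributable fraction (Levin's formula). The relative risk of 2.7 is from the study; the exposure prevalence used below is purely illustrative, since Beach's actual estimate is derived from the full dose-response data.

```python
# Levin's population attributable fraction: the share of cases that
# would not occur if the exposure were removed, given the prevalence
# of exposure and the relative risk among the exposed.

def attributable_fraction(prevalence, relative_risk):
    """Fraction of all cases attributable to the exposure."""
    excess = prevalence * (relative_risk - 1)
    return excess / (1 + excess)

# Illustrative numbers only: with roughly 64 per cent of children in the
# higher-exposure groups and a relative risk of 2.7, about half of all
# smoking initiation would be attributable to movie smoking exposure.
print(round(attributable_fraction(0.64, 2.7), 2))  # 0.52
```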
Journal reference: The Lancet (vol 361, no 9373, early online publication) Related Articles * [12]Controversy over passive smoking danger * 16 May 2003 * [13]Violent song lyrics increase aggression * 4 May 2003 * [14]Public smoking ban slashes heart attacks * 1 April 2003 Weblinks * [15]Dartmouth Medical School * [16]Center for Tobacco Control Research and Education * [17]Action on Smoking and Health, UK * [18]The Lancet References 12. http://www.newscientist.com/article.ns?id=dn3737 13. http://www.newscientist.com/article.ns?id=dn3695 14. http://www.newscientist.com/article.ns?id=dn3557 15. http://www.dartmouth.edu/dms/index.shtml 16. http://repositories.cdlib.org/ctcre/ 17. http://www.ash.org.uk/ 18. http://www.thelancet.com/home --------------- Bedtimes could pinpoint the end of adolescence http://www.newscientist.com/article.ns?id=mg18524811.700&print=true * 08 January 2005 * Andy Coghlan AT WHAT point does adolescence end? Perhaps at the point when we start to go to bed progressively earlier rather than later and later. The end of puberty, or sexual maturation, is well defined. It is the point when bones stop growing, at around age 16 for girls and 17.5 for boys. But for adolescence, the transition from childhood to adulthood, there is no clear endpoint. "I don't know of any markers for it," says Till Roenneberg of the Centre for Chronobiology at the University of Munich in Germany. "Everyone talks about it but no one knows when adolescence ends. It is seen as a mixed bag of physical, psychological and sociological factors." His suggestion is based on a study of the sleep habits of 25,000 individuals of all ages in Switzerland and Germany. The study looked at when people go to sleep during vacations, when they are free to sleep any time. It reveals a distinct peak of night-owlishness at around age 20. Women reach this peak at 19.5 years old on average, and men at 20.9 years. 
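Roenneberg's proposed marker reduces to a simple peak-finding exercise over average mid-sleep times by age. The mid-sleep values below are invented for illustration; the study's reported peak was about 19.5 years for women and 20.9 for men.

```python
# Illustrative sketch of Roenneberg's marker: the age at which average
# free-day mid-sleep time (hours after midnight) peaks. These data are
# invented; only the shape of the curve reflects the study's finding.

midsleep_by_age = {16: 4.0, 18: 4.6, 20: 5.1, 22: 4.9, 25: 4.5, 40: 3.8}

def adolescence_end_marker(midsleep):
    """Return the age with the latest average mid-sleep time."""
    return max(midsleep, key=midsleep.get)

print(adolescence_end_marker(midsleep_by_age))  # 20 in this toy data
```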
After that, individuals gradually return to earlier and earlier sleeping patterns, until things go haywire in old age. Roenneberg, whose findings appear in Current Biology (vol 14, p R1038), thinks that the peak in lateness is the first plausible biological marker for the end of adolescence. The study confirms that 20-year-olds sleep any time except in the evening, says Malcolm von Schantz of the Surrey Sleep Research Centre at the University of Surrey, UK. "Don't I know it - they're in my lectures!" The suggestion that this peak in sleep habits marks the end of adolescence is intriguing, he says, but more research will be needed to prove these behavioural changes are a result of physiological changes, rather than lifestyle. "A lot of things happen to you around this age," von Schantz points out. If it is a physiological effect, forcing teenagers to get to school for, say, 8 am, could be a mistake, Roenneberg says. They probably take nothing in for the first two lessons because they are still in biological "sleep time", and end up with a horrendous sleep deficit by the weekend. ------------- Letters: Growth spurts http://www.newscientist.com/article.ns?id=mg13818715.600&print=true * 01 May 1993 * TIM BROMAGE Barry Bogin has written a marvellous explanation about why we go through the adolescent growth spurt ('Why must I be a teenager at all?', 6 March). We can equally marvel at another spurt not often mentioned by biologists and anthropologists, namely the mid-childhood spurt occurring between ages six and eight. It varies in magnitude and shows itself as a mere blip on Bogin's growth rate curve, but it has a very interesting history. First, it is an opportunity for body growth to get its due after so many years of devoting unequal resources to growing a large brain. 
Every parent will attest to the struggle of dressing children and the seeming incoherence of a clothing industry that makes pullovers to fit little bodies that have to pull them over a brain which is nearly 95 per cent of its adult size at 6 years of age. Second, this spurt is related to the adolescent spurt in developed nations. Puberty is not only the time of accelerated growth, it marks the beginning of the end of growth too. In order for adolescents to reach the normal height for their population they must be so far along by the time puberty hits them. Thus for children with relatively short pre-pubertal growth periods, an extra push is needed early on (the mid-childhood spurt) to get them to their right preadolescent height before the pubertal growth spurt. Children of populations with puberty closer to 16 years of age, such as those of some underdeveloped nations, do not experience the mid-childhood spurt because they have more prepubertal time to grow up. Tim Bromage City University of New York -------------- Spotty genes - News http://www.newscientist.com/article.ns?id=mg17623742.500&print=true * 21 December 2002 HERE'S something else grumpy teenagers can blame their parents for - their zits. In a study of identical and non-identical twins, Veronique Bataille of St Thomas' Hospital in London and her team have shown that acne is 80 per cent genetic. Environmental factors, such as eating the wrong foods or wearing greasy make-up, are relatively unimportant. They will report their results in The Journal of Investigative Dermatology. Finding the genes involved could clarify what triggers acne, and possibly lead to cheaper, more effective treatments. Worldwide, prescription drugs for acne cost about $4 billion each year. ------------ Fine young slobs?: Kids who spend hours hunched in front of television or computer screens may look as healthy as their active brothers and sisters. But are they storing up trouble? 
Helen Saul reports http://www.newscientist.com/article.ns?id=mg14219223.600&print=true * 23 April 1994 * HELEN SAUL A teenage computer games addict sits engrossed in front of a screen, hardly moving for hours on end. A parent, worried about strangers and excessive traffic on the roads, insists on driving the children to school. Increasing pressure on children to perform well in academic subjects relegates physical exercise to the bottom of the priority list at school. It all seems like a recipe for weak bodies and illness. But are today's children and adolescents really as sedentary and unfit as popular images would have us believe? And if they are, what should be done to persuade them to be more active? On the face of it, the facts are hardly encouraging. The Broadcasting Audience Research Board's figures for 1993 show that in Britain children between the ages of four and 15 watch an average of between two and three hours of TV each day. The National Curriculum for schools allocates an average of only one hour a week for physical education, and in practice less than 10 per cent of this time is spent exercising. Moreover, as many as one in three children do less than the equivalent of a 10-minute walk each day, according to Neil Armstrong and his research team at the University of Exeter. Yet just as politicians in Britain want to see a return to competitive, team-oriented sport in schools, experts are stressing the need for individually geared exercise regimes that require a minimum of formal training. Researchers, meanwhile, are busy questioning the underlying assumption that today's young couch potatoes are physically weaker than their supposedly more active forebears. For children, it may be a mistake to equate physical inactivity with low physical fitness, says Armstrong. He and his colleagues have spent the past nine years monitoring the links between exercise and fitness in some 700 children, aged between 9 and 16. 
Part of this endeavour has involved measuring how the hearts and lungs of some of these children perform during strenuous exercise. The result is surprising: the hearts and lungs of inactive children perform just as well as those of habitually active children - and just as well as those of children of previous generations. 'There's no scientific evidence to show that children are less fit than they used to be,' says Armstrong. So childhood laziness is no bad thing? Not quite. Physical fitness is clearly influenced by a myriad of factors other than exercise, including genetics and diet. It also means different things to different people: a long-distance runner and a weightlifter are both physically fit, but in different ways and because of different exercise regimes. The way hearts and lungs respond to aerobic activity is certainly one measure of physical fitness. But the strength and stamina of skeletal muscles may be just as important. Exactly how important is unclear, for few long-term studies of the effects of exercise on children have attempted to examine all these factors. What's more, even if childhood laziness does not erode physical fitness immediately, children who fail to form the 'exercise habit' are likely to regret it later in life. Studies of adults show that a sedentary lifestyle is as likely to cause heart disease as high blood pressure, smoking or high cholesterol levels. People who fail to take physical exercise are thought to be twice as likely to contract coronary heart disease. They also run higher-than-average risks of developing breast cancer, diabetes and osteoporosis. In the US alone, physical inactivity is estimated to cause 250 000 deaths a year. Treadmill test Adults who don't exercise also perform badly on tests of heart and lung fitness. 
Put them on a treadmill or cycle ergometer and measure their aerobic fitness by monitoring oxygen uptake, heart rate and carbon dioxide exhaled and you will find that their lungs can't take up as much oxygen as those of their active counterparts. Armstrong and his colleagues wanted to find out if the same was true for children. So they tested 420 children, measuring their oxygen uptake as they exercised on treadmills or cycle ergometers. The tests supported the results of similar tests carried out some fifty years ago in Chicago. The study also shows that, in contrast to adults, active children perform no better in tests of aerobic fitness than those who don't exercise. Studies in other parts of the world show a similar trend, says Armstrong. Steven Blair, director of epidemiology at the Cooper Institute for Aerobics Research in Dallas, believes that one in five children in the US is physically unfit. But he agrees with Armstrong that the limited data available suggest there has been no major change in physical fitness among young Americans over the past few decades. 'It's very popular in the States for people to dash about saying 'It's terrible. It's getting worse. More and more children are getting more and more unfit.' But how do they know that?' The links between exercise and fitness in children have always been uncertain. It is doubtful that studies based on any one measure of fitness can resolve the main questions. This is certainly true of tests based on oxygen uptake. For one thing, oxygen uptake varies from child to child because of genetic differences that influence muscle growth, strength of the heart and so on. It could be that peak oxygen uptake is too crude a measure to pick up a slight deterioration in lung and heart fitness. Moreover, even if the hearts and lungs of children do not weaken with lack of exercise, that does not necessarily mean that inactive children are physically fit in a broader sense. 
Yet there are other signs that exercise may not be as important to the fitness of children as it is to adults. In adults, for instance, there is a clear link between physical exercise and blood levels of high density lipoprotein, or 'good' cholesterol. This substance acts to prevent the clogging up of arteries that is caused by high levels of 'bad' cholesterol, or low-density lipoprotein. Since LDL levels remain fairly constant, the more exercise people take, the higher their HDL levels and the more favourable the ratio between the two. But there is no evidence that the same is true in children. This, however, may be no reason to celebrate. If children can't feel the physical benefits of exercise, won't it prove harder to persuade them of its value? 'There's no way you can convince 15-year-olds that by being more active today, they'll be less likely to get coronary heart disease when they're 50,' says Oded Bar-Or, professor of exercise sciences at McMaster University in Hamilton, Ontario. The problem is compounded by the fact that inactivity often goes hand in hand with eating too many fatty or sugary foods. More than two in five children in Britain have total cholesterol levels (combined LDL and HDL levels) above the American Health Foundation's safety limit. This is calculated from the cholesterol level in adults that is known to increase the risk of heart disease, taking into account the fact that cholesterol levels increase gradually with age. All the signs are that most children are not active enough to form the kind of exercise habit that could protect them from ill health later in life. As part of the Exeter study, for instance, researchers used portable heart monitors to capture the heart rates of some 266 children from 9.00 am until 9.00 pm. The portable devices used a small transmitter on the chest to send the heart rate data to a receiver on a wrist band. 
Previously, the researchers had established that, on average, the heart rate of children walking on a treadmill at 6 kilometres per hour is 140 beats per minute. From this, it was possible to calculate that a third of the boys and half of the girls do not even do the equivalent of a brisk 10-minute walk a day. But not everyone is pessimistic. Drawing on data reported in 43 different epidemiological studies, Blair concludes that children and adults alike can reduce the risk of heart disease later in life by burning off just three kilocalories per kilogram of body weight per day. For a child weighing 40 kilograms, this amounts to 120 kcal a day - equivalent to the energy contained in an average biscuit. In a study of 1800 boys and girls aged between 10 and 18 years, Blair found that most met this standard. Exercise habit That said, most researchers see no harm in encouraging all children to be more active. 'It's all pluses,' says Armstrong. The biggest plus of all is that having developed an exercise habit, a child may be more likely to retain it throughout adulthood. But do active children necessarily become active adults? What little evidence there is suggests the answer is yes. In 1990, researchers in Britain questioned more than 4000 adults on their behaviour, attitudes and beliefs about activity and fitness as part of the Allied Dunbar National Fitness Survey. A quarter of those who said that they were active as teenagers also said that they were active as adults. Only two per cent of those who said that they were inactive at the younger age said that they were active as adults. Ken Fox, who lectures in physical exercise at the University of Exeter, believes that what motivates people to do exercise shifts over time. Overemphasising sports and team games is one reason why adolescents drop out, he says. This is particularly true of girls, whose participation in exercise during adolescence falls off more dramatically than that of boys. 
Groups of girls may not value formalised activities such as competitive sport, he says. 'But aerobics and dance may be more socially acceptable to some groups of girls. And if they value these activities, they're more likely to make the decision to take part.' Bill Kohl, director of the division of childhood and adolescent medicine at the Cooper Institute for Aerobics Research, says: 'Fitness and activity is for all kids, not just for those who are athletically gifted. It's not necessary to run a marathon or be Sebastian Coe to get health benefits from physical activity.' Helen Saul is a freelance writer specialising in health and medicine.

-------------

Teenagers special: Brain storm
http://www.newscientist.com/article.ns?id=mg18524891.200&print=true
* 05 March 2005

Prefrontal cortex

The prefrontal cortex is the home of "executive" functioning, high-level cognitive processes that, among other things, allow us to develop detailed plans, execute them, and block irrelevant actions. This area undergoes a bulking up between the ages of 10 and 12, followed by a dramatic decline in size that continues into the early 20s. This is probably due to a burst of neuronal growth followed by a "pruning" stage in which pathways that are not needed are lost. If the adolescent's brain is still bedding down its executive functions, this might help explain why teenagers can sometimes seem so disorganised and irrational.

Right ventral striatum

This area of the brain is thought to be involved in motivating reward-seeking behaviour. A study last year showed that teenagers had less activity than adults in this part of the brain during a reward-based gambling game. The researchers speculate that teens may be driven to risky but potentially high-reward behaviours such as shoplifting and drug-taking because this area is underactive.

Pineal gland

The pineal gland produces the hormone melatonin, levels of which rise in the evening, signalling to the body that it is time to sleep.
During adolescence melatonin peaks later in the day than in children or adults. This could be why teenagers tend to be so fond of late nights and morning lie-ins.

Corpus callosum

This is a bundle of nerve fibres linking the left and right sides of the brain. The parts thought to be involved in language learning undergo high growth rates before and during puberty, but this growth then slows. This might help explain why the ability to learn new languages declines rapidly after the age of 12.

Cerebellum

This part of the brain continues to grow until late adolescence. It governs posture and movement, helping to maintain balance and ensure that movements are smooth and directed. It influences other regions of the brain responsible for motor activity and may also be involved in language and other cognitive functions.

------------

Teenagers special: Going all the way
http://www.newscientist.com/article.ns?id=mg18524891.300&print=true
* 05 March 2005
* Alison George

LYNSEY TULLIN was 15 when she became pregnant. The only contraception she and her boyfriend had used was wishful thinking: "I didn't think it would happen to me," she says. Tullin, who lives in Oldham in northern England, decided to keep the baby, now aged 3, although as a consequence her father has disowned her. Tullin is not alone. In the UK nearly 3 per cent of females aged 15 to 19 became mothers in 2002, many of them unintentionally. And unplanned pregnancies are not the only consequence of teenage sex - rates of sexually transmitted diseases (STDs) are also rocketing in British adolescents, both male and female. The numerous and complex societal trends behind these statistics have been endlessly debated without any easy solutions emerging. Policy makers tend to focus on the direct approach, targeting young adolescents in the classroom. In many western schools teenagers get sex education classes giving explicit information about sex and contraception.
But recently there has been a resurgence of some old-fashioned advice: just say no. The so-called abstinence movement urges teens to take virginity pledges and cites condoms only to stress their failure rate. It is sweeping the US, and is now being exported to countries such as the UK and Australia. Confusingly, both sides claim their strategy is the one that leads to fewest pregnancies and STD cases. But a close look at the research evidence should give both sides pause for thought. It is a morally charged debate in which each camp holds entrenched views, and opinions seem to be based less on facts than on ideology. "It's a field fraught with subjective views," says Douglas Kirby, a sex education researcher for the public-health consultancy ETR Associates in Scotts Valley, California. For most of history, pregnancy in adolescence has been regarded not as a problem but as something that is normal, so long as it happens within marriage. Today some may still feel there is nothing unnatural about older adolescents in particular becoming parents. But in industrialised countries where extended education and careers for women are becoming the norm, parenthood can be a distinct disadvantage. Teenage mums are more likely to drop out of education, to be unemployed and to have depression. Their children run a bigger risk of being neglected or abused, growing up without a father, failing at school and abusing drugs. The US has by far the highest number of teenage pregnancies and births in the west; 4.3 per cent of females aged between 15 and 19 gave birth there in 2002. This is significantly higher than the rate in the UK (2.8 per cent), which itself has the highest rate in western Europe (see Chart). Another alarming statistic is the number of teenagers catching STDs. In the UK the incidences of chlamydia, syphilis and gonorrhoea in under-20s have all more than doubled since 1995. 
The biggest rise has been in chlamydia infections in females under 20; cases have more than tripled, up to 18,674 in 2003. Chlamydia often causes no symptoms for many years but it can lead to infertility in women and painful inflammation of the testicles in men. No surprise, then, that teenage sex and pregnancy have become a political issue. The UK government has set a target to halve the country's teen pregnancy rate by 2010, and the US government has set similar goals. But achieving these targets will not be easy. In an age when adolescence has never been so sexualised, in most western countries people often begin to have sex in their mid to late teens; by the age of 17, between 50 and 60 per cent are no longer virgins. Since the 1960s, UK schools have increasingly accepted that many teenagers will end up having sex and have focused efforts on trying to minimise any ensuing harm. Sex education typically involves describing the mechanics of sex and explaining how various contraceptives work, with particular emphasis on condoms because of the protection they provide from many STDs. The sex education strategy gained further support in the early 1990s when policy makers looked to the Netherlands. There, teenage birth rates have plummeted since the 1970s and are now among the lowest in Europe, with about 0.8 per cent of females aged between 15 and 19 giving birth in 2002. No one knows why for sure, as Dutch culture differs from that of the UK and America in several ways. But it is generally attributed to frank sex education in schools and open attitudes to sex. Dutch teenagers, says Roger Ingham, director of the Centre for Sexual Health Research at the University of Southampton, "have less casual sex and are older when they first have sex compared with the UK". But a new sexual revolution is under way. Spearheaded by the religious right, the so-called abstinence movement is based on the premise that sex outside marriage is morally wrong.
"We're trying to say there's another approach to your sexuality," says Jimmy Hester, co-founder of one of the oldest pro-abstinence campaigns, True Love Waits, based in Nashville, Tennessee. Abstinence-based education got US government backing in 1981, when Congress passed a law to fund sex education that promoted self-restraint. More money was allocated through welfare laws passed in 1996, which provided $50 million a year. A key plank of the abstinence approach is to avoid giving advice on contraception. The logic is that such information would give the message that it's OK to have sex. "The moment we do that, we water down the commitment," says Hester. If contraception is mentioned at all, it is to highlight its failings - often using inaccurate or distorted data. A report for the US House of Representatives published last December found that 11 out of the 13 federally funded abstinence programmes studied contained false or misleading information. Examples of inaccurate statements included: "Pregnancy occurs one out of every seven times that couples use condoms," and: "Condoms fail to prevent HIV 31 per cent of the time." They also use some questionable logic regarding the success rate of abstinence (see "Heads I win, tails you lose"). While some states advocate "abstinence-plus" programmes, providing a level of advice on contraception alongside heavy promotion of chastity, the hard-line "abstinence only" approach is in the ascendant in the US. Around a third of US secondary schools have abstinence-only programmes, and nearly 3 million young people have publicly pledged to remain virgins until they marry. And it is spreading. Last June an American group came to the UK to promote the Silver Ring Thing, a Christian movement that encourages teens to publicly pledge to remain virgins until marriage and to keep their promise with the aid of a $12 ring. And True Love Waits has held virginity rallies in Australia. 
This trend comes amid claims that the UK's more liberal approach not only does not work, but has the opposite effect. "Free pills and condoms boost promiscuity" screamed the headline on the front page of UK newspaper The Times last year (5 April 2004). It was prompted by research by David Paton, an economist at the University of Nottingham, UK, which found that in some areas that had increased access to family planning services, teen pregnancy rates had remained the same and STD rates had actually risen. There are now increasing calls from conservative and religious groups for schools in the UK to consider the abstinence option. A programme called Love for Life is now operating in 60 per cent of schools in Northern Ireland. It could be described as abstinence-plus that is heavy on the abstinence. Its founder, Richard Barr, a GP from Craigavon, County Armagh, says that focusing on contraception ignores the bigger picture of human sexuality. "There's a massive need for a more holistic approach, not just a damage-limitation approach." And the UK mainland is home to a small but growing number of groups, most of them with Christian roots, promoting abstinence-centred education. The word abstinence is less in vogue than across the Atlantic, however, and such groups are more likely to talk in terms of delaying sex until young people are in a committed relationship. But does the abstinence approach work? Do teenagers - a group not renowned for their propensity to do what they are told - take any notice when adults tell them not to have sex? Proponents of abstinence claim research supports their strategy. But the vast majority of studies that have been done in this area have been small, short-term evaluations without control groups. "There have only been three well-designed trials where an 'intervention' group is compared with a control group and participants are tracked over time," says Kirby. 
One of these, published in 1997, looked at a five-session abstinence-only initiative in California. The trial tracked 10,600 teenagers for 17 months (Family Planning Perspectives, vol 29, p 100). The researchers found it had no impact on the sexual behaviour or pregnancy rates of teenagers. The other two studies had similar results. "None of them show that any abstinence-only programmes had any impact on behaviour," says Kirby. Although not a controlled trial, one of the largest studies of the effect of abstinence pledges tracked the sex lives of 12,000 US teenagers aged between 12 and 18 (American Journal of Sociology, vol 106, p 859). A group led by Peter Bearman, a sociologist at Columbia University in New York, investigated whether taking a virginity pledge affected the age when people first had sex. It did, with an average delay of 18 months. The pledgers also got married earlier and had fewer partners overall. But when Bearman went back six years later and looked at the STD rates in the same people, now aged between 18 and 24, he was in for a surprise. In research presented at the National STD conference in Philadelphia last year, he found that though pledgers had had fewer sexual partners than non-pledgers, they were just as likely to have had an STD. And the reason? "Pledgers use condoms less," says Bearman. "It's difficult to simultaneously imagine not intending to have sex and being contraceptively prepared." Here lies the problem that many have with the idea of abstinence-only education. While it may work for those kids who live up to the ideal, those who don't are left without the knowledge to protect themselves when they do have sex. "It's not rocket science," says Bearman. But here's where proponents of the liberal approach can stop feeling smug. Because despite many people's unquestioning assumption that comprehensive sex education is the best way to reduce teenage pregnancy, there is actually little good-quality evidence backing this view. 
One of the problems in carrying out randomised controlled trials in this area is the question of who should be used as the control group. Most schools now have some form of sex education in place, however rudimentary, and it would be unethical to take this away from some children to create the control group. Instead researchers have tended to compare standard sex education with new initiatives specially designed to reduce pregnancy rates. But the results have been unimpressive. A systematic review in 2002 of 26 such studies showed that not one of them improved the use of birth control or reduced the teenage pregnancy rate (British Medical Journal, vol 324, p 1426). But in the past few years, a handful of randomised controlled trials have been published showing that some carefully designed sex education programmes do appear to work. One of the most effective is the Carrera Adolescent Pregnancy Prevention Program, aimed at 13 to 15-year-olds in a poor area of New York (Perspectives on Sexual and Reproductive Health, vol 34, p 244). Abstinence is mentioned during the programme, but most of the emphasis is on contraception. A three-year study showed that the pregnancy rate of teenage girls who took the programme was less than half the rate of those who didn't. Analysis showed this was due to both greater condom use and delayed onset of sex. Why should these programmes be any different? As well as lasting longer, they were, says Kirby, "interactive and personalised, not just abstract facts". The Carrera programme, for example, not only covered sexual behaviour, it tackled the social disadvantages that lead to teenage pregnancy. Along with information on and free access to contraceptives, it involved intensive youth work such as sports, job clubs and homework help. Most UK sex education programmes seem half-hearted in comparison, providing the bare biological facts, perhaps alongside a demonstration of how to put a condom on a cucumber. 
"It's something I feel quite angry about," says Michael Adler, a former STD physician at University College London Hospital. In his job he saw many casualties of unsafe sex. "We're failing young people right at the beginning," he says. Unfortunately policy makers have recently lost a good source of information about what works and what doesn't. The US Centers for Disease Control and Prevention (CDC) in Atlanta, Georgia, commissioned a panel of external experts to carry out a rigorous review of various sex education programmes. The panel identified five strategies that were successful in reducing the rate of teenage pregnancy, all based on comprehensive sex education, and the details were posted on the organisation's website. But in 2002 that information disappeared and the CDC will no longer release it. According to the CDC press office, the review programme is being "re-evaluated". But sceptics fear it has been dumped because its conclusions don't fit with the Bush administration's views. "They were inconsistent with the ideology to which this administration adheres," says Bill Smith of the Sexuality Information and Education Council of the United States, a liberal sex education advocacy group based in New York. What of the study that made the newspaper headlines in the UK last year, showing that contraception provision is linked with higher STD rates? Perhaps it should not really be taken as a damning indictment of the liberal approach. The study looked at National Health Service family planning clinics, not school-based comprehensive sex education. Simply doling out condoms without tackling the wider issues is unlikely to have much impact. Anyway, should the correlation between sex clinics and STD levels really be so surprising? "Has it occurred to [David Paton] that they put more services in areas with high rates?" asks Roger Ingham.
In fact, amid all the scare stories, the average age when a person first has sex now appears to be levelling out at around 17 in the US and 16 in the UK. And although rates of STDs are on the increase in the UK, teenage pregnancy and birth rates are on a downward trend, as they have been in most developed countries for several years. A report from the Alan Guttmacher Institute, a reproductive health research group in New York, concludes this is due to factors such as the rise of careers for women, and the increasing importance of education and training (Family Planning Perspectives, vol 32, p 14). Perhaps it is unsurprising, then, that it is among society's lowest income groups that teen pregnancy rates are highest. In the face of such complex societal forces, those who try to influence teenagers' behaviour on a day-to-day basis undoubtedly have a tough job on their hands. There may be no single solution. More research is needed to produce detailed information on which kind of sex education programmes work best, and in which contexts. One approach is to involve older teenagers, on the premise that 14-year-olds may be more likely to listen to 18-year-olds than people of their parents' generation. Since having her son, Lynsey Tullin has started working for Brook, a young people's sexual health charity, to ensure that today's teenagers are more savvy about sex. "We talk the same language," she says. A tactic that she finds hits home is to describe new parenthood in all its gory details - the nappies, the lack of sleep, a social life in tatters. "We run workshops about being parents, telling them what we went through," she says. "It's a shock."

Different approaches to teenage sexuality

* Comprehensive sex education: provides explicit information about contraception, sexuality and sexual health.
* Abstinence-only approach: teaches that the only place for sex is within marriage, and the only certain way to avoid pregnancy and STDs is abstinence.
Does not teach about contraception.
* Abstinence-plus: promotes abstinence as the best choice, but provides varying degrees of information on contraception in case teens do become sexually active.

Heads I win, tails you lose

LOOK at any abstinence-only literature, and you'll read that this is the only certain way to prevent pregnancy and avoid catching a sexually transmitted disease (STD). "Abstinence. Failure rate 0 per cent," is the claim on one pro-abstinence website. But does this make sense? The most important measure of any method of preventing pregnancy and STDs is not its ideal effectiveness, but its "use effectiveness" - how successful it is in the real, sometimes messy, world of sex. Condoms, for instance, have a 97 per cent success rate at preventing pregnancy if used correctly, but have an estimated use-effectiveness of 86 per cent, due to problems such as tearing or slipping. If people who intend to use condoms but never get as far as opening the pack are included, some studies suggest the use-effectiveness of condoms could be as low as 30 per cent - the sort of figure abstinence fans shout from the rooftops. What about applying the same real-world rules to abstinence? Unfortunately there are no studies detailing the use-effectiveness of abstinence in preventing pregnancy, but it is highly unlikely to be 100 per cent, as commonly claimed by its proponents. Their reasoning goes like this: individuals who set out to remain abstinent but succumb to temptation and have sex are no longer seen as abstinence "users". And those who become pregnant may even be marked up as a failure for the contraception strategy if, say, they attempted to use a condom but bungled it. Abstinence campaigners are very vocal about the failings of contraception. But is it perhaps time to own up about the failure rate of abstinence?
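The gap between ideal and use effectiveness can be made concrete with a toy model: assume a method protects at its ideal rate when it is actually used, and not at all otherwise. This simple multiplicative form and the function name are illustrative assumptions, not the model used in the studies cited above:

```python
def use_effectiveness(ideal, proportion_actually_using):
    """Toy model of real-world ('use') effectiveness: the method's ideal
    effectiveness scaled by the fraction of intended uses in which it is
    actually used correctly."""
    return ideal * proportion_actually_using

# Condoms at 97% ideal effectiveness, but actually used correctly in
# only ~31% of the occasions people intended to use them, gives a figure
# near the 30% cited in the box above:
print(round(use_effectiveness(0.97, 0.31), 2))  # → 0.3
```

The same arithmetic applied to abstinence is the point of the box: an "ideal" rate of 100 per cent tells you nothing until it is scaled by how often the intention survives contact with reality.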
------------- Teenagers special: Bully boys http://www.newscientist.com/article.ns?id=mg18524891.400&print=true * 05 March 2005 * Clare Wilson LAST year the UK pop music station BBC Radio 1 mounted a "Beat Bullying" campaign, and over six weeks it was flooded with more than 1 million requests for its free "Beat Bullying" wristbands. As it struggled to meet demand, a thriving market opened up on eBay for these blue plastic bracelets. Bullying, it seems, struck a nerve - which is hardly surprising, given that an estimated 1 in 5 secondary schoolchildren in the UK has been bullied. Most efforts to tackle the problem involve working with the perpetrators as well as their victims. Teachers may be urged to help bullies recognise and modify their behaviour. Bullies, it is commonly believed, often come from unaffectionate or violent families, and may have poor social skills and low self-esteem. "Particularly in America, the traditional view is that [bullies] are malfunctioning," says Peter Smith, a psychologist at Goldsmiths College, University of London, who has advised the UK government on how best to tackle bullying. But could this view be wrong? Far from being the result of a damaged psyche, could bullying be a successful social strategy - albeit one that is very unpleasant for people on the receiving end? Over the past few years, some psychologists, including Smith, have started to think so. They believe that at least some kinds of bullying boost the status of the bully among his or her peers. Several studies by Anthony Pellegrini, an evolutionary developmental psychologist at the University of Minnesota, support this theory. In one study published in 2003, he asked a group of 138 schoolchildren aged between 12 and 14 to say how aggressive their classmates were, both physically and psychologically. On a separate scale, they had to say which members of the opposite sex they would ask to a party (Journal of Experimental Child Psychology, vol 85, p 257). 
Those most likely to get an invite were boys who were physically aggressive and girls who were psychologically so. "Boys have high status with their male peers if they're bullies, and girls like them," says Pellegrini. If it turns out to be true that bullying raises the perpetrator's social status, trying to change bullies' behaviour by boosting social skills and self-esteem may not work. "Some bullies, at least, are socially skilled," says Smith. "These skills have a function, which is to enhance your status in a competitive peer group."

--------------

Teenagers special: Live now, pay later
http://www.newscientist.com/article.ns?id=mg18524891.500&print=true
* 05 March 2005

In the west, some of the biggest threats to teenagers' long-term health stem from bad habits such as eating unhealthily and smoking. Policy makers are also paying growing attention to adolescents' mental health.

* Fewer than 20 per cent of 13-to-15-year-olds in England eat the recommended five portions of fruit and vegetables a day
* American teenagers spend an average of 3 to 4 hours a day watching TV
* In Australia, 20 to 25 per cent of under-17-year-olds are overweight or obese
* Almost a quarter of 15 and 16-year-olds in the UK smoke regularly
* Some estimates suggest that up to 1 in 5 adolescents have some form of psychological problem, ranging from eating disorders to depression or self-harming
* In England 11 per cent of 11-to-15-year-olds have used drugs in the last month.

From checker at panix.com Mon May 2 16:22:41 2005
From: checker at panix.com (Premise Checker)
Date: Mon, 2 May 2005 12:22:41 -0400 (EDT)
Subject: [Paleopsych] Wiki: Moral relativism
Message-ID:

Moral relativism
http://en.wikipedia.org/wiki/Moral_relativism

From Wikipedia, the free encyclopedia.

Moral relativism is the position that moral propositions do not reflect absolute or universal truths.
It not only holds that ethical judgments emerge from social customs and personal preferences, but also that there is no single standard by which to assess an ethical proposition's truth. Many relativists see moral values as applicable only within certain cultural boundaries. Some would even suggest that one person's ethical judgments or acts cannot be judged by another, though most relativists propound a more limited version of the theory. Some moral relativists -- for example, Jean-Paul Sartre (1905-1980) -- hold that a personal and subjective moral core lies at the foundation of our moral acts. They believe that public morality is a reflection of social convention, and that only personal, subjective morality is truly authentic. Moral relativism is not the same as moral pluralism, which acknowledges the co-existence of opposing ideas and practices, but does not require that they be equally valid. Moral relativism, in contrast, contends that opposing moral positions have no truth value, and that there is no preferred standard of reference by which to judge them.

Contents

1 History
2 Some philosophical considerations
3 Critics of relativism
4 See also
5 References and sources
6 External links

History

Moral relativism is not new. Protagoras' (circa 481-420 BC) assertion that "man is the measure of all things" is an early philosophical precursor to modern relativism. The Greek historian Herodotus (circa 484-420 BC) observed that each society thinks its own belief system and way of doing things are best. Various ancient philosophers also questioned the idea of an absolute standard of morality. The 18th-century Enlightenment philosopher David Hume (1711-1776) is in several important respects the father of both modern emotivism and moral relativism, though Hume himself was not a relativist.
He distinguished between matters of fact and matters of value, and suggested that moral judgments consist of the latter, for they do not deal with verifiable facts that obtain in the world, but only with our sentiments and passions, though he argued that some of our sentiments are universal. He is famous for denying any objective standard for morality, and suggested that the universe is indifferent to our preferences and our troubles. In the modern era, anthropologists such as Ruth Benedict (1887-1948) cautioned observers not to use their own cultural standards to evaluate those they were studying, an error known as ethnocentrism. Benedict said there are no morals, only customs, and in comparing customs, the anthropologist, "insofar as he remains an anthropologist ... is bound to avoid any weighting of one in favor of the other." To some extent, the increasing body of knowledge of great differences in belief among societies caused both social scientists and philosophers to question whether there can be any objective, absolute standards pertaining to values. This caused some to posit that differing systems have equal validity, with no standard for adjudicating among conflicting beliefs. The Finnish philosopher-anthropologist Edward Westermarck (1862-1939) was among the first to formulate a detailed theory of moral relativism. He contended that all moral ideas are subjective judgments that reflect one's upbringing. He rejected G. E. Moore's (1873-1958) intuitionism -- in vogue during the early part of the 20th century, and which identified moral propositions as true or false, and known to us through a special faculty of intuition -- due to the obvious differences in beliefs among societies, which he said was evidence that there is no innate, intuitive power.
Some philosophical considerations

So-called descriptive or normative relativists (for example, Ralph Barton Perry) accept that there are fundamental disagreements about the right course of action even when the same facts obtain and the same consequences are likely to arise. However, the descriptive relativist does not necessarily deny that there is one correct moral appraisal, given the same set of circumstances. Other descriptivists believe that opposing moral beliefs can both be true, though critics point out that this leads to obvious logical problems. The latter descriptivists, for example several leading Existentialists, believe that morality is entirely subjective and personal, and beyond the judgment of others. In this view, moral judgments are more akin to aesthetic considerations and are not amenable to rational analysis. In contrast, the metaethical relativist maintains that all moral judgments are based on either societal or individual standards, and that there is no single, objective standard by which one can assess the truth of a moral proposition. While he preferred to deal with more practical, real-life ethical matters, the British philosopher Bernard Williams (1929-2003) reluctantly came to this conclusion when he put on his metaethicist's hat. Metaethical relativists, in general, believe that the descriptive properties of terms such as good, bad, right, and wrong are not subject to universal truth conditions, but only to societal convention and personal preference. Given the same set of verifiable facts, some societies or individuals will have a fundamental disagreement about what ought to be done based on societal or individual norms, and these cannot be adjudicated using some independent standard of evaluation, for the latter standard will always be societal or personal and not universal, unlike, for example, the scientific standards for assessing temperature or for determining mathematical truths.
Moral relativism stands in marked contrast to moral absolutism, moral realism, and moral naturalism, which all maintain that there are moral facts, facts that can be both known and judged, whether through some process of verification or through intuition. These philosophies see morality as something that obtains in the world. Examples include the philosophy of Jean-Jacques Rousseau (1712-1778), who saw man's nature as inherently good, or of Ayn Rand, who believed morality is derived from man's exercising his unobstructed rationality. Others believe moral knowledge can be derived from external sources such as a deity or revealed doctrines, as would be maintained by various religions. Some hold that moral facts inhere in nature or reality, either as particular instances of perfect ideas in an eternal realm, as adumbrated by Plato (429-347 BC); or as a simple, unanalyzable property, as advocated by Moore. In each case, however, moral facts are invariant, though the circumstances to which they apply might be different. Moreover, in each case, moral facts are objective and can be determined. Some philosophers maintain that moral relativism devolves into emotivism, the movement inspired by logical positivists in the early part of the 20th century. Leading exponents of logical positivism include Rudolf Carnap (1891-1970) and A. J. Ayer (1910-1989). Going beyond Hume, the positivists contended that a proposition is meaningful only if it can be verified by logical or scientific inquiry. Thus, metaphysical propositions, which cannot be verified in this manner, are not simply incorrect, they are meaningless, nonsensical. Moral judgments are primarily expressions of emotional preferences or states, devoid of cognitive content; consequently, they are not subject to verification.
As such, moral propositions are essentially meaningless utterances or, at best, express personal attitudes (see, for example, [52]Charles L. Stevenson (1908-1979)). Not all relativists would hold that moral propositions are meaningless; indeed, many make any number of assertions about morality, assertions that they undoubtedly believe to be meaningful. However, other philosophers have argued that, since we have no means of analysing a moral proposition, it is essentially meaningless, and, in their view, relativism is therefore tantamount to emotivism. The political theorist [53]Leo Strauss (1899-1973) subscribed to a species of relativism, for he believed there are no objective criteria for assessing ethical principles, and that a rational morality is only possible in the limited sense that one must accept its ultimate subjectivity. This view is very similar to the one advocated by the existentialist philosophers [54]Martin Heidegger (1889-1976) and Sartre. The latter famously maintained that ethical principles arise only from our personal feelings at the time we act, and not from any antecedent principles. [55]Karl Marx (1818-1883) was a moral relativist, for he thought each moral system was simply a product of the dominant class, and that the movement of history, not a fixed, universal standard, will settle moral questions.

Critics of relativism

Those who believe in moral absolutes are often highly critical of moral relativism; some have been known to equate it with outright immorality or amorality. [57]The Holocaust, [58]Stalinism, [59]apartheid, [60]genocide, [61]unjust wars, [62]genital mutilation, [63]slavery, [64]terrorism, [65]Nazism, etc., present difficult problems for relativists. An observer in a particular time and place, depending on his outlook (e.g., culture, religion, background), might call something good that another observer in a particular time and place would call evil. 
Slavery, for example, was thought by many to be acceptable, even good, in other times and places, while it is viewed by many (though certainly not all) today as a great evil. Many critics of relativism would say that any number of evils can be justified based on subjective or cultural preferences, and that morality requires some universal standard against which to measure ethical judgments. Some relativists will reply that this is an unfair criticism of relativism, for it is really a metaethical theory, and not a normative one, and that the relativist may have strong moral beliefs, notwithstanding his foundational position. Critics of this view, however, argue that the complaint is disingenuous, and that the relativist is not making a mere metaethical assertion, that is, one that deals with the logical or linguistic structure of ethical propositions. These critics contend that stating there is no preferred standard of truth, or that standards are equally true, addresses the ultimate validity and truth of the ethical judgments themselves, which, they contend, is a normative judgment. In other words, the separation between metaethics and normative ethics is arguably a distinction without a difference. Some philosophers, for example, [66]Michael E. Berumen (b. 1952) and [67]R. M. Hare (1919-2002), argue that moral propositions are subject to logical rules, notwithstanding the absence of any factual content, including those subject to cultural or religious standards or norms. Thus, for example, they contend that one cannot hold contradictory ethical judgments. This allows for moral discourse with shared standards, notwithstanding the descriptive properties or truth conditions of moral terms. They do not affirm or deny there are moral facts, only that logic applies to our moral assertions; consequently, they contend, there is an objective and preferred standard of moral justification, albeit in a very limited sense. 
These philosophers also point out that, aside from logical constraints, all systems treat certain moral terms alike in an evaluative sense. This is similar to our treatment of other terms such as less or more, the meaning of which is universally understood and not dependent upon independent standards (measurements, for example, can be converted). It applies to good and bad when used in their non-moral sense, too: for example, when we say, "this is a good wrench" or "this is a bad wheel." This evaluative property of certain terms also allows people of different beliefs to have meaningful discussions on moral questions, even though they disagree about certain facts. Berumen, among others, has said that if relativism were wholly true, there would be no reason to prefer it over any other theory, given its fundamental contention that there is no preferred standard of truth. He says that it is not simply a metaethical theory, but a normative one, and that its truth, by its own definition, cannot in the final analysis be assessed or weighed against other theories.

See also

 * [69]Analytical philosophy
 * [70]Anthropology
 * [71]Business ethics
 * [72]Deontology
 * [73]Emotivism
 * [74]Ethics
 * [75]Logic
 * [76]Metaethics
 * [77]Moral codes
 * [78]Moral purchasing
 * [79]Morality
 * [80]Objectivism
 * [81]Philosophy
 * [82]Situational ethics
 * [83]Subjectivism

References and sources

Kurt Baier, "Difficulties in the Emotive-Imperative Theory" in Moral Judgement: Readings in Contemporary Meta-Ethics
Ruth Benedict, Patterns of Culture (Mentor)
Michael E. Berumen, Do No Evil: Ethics with Applications to Economic Theory and Business (iUniverse)
R. M. Hare, Sorting out Ethics (Oxford University Press)
David Hume, An Enquiry Concerning the Principles of Morals, Edited by Tom L. Beauchamp (Oxford University Press)
G. E. Moore, Principia Ethica (Cambridge University Press)
Jean-Paul Sartre, "Existentialism is a Humanism" in Existentialism From Dostoevsky to Sartre, Edited by Walter Kaufmann (World Publishing Company)
Leo Strauss, The Rebirth of Classical Political Rationalism, Edited by Thomas L. Pangle (University of Chicago Press)
Edward Westermarck, The Origin and Development of the Moral Ideas (Macmillan)
Bernard Williams, Ethics and the Limits of Philosophy (Harvard University Press)

External links

 * [86]Objectivism and Relativism (http://www.utm.edu/research/iep/e/ethics.htm#Metaphysical%20Issues:%20Objectivism%20and%20Relativism)
 * [87]Moral Relativism (http://www.AllAboutPhilosophy.org/Moral-Relativism.htm) A Christian Perspective.

[89]Categories: [90]Ethics | [91]Social philosophy

References 2. http://en.wikipedia.org/wiki/Moral 3. http://en.wikipedia.org/wiki/Moral_absolutism 4. http://en.wikipedia.org/wiki/Moral_universalism 5. http://en.wikipedia.org/wiki/Customs 6. http://en.wikipedia.org/wiki/Values 7. http://en.wikipedia.org/wiki/Jean-Paul_Sartre 8. http://en.wikipedia.org/wiki/Subjective 9. http://en.wikipedia.org/wiki/Moral_core 10. http://en.wikipedia.org/wiki/Morality 11. http://en.wikipedia.org/wiki/Pluralism 12. http://en.wikipedia.org/wiki/Moral_relativism#History 13. http://en.wikipedia.org/wiki/Moral_relativism#Some_philosophical_considerations 14. http://en.wikipedia.org/wiki/Moral_relativism#Critics_of_relativism 15. http://en.wikipedia.org/wiki/Moral_relativism#See_also 16. http://en.wikipedia.org/wiki/Moral_relativism#References_and_sources 17. http://en.wikipedia.org/wiki/Moral_relativism#External_links 18. http://en.wikipedia.org/w/index.php?title=Moral_relativism&action=edit&section=1 19. http://en.wikipedia.org/wiki/Protagoras 20. http://en.wikipedia.org/wiki/Philosophical 21. http://en.wikipedia.org/wiki/Greek 22. http://en.wikipedia.org/wiki/Herodotus 23. http://en.wikipedia.org/wiki/Philosophers 24. 
http://en.wikipedia.org/wiki/Enlightenment 25. http://en.wikipedia.org/wiki/David_Hume 26. http://en.wikipedia.org/wiki/Emotivism 27. http://en.wikipedia.org/wiki/Anthropologists 28. http://en.wikipedia.org/wiki/Ruth_Benedict 29. http://en.wikipedia.org/wiki/Ethnocentricism 30. http://en.wikipedia.org/wiki/Edward_Westermarck 31. http://en.wikipedia.org/wiki/G.E._Moore 32. http://en.wikipedia.org/wiki/Intuition 33. http://en.wikipedia.org/w/index.php?title=Moral_relativism&action=edit&section=2 34. http://en.wikipedia.org/w/index.php?title=Ralph_Barton_Perry&action=edit 35. http://en.wikipedia.org/wiki/Existentialists 36. http://en.wikipedia.org/wiki/Bernard_Williams 37. http://en.wikipedia.org/wiki/Universal 38. http://en.wikipedia.org/wiki/Truth 39. http://en.wikipedia.org/wiki/Moral_absolutism 40. http://en.wikipedia.org/wiki/Moral_realism 41. http://en.wikipedia.org/w/index.php?title=Moral_naturalism&action=edit 42. http://en.wikipedia.org/wiki/Jean-Jacques_Rousseau 43. http://en.wikipedia.org/wiki/Ayn_Rand 44. http://en.wikipedia.org/wiki/Religion 45. http://en.wikipedia.org/wiki/Plato 46. http://en.wikipedia.org/wiki/Emotivism 47. http://en.wikipedia.org/wiki/Logical_positivists 48. http://en.wikipedia.org/wiki/Rudolph_Carnap 49. http://en.wikipedia.org/wiki/A._J._Ayer 50. http://en.wikipedia.org/wiki/Logic 51. http://en.wikipedia.org/wiki/Metaphysical 52. http://en.wikipedia.org/wiki/Charles_L._Stevenson 53. http://en.wikipedia.org/wiki/Leo_Strauss 54. http://en.wikipedia.org/wiki/Martin_Heidegger 55. http://en.wikipedia.org/wiki/Karl_Marx 56. http://en.wikipedia.org/w/index.php?title=Moral_relativism&action=edit&section=3 57. http://en.wikipedia.org/wiki/The_Holocaust 58. http://en.wikipedia.org/wiki/Stalinism 59. http://en.wikipedia.org/wiki/Apartheid 60. http://en.wikipedia.org/wiki/Genocide 61. http://en.wikipedia.org/w/index.php?title=Unjust_war&action=edit 62. http://en.wikipedia.org/wiki/Genital_mutilation 63. http://en.wikipedia.org/wiki/Slavery 64. 
http://en.wikipedia.org/wiki/Terrorism 65. http://en.wikipedia.org/wiki/Nazism 66. http://en.wikipedia.org/wiki/Michael_E._Berumen 67. http://en.wikipedia.org/wiki/R._M._Hare 68. http://en.wikipedia.org/w/index.php?title=Moral_relativism&action=edit&section=4 69. http://en.wikipedia.org/wiki/Analytical_philosophy 70. http://en.wikipedia.org/wiki/Anthropology 71. http://en.wikipedia.org/wiki/Business_ethics 72. http://en.wikipedia.org/wiki/Deontology 73. http://en.wikipedia.org/wiki/Emotivism 74. http://en.wikipedia.org/wiki/Ethics 75. http://en.wikipedia.org/wiki/Logic 76. http://en.wikipedia.org/wiki/Metaethics 77. http://en.wikipedia.org/wiki/Moral_codes 78. http://en.wikipedia.org/wiki/Moral_purchasing 79. http://en.wikipedia.org/wiki/Morality 80. http://en.wikipedia.org/wiki/Objectivism 81. http://en.wikipedia.org/wiki/Philosophy 82. http://en.wikipedia.org/wiki/Situational_ethics 83. http://en.wikipedia.org/wiki/Subjectivism 84. http://en.wikipedia.org/w/index.php?title=Moral_relativism&action=edit&section=5 85. http://en.wikipedia.org/w/index.php?title=Moral_relativism&action=edit&section=6 86. http://www.utm.edu/research/iep/e/ethics.htm#Metaphysical%20Issues:%20Objectivism%20and%20Relativism 87. http://www.AllAboutPhilosophy.org/Moral-Relativism.htm 88. http://en.wikipedia.org/wiki/Moral_relativism 89. http://en.wikipedia.org/w/index.php?title=Special:Categories&article=Moral_relativism 90. http://en.wikipedia.org/wiki/Category:Ethics 91. 
http://en.wikipedia.org/wiki/Category:Social_philosophy From checker at panix.com Mon May 2 16:22:53 2005 From: checker at panix.com (Premise Checker) Date: Mon, 2 May 2005 12:22:53 -0400 (EDT) Subject: [Paleopsych] Internet Encyclopedia of Philosophy: Ethics Message-ID: Ethics [Internet Encyclopedia of Philosophy] http://www.utm.edu/research/iep/e/ethics.htm#Metaphysical%20Issues:%20Objectivism%20and%20Relativism 2003 The field of ethics, also called moral philosophy, involves systematizing, defending, and recommending concepts of right and wrong behavior. Philosophers today usually divide ethical theories into three general subject areas: metaethics, normative ethics, and applied ethics. Metaethics investigates where our ethical principles come from, and what they mean. Are they merely social inventions? Do they involve more than expressions of our individual emotions? Metaethical answers to these questions focus on the issues of universal truths, the will of God, the role of reason in ethical judgments, and the meaning of ethical terms themselves. Normative ethics takes on a more practical task, which is to arrive at moral standards that regulate right and wrong conduct. This may involve articulating the good habits that we should acquire, the duties that we should follow, or the consequences of our behavior on others. Finally, applied ethics involves examining specific controversial issues, such as abortion, infanticide, animal rights, environmental concerns, homosexuality, capital punishment, or nuclear war. By using the conceptual tools of metaethics and normative ethics, discussions in applied ethics try to resolve these controversial issues. The lines of distinction between metaethics, normative ethics, and applied ethics are often blurry. For example, the issue of abortion is an applied ethical topic since it involves a specific type of controversial behavior. 
But it also depends on more general normative principles, such as the right of self-rule and the right to life, which are litmus tests for determining the morality of that procedure. The issue also rests on metaethical questions such as "where do rights come from?" and "what kind of beings have rights?"
_________________________________________________________________

Table of Contents (Clicking on the links below will take you to that part of this article)

 * [2]Metaethics
 * [3]Metaphysical Issues: Objectivism and Relativism
 * [4]Psychological Issues in Metaethics
 * [5]Egoism and Altruism
 * [6]Emotion and Reason
 * [7]Male and Female Morality

[8]Normative Ethics
 * [9]Virtue Theories
 * [10]Duty Theories
 * [11]Consequentialist Theories
 * [12]Types of Utilitarianism
 * [13]Ethical Egoism and Social Contract Theory

[14]Applied Ethics
 * [15]Normative Principles in Applied Ethics
 * [16]Issues in Applied Ethics

[17]References and Further Reading
_________________________________________________________________

Metaethics

The term "meta" means after or beyond, and, consequently, the notion of metaethics involves a removed, or bird's-eye, view of the entire project of ethics. We may define metaethics as the study of the origin and meaning of ethical concepts. When compared to normative ethics and applied ethics, the field of metaethics is the least precisely defined area of moral philosophy. Two issues, though, are prominent: (1) metaphysical issues concerning whether morality exists independently of humans, and (2) psychological issues concerning the underlying mental basis of our moral judgments and conduct.

Metaphysical Issues: Objectivism and Relativism

"Metaphysics" is the study of the kinds of things that exist in the universe. Some things in the universe are made of physical stuff, such as rocks; and perhaps other things are nonphysical in nature, such as thoughts, spirits, and gods. 
The metaphysical component of metaethics involves discovering specifically whether moral values are eternal truths that exist in a spirit-like realm, or simply human conventions. There are two general directions that discussions of this topic take, one other-worldly and one this-worldly. Proponents of the other-worldly view typically hold that moral values are objective in the sense that they exist in a spirit-like realm beyond subjective human conventions. They also hold that they are absolute, or eternal, in that they never change, and also that they are universal insofar as they apply to all rational creatures around the world and throughout time. The most dramatic example of this view is Plato, who was inspired by the field of mathematics. When we look at numbers and mathematical relations, such as 1+1=2, they seem to be timeless concepts that never change, and apply everywhere in the universe. Humans do not invent numbers, and humans cannot alter them. Plato explained the eternal character of mathematics by stating that numbers are abstract entities that exist in a spirit-like realm. He noted that moral values also are absolute truths and thus are also abstract, spirit-like entities. In this sense, for Plato, moral values are spiritual objects. Medieval philosophers commonly grouped all moral principles together under the heading of "eternal law," which were also frequently seen as spirit-like objects. 17th century British philosopher Samuel Clarke described them as spirit-like relationships rather than spirit-like objects. In either case, though, they exist in a spirit-like realm. A different other-worldly approach to the metaphysical status of morality is divine commands issuing from God's will. Sometimes called voluntarism, this view was inspired by the notion of an all-powerful God who is in control of everything. God simply wills things, and they become reality. 
He wills the physical world into existence, he wills human life into existence and, similarly, he wills all moral values into existence. Proponents of this view, such as medieval philosopher William of Ockham, believe that God wills moral principles, such as "murder is wrong," and these exist in God's mind as commands. God informs humans of these commands by implanting us with moral intuitions or revealing these commands in scripture. The second and more this-worldly approach to the metaphysical status of morality follows in the skeptical philosophical tradition, such as that articulated by Greek philosopher Sextus Empiricus, and denies the objective status of moral values. Technically, skeptics did not reject moral values themselves, but only denied that values exist as spirit-like objects, or as divine commands in the mind of God. Moral values, they argued, are strictly human inventions, a position that has since been called moral relativism. There are two distinct forms of moral relativism. The first is individual relativism, which holds that individual people create their own moral standards. Friedrich Nietzsche, for example, argued that the superhuman creates his or her morality distinct from and in reaction to the slave-like value system of the masses. The second is cultural relativism, which maintains that morality is grounded in the approval of one's society and not simply in the preferences of individual people. This view was advocated by Sextus, and in more recent centuries by Michel de Montaigne and William Graham Sumner. In addition to espousing skepticism and relativism, this-worldly approaches to the metaphysical status of morality deny the absolute and universal nature of morality and hold instead that moral values in fact change from society to society throughout time and throughout the world. 
They frequently attempt to defend their position by citing examples of values that differ dramatically from one culture to another, such as attitudes about polygamy, homosexuality and human sacrifice.

Psychological Issues in Metaethics

A second area of metaethics involves the psychological basis of our moral judgments and conduct, particularly understanding what motivates us to be moral. We might explore this subject by asking the simple question, "Why be moral?" Even if I am aware of basic moral standards, such as "don't kill" and "don't steal," this does not necessarily mean that I will be psychologically compelled to act on them. Some answers to the question "Why be moral?" are to avoid punishment, to gain praise, to attain happiness, to be dignified, or to fit in with society.

Egoism and Altruism

One important area of moral psychology concerns the inherent selfishness of humans. 17th century British philosopher Thomas Hobbes held that many, if not all, of our actions are prompted by selfish desires. Even if an action seems selfless, such as donating to charity, there are still selfish causes for this, such as experiencing power over other people. This view is called psychological egoism and maintains that self-oriented interests ultimately motivate all human actions. Closely related to psychological egoism is a view called psychological hedonism, which is the view that pleasure is the specific driving force behind all of our actions. 18th century British philosopher Joseph Butler agreed that instinctive selfishness and pleasure prompt much of our conduct. However, Butler argued that we also have an inherent psychological capacity to show benevolence to others. This view is called psychological altruism and maintains that at least some of our actions are motivated by instinctive benevolence. 
Emotion and Reason

A second area of moral psychology involves a dispute concerning the role of reason in motivating moral actions. If, for example, I make the statement "abortion is morally wrong," am I making a rational assessment or only expressing my feelings? On the one side of the dispute, 18th century British philosopher David Hume argued that moral assessments involve our emotions, and not our reason. We can amass all the reasons we want, but that alone will not constitute a moral assessment. We need a distinctly emotional reaction in order to make a moral pronouncement. Reason might be of service in giving us the relevant data, but, in Hume's words, "reason is, and ought to be, the slave of the passions." Inspired by Hume's anti-rationalist views, some 20th century philosophers, most notably A.J. Ayer, similarly denied that moral assessments are factual descriptions. For example, although the statement "it is good to donate to charity" may on the surface look as though it is a factual description about charity, it is not. Instead, a moral utterance like this involves two things. First, I (the speaker) am expressing my personal feelings of approval about charitable donations and I am in essence saying "Hooray for charity!" This is called the emotive element insofar as I am expressing my emotions about some specific behavior. Second, I (the speaker) am trying to get you to donate to charity and am essentially giving the command, "Donate to charity!" This is called the prescriptive element in the sense that I am prescribing some specific behavior. From Hume's day forward, more rationally-minded philosophers have opposed these emotive theories of ethics and instead argued that moral assessments are indeed acts of reason. 18th century German philosopher Immanuel Kant is a case in point. Although emotional factors often do influence our conduct, he argued, we should nevertheless resist that kind of sway. 
Instead, true moral action is motivated only by reason when it is free from emotions and desires. A recent rationalist approach, offered by Kurt Baier, was proposed in direct opposition to the emotivist and prescriptivist theories of Ayer and others. Baier focuses more broadly on the reasoning and argumentation process that takes place when making moral choices. All of our moral choices are, or at least can be, backed by some reason or justification. If I claim that it is wrong to steal someone's car, then I should be able to justify my claim with some kind of argument. For example, I could argue that stealing Smith's car is wrong since this would upset her, violate her ownership rights, or put the thief at risk of getting caught. According to Baier, then, proper moral decision making involves giving the best reasons in support of one course of action versus another.

Male and Female Morality

A third area of moral psychology focuses on whether there is a distinctly female approach to ethics that is grounded in the psychological differences between men and women. Discussions of this issue focus on two claims: (1) traditional morality is male-centered, and (2) there is a unique female perspective of the world which can be shaped into a value theory. According to many feminist philosophers, traditional morality is male-centered since it is modeled after practices that have been traditionally male-dominated, such as acquiring property, engaging in business contracts, and governing societies. The rigid systems of rules required for trade and government were then taken as models for the creation of equally rigid systems of moral rules, such as lists of rights and duties. Women, by contrast, have traditionally had a nurturing role by raising children and overseeing domestic life. These tasks require less rule following, and more spontaneous and creative action. 
Using the woman's experience as a model for moral theory, then, the basis of morality would be spontaneously caring for others as would be appropriate in each unique circumstance. On this model, the agent becomes part of the situation and acts caringly within that context. This stands in contrast with male-modeled morality where the agent is a mechanical actor who performs his required duty, but can remain distanced from and unaffected by the situation. A care-based approach to morality, as it is sometimes called, is offered by feminist ethicists as either a replacement for or a supplement to traditional male-modeled moral systems.

Normative Ethics

Normative ethics involves arriving at moral standards that regulate right and wrong conduct. In a sense, it is a search for an ideal litmus test of proper behavior. The Golden Rule is a classic example of a normative principle: We should do to others what we would want others to do to us. Since I do not want my neighbor to steal my car, then it is wrong for me to steal her car. Since I would want people to feed me if I was starving, then I should help feed starving people. Using this same reasoning, I can theoretically determine whether any possible action is right or wrong. So, based on the Golden Rule, it would also be wrong for me to lie to, harass, victimize, assault, or kill others. The Golden Rule is an example of a normative theory that establishes a single principle against which we judge all actions. Other normative theories focus on a set of foundational principles, or a set of good character traits. The key assumption in normative ethics is that there is only one ultimate criterion of moral conduct, whether it is a single rule or a set of principles. Three strategies will be noted here: (1) virtue theories, (2) duty theories, and (3) consequentialist theories. 
Virtue Theories

Many philosophers believe that morality consists of following precisely defined rules of conduct, such as "don't kill," or "don't steal." Presumably, I must learn these rules, and then make sure each of my actions lives up to the rules. Virtue theorists, however, place less emphasis on learning rules, and instead stress the importance of developing good habits of character, such as benevolence. Once I've acquired benevolence, for example, I will then habitually act in a benevolent manner. Historically, virtue theory is one of the oldest normative traditions in Western philosophy, having its roots in ancient Greek civilization. Plato emphasized four virtues in particular, which were later called cardinal virtues: wisdom, courage, temperance and justice. Other important virtues are fortitude, generosity, self-respect, good temper, and sincerity. In addition to advocating good habits of character, virtue theorists hold that we should avoid acquiring bad character traits, or vices, such as cowardice, insensibility, injustice, and vanity. Virtue theory emphasizes moral education since virtuous character traits are developed in one's youth. Adults, therefore, are responsible for instilling virtues in the young. Aristotle argued that virtues are good habits that we acquire, which regulate our emotions. For example, in response to my natural feelings of fear, I should develop the virtue of courage, which allows me to be firm when facing danger. Analyzing 11 specific virtues, Aristotle argued that most virtues fall at a mean between more extreme character traits. With courage, for example, if I do not have enough courage, I develop the disposition of cowardice, which is a vice. If I have too much courage I develop the disposition of rashness, which is also a vice. According to Aristotle, it is not an easy task to find the perfect mean between extreme character traits. In fact, we need assistance from our reason to do this. 
After Aristotle, medieval theologians supplemented Greek lists of virtues with three Christian ones, or theological virtues: faith, hope, and charity. Interest in virtue theory continued through the Middle Ages and declined in the 19th century with the rise of the alternative moral theories below. In the mid-20th century virtue theory received special attention from philosophers who believed that more recent ethical theories were misguided for focusing too heavily on rules and actions, rather than on virtuous character traits. Alasdair MacIntyre defended the central role of virtues in moral theory and argued that virtues are grounded in and emerge from within social traditions.

Duty Theories

Many of us feel that there are clear obligations we have as human beings, such as to care for our children, and to not commit murder. Duty theories base morality on specific, foundational principles of obligation. These theories are sometimes called deontological, from the Greek word deon, or duty, in view of the foundational nature of our duty or obligation. They are also sometimes called nonconsequentialist since these principles are obligatory, irrespective of the consequences that might follow from our actions. For example, it is wrong to not care for our children even if it results in some great benefit, such as financial savings. There are four central duty theories. The first is that championed by 17th century German philosopher Samuel Pufendorf, who classified dozens of duties under three headings: duties to God, duties to oneself, and duties to others. Concerning our duties towards God, he argued that there are two kinds: (1) a theoretical duty to know the existence and nature of God, and (2) a practical duty to both inwardly and outwardly worship God. 
Concerning our duties towards oneself, these are also of two sorts: (1) duties of the soul, which involve developing one's skills and talents, and (2) duties of the body, which involve not harming our bodies, as we might through gluttony or drunkenness, and not killing oneself. Concerning our duties towards others, Pufendorf divides these between absolute duties, which are universally binding on people, and conditional duties, which are the result of contracts between people. Absolute duties are of three sorts: (1) avoid wronging others; (2) treat people as equals; and (3) promote the good of others. Conditional duties involve various types of agreements, the principal one of which is the duty to keep one's promises. A second duty-based approach to ethics is rights theory. Most generally, a right is a justified claim against another person's behavior, such as my right to not be harmed by you. Rights and duties are related in such a way that the rights of one person imply the duties of another person. For example, if I have a right to payment of $10 by Smith, then Smith has a duty to pay me $10. This is called the correlativity of rights and duties. The most influential early account of rights theory is that of 17th century British philosopher John Locke, who argued that the laws of nature mandate that we should not harm anyone's life, health, liberty or possessions. For Locke, these are our natural rights, given to us by God. Following Locke, the United States Declaration of Independence authored by Thomas Jefferson recognizes three foundational rights: life, liberty, and the pursuit of happiness. Jefferson and other rights theorists maintained that we deduce other more specific rights from these, including the rights of property, movement, speech, and religious expression. There are four features traditionally associated with moral rights. First, rights are natural insofar as they are not invented or created by governments. 
Second, they are universal insofar as they do not change from country to country. Third, they are equal in the sense that rights are the same for all people, irrespective of gender, race, or handicap. Fourth, they are inalienable, which means that I cannot hand over my rights to another person, such as by selling myself into slavery.

A third duty-based theory is that of Kant, which emphasizes a single principle of duty. Influenced by Pufendorf, Kant agreed that we have moral duties to ourselves and others, such as developing one's talents and keeping our promises to others. However, Kant argued that there is a more foundational principle of duty that encompasses our particular duties. It is a single, self-evident principle of reason that he calls the "categorical imperative." A categorical imperative, he argued, is fundamentally different from hypothetical imperatives, which hinge on some personal desire that we have, for example, "If you want to get a good job, then you ought to go to college." By contrast, a categorical imperative simply mandates an action, irrespective of one's personal desires, such as "You ought to do X." Kant gives at least four versions of the categorical imperative, but one is especially direct: treat people as an end, and never as a means to an end. That is, we should always treat people with dignity, and never use them as mere instruments. For Kant, we treat people as an end whenever our actions toward someone reflect the inherent value of that person. Donating to charity, for example, is morally correct since this acknowledges the inherent value of the recipient. By contrast, we treat someone as a means to an end whenever we treat that person as a tool to achieve something else. It is wrong, for example, to steal my neighbor's car, since I would be treating her as a means to my own happiness. The categorical imperative also regulates the morality of actions that affect us individually.
Suicide, for example, would be wrong since I would be treating my life as a means to the alleviation of my misery. Kant believes that the morality of all actions can be determined by appealing to this single principle of duty.

A fourth and more recent duty-based theory is that of the British philosopher W.D. Ross, which emphasizes prima facie duties. Like his 17th- and 18th-century counterparts, Ross argues that our duties are "part of the fundamental nature of the universe." However, Ross's list of duties is much shorter, which he believes reflects our actual moral convictions:

* Fidelity: the duty to keep promises
* Reparation: the duty to compensate others when we harm them
* Gratitude: the duty to thank those who help us
* Justice: the duty to recognize merit
* Beneficence: the duty to improve the conditions of others
* Self-improvement: the duty to improve our virtue and intelligence
* Nonmaleficence: the duty to not injure others

Ross recognizes that situations will arise when we must choose between two conflicting duties. In a classic example, suppose I borrow my neighbor's gun and promise to return it when he asks for it. One day, in a fit of rage, my neighbor pounds on my door and asks for the gun so that he can take vengeance on someone. On the one hand, the duty of fidelity obligates me to return the gun; on the other hand, the duty of nonmaleficence obligates me to avoid injuring others and thus not return the gun. According to Ross, I will intuitively know which of these duties is my actual duty, and which is my apparent or prima facie duty. In this case, my duty of nonmaleficence emerges as my actual duty and I should not return the gun.

[26]Back to Table of Contents

Consequentialist Theories

It is common for us to determine our moral responsibility by weighing the consequences of our actions.
According to consequentialist normative theories, correct moral conduct is determined solely by a cost-benefit analysis of an action's consequences:

Consequentialism: An action is morally right if the consequences of that action are more favorable than unfavorable.

Consequentialist normative principles require that we first tally both the good and bad consequences of an action. Second, we then determine whether the total good consequences outweigh the total bad consequences. If the good consequences are greater, then the action is morally proper. If the bad consequences are greater, then the action is morally improper. Consequentialist theories are sometimes called teleological theories, from the Greek word telos, or end, since the end result of the action is the sole determining factor of its morality. Consequentialist theories were popularized in the 18th century by philosophers who wanted a quick way to morally assess an action by appealing to experience, rather than by appealing to gut intuitions or long lists of questionable duties. In fact, the most attractive feature of consequentialism is that it appeals to publicly observable consequences of actions. Most versions of consequentialism are more precisely formulated than the general principle above. In particular, competing consequentialist theories specify which consequences for affected groups of people are relevant. Three subdivisions of consequentialism emerge:

Ethical Egoism: an action is morally right if the consequences of that action are more favorable than unfavorable only to the agent performing the action.

Ethical Altruism: an action is morally right if the consequences of that action are more favorable than unfavorable to everyone except the agent.

Utilitarianism: an action is morally right if the consequences of that action are more favorable than unfavorable to everyone.

All three of these theories focus on the consequences of actions for different groups of people.
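The two-step tallying procedure just described is mechanical enough to sketch in code. The following toy Python example is purely illustrative: the consequence descriptions and their numeric weights are invented assumptions, not values any consequentialist philosopher has actually proposed.

```python
# Toy sketch of the consequentialist tallying procedure described above.
# Each consequence is a (description, value) pair: positive values mark
# good consequences, negative values bad ones. All numbers are invented.

def morally_right(consequences):
    """Step 1: tally the good and bad consequences.
    Step 2: the action is morally right if good outweighs bad."""
    total = sum(value for _, value in consequences)
    return total > 0

# Invented example: consequences of breaking a promise to gain $100.
action = [
    ("agent gains $100", +5),
    ("promisee is harmed", -8),
    ("trust in promises erodes", -3),
]
print(morally_right(action))  # False: the bad consequences outweigh the good
```

Ethical egoism, ethical altruism, and utilitarianism would differ only in which pairs are admitted into the tally: those affecting the agent alone, everyone except the agent, or everyone.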
But, like all normative theories, the above three theories are rivals of each other. They also yield different conclusions. Consider the following example. A woman was traveling through a developing country when she witnessed a car in front of her run off the road and roll over several times. She asked the hired driver to pull over to assist, but, to her surprise, the driver accelerated nervously past the scene. A few miles down the road the driver explained that in his country, if someone assists an accident victim, then the police often hold the assisting person responsible for the accident itself. If the victim dies, then the assisting person could be held responsible for the death. The driver continued explaining that road accident victims are therefore usually left unattended and often die from exposure to the country's harsh desert conditions.

On the principle of ethical egoism, the woman in this illustration would be concerned only with the consequences of her attempted assistance as she herself would be affected. Clearly, the decision to drive on would be the morally proper choice. On the principle of ethical altruism, she would be concerned only with the consequences of her action as others are affected, particularly the accident victim. Tallying only those consequences reveals that assisting the victim would be the morally correct choice, irrespective of the negative consequences that result for her. On the principle of utilitarianism, she must consider the consequences for both herself and the victim. The outcome here is less clear, and the woman would need to precisely calculate the overall benefit versus disbenefit of her action.

[27]Back to Table of Contents

Types of Utilitarianism

Jeremy Bentham presented one of the earliest fully developed systems of utilitarianism. Two features of his theory are noteworthy.
First, Bentham proposed that we tally the consequences of each action we perform and thereby determine on a case-by-case basis whether an action is morally right or wrong. This aspect of Bentham's theory is known as act-utilitarianism. Second, Bentham also proposed that we tally the pleasure and pain that result from our actions. For Bentham, pleasure and pain are the only consequences that matter in determining whether our conduct is moral. This aspect of Bentham's theory is known as hedonistic utilitarianism.

Critics point out limitations in both of these aspects. First, according to act-utilitarianism, it would be morally wrong to waste time on leisure activities such as watching television, since our time could be spent in ways that produced a greater social benefit, such as charity work. But prohibiting leisure activities doesn't seem reasonable. More significantly, according to act-utilitarianism, specific acts of torture or slavery would be morally permissible if the social benefit of these actions outweighed the disbenefit. A revised version of utilitarianism called rule-utilitarianism addresses these problems. According to rule-utilitarianism, a behavioral code or rule is morally right if the consequences of adopting that rule are more favorable than unfavorable to everyone. Unlike act-utilitarianism, which weighs the consequences of each particular action, rule-utilitarianism offers a litmus test only for the morality of moral rules, such as "stealing is wrong." Adopting a rule against theft clearly has more favorable consequences than unfavorable consequences for everyone. The same is true for moral rules against lying or murdering. Rule-utilitarianism, then, offers a three-tiered method for judging conduct. A particular action, such as stealing my neighbor's car, is judged wrong since it violates a moral rule against theft. In turn, the rule against theft is morally binding because adopting this rule produces favorable consequences for everyone.
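The tiered structure of rule-utilitarianism, where an action is judged against a rule and the rule is judged by the consequences of adopting it, can be sketched as a toy Python example. The rule names and net-consequence scores below are invented placeholders for illustration, not figures anyone has computed.

```python
# Toy sketch of rule-utilitarianism's tiered test.
# Net consequences (favorable minus unfavorable, for everyone) of
# adopting each rule; the scores are invented placeholders.
RULE_ADOPTION_SCORES = {
    "do not steal": 50,  # adopting a rule against theft benefits everyone
    "do not lie": 40,
}

def rule_is_binding(rule: str) -> bool:
    """A rule is morally binding if adopting it is more favorable
    than unfavorable to everyone (net score > 0)."""
    return RULE_ADOPTION_SCORES.get(rule, 0) > 0

def action_is_wrong(violated_rule: str) -> bool:
    """A particular action is wrong if it violates a binding rule."""
    return rule_is_binding(violated_rule)

# Stealing my neighbor's car violates the binding rule against theft:
print(action_is_wrong("do not steal"))  # True: the action is judged wrong
```

Act-utilitarianism, by contrast, would score each particular act on its own, which is what lets it license the occasional "beneficial" theft that the rule-based test forbids.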
John Stuart Mill's version of utilitarianism is rule-oriented. Second, according to hedonistic utilitarianism, pleasurable consequences are the only factors that matter, morally speaking. This, though, seems too restrictive since it ignores other morally significant consequences that are not necessarily pleasing or painful. For example, acts which foster loyalty and friendship are valued, yet they are not always pleasing. In response to this problem, G.E. Moore proposed ideal utilitarianism, which involves tallying any consequence that we intuitively recognize as good or bad (and not simply as pleasurable or painful). Also, R.M. Hare proposed preference utilitarianism, which involves tallying any consequence that fulfills our preferences.

[28]Back to Table of Contents

Ethical Egoism and Social Contract Theory

We have seen that Thomas Hobbes was an advocate of the metaethical theory of psychological egoism, the view that all of our actions are selfishly motivated. Upon that foundation, Hobbes developed a normative theory known as social contract theory, which is a type of rule-ethical-egoism. According to Hobbes, for purely selfish reasons, the agent is better off living in a world with moral rules than one without moral rules. For without moral rules, we are subject to the whims of other people's selfish interests. Our property, our families, and even our lives are at continual risk. Selfishness alone will therefore motivate each agent to adopt a basic set of rules which will allow for a civilized community. Not surprisingly, these rules would include prohibitions against lying, stealing, and killing. However, these rules will ensure safety for each agent only if the rules are enforced. As selfish creatures, each of us would plunder our neighbors' property once their guards were down. Each agent would then be at risk from his neighbor.
Therefore, for selfish reasons alone, we devise a means of enforcing these rules: we create a policing agency which punishes us if we violate these rules.

[29]Back to Table of Contents

Applied Ethics

Applied ethics is the branch of ethics which consists of the analysis of specific, controversial moral issues such as abortion, animal rights, or euthanasia. In recent years applied ethical issues have been subdivided into convenient groups such as medical ethics, business ethics, environmental ethics, and sexual ethics. Generally speaking, two features are necessary for an issue to be considered an "applied ethical issue." First, the issue needs to be controversial in the sense that there are significant groups of people both for and against the issue at hand. The issue of drive-by shootings, for example, is not an applied ethical issue, since everyone agrees that this practice is grossly immoral. By contrast, the issue of gun control would be an applied ethical issue since there are significant groups of people both for and against gun control.

The second requirement for an issue to be an applied ethical issue is that it must be a distinctly moral issue. On any given day, the media presents us with an array of sensitive issues such as affirmative action policies, gays in the military, involuntary commitment of the mentally impaired, capitalistic vs. socialistic business practices, public vs. private health care systems, or energy conservation. Although all of these issues are controversial and have an important impact on society, they are not all moral issues. Some are only issues of social policy. The aim of social policy is to help make a given society run efficiently by devising conventions, such as traffic laws, tax laws, and zoning codes. Moral issues, by contrast, concern more universally obligatory practices, such as our duty to avoid lying, and are not confined to individual societies.
Frequently, issues of social policy and morality overlap, as with murder, which is both socially prohibited and immoral. However, the two groups of issues are often distinct. For example, many people would argue that sexual promiscuity is immoral, but may not feel that there should be social policies regulating sexual conduct, or laws punishing us for promiscuity. Similarly, some social policies forbid residents in certain neighborhoods from having yard sales. But, so long as the neighbors are not offended, there is nothing immoral in itself about a resident having a yard sale in one of these neighborhoods. Thus, to qualify as an applied ethical issue, the issue must be more than one of mere social policy: it must be morally relevant as well.

In theory, resolving particular applied ethical issues should be easy. With the issue of abortion, for example, we would simply determine its morality by consulting our normative principle of choice, such as act-utilitarianism. If a given abortion produces greater benefit than disbenefit, then, according to act-utilitarianism, it would be morally acceptable to have the abortion. Unfortunately, there are perhaps hundreds of rival normative principles from which to choose, many of which yield opposite conclusions. Thus, the stalemate in normative ethics between conflicting theories prevents us from using a single decisive procedure for determining the morality of a specific issue. The usual solution today to this stalemate is to consult several representative normative principles on a given issue and see where the weight of the evidence lies.

[30]Back to Table of Contents

Normative Principles in Applied Ethics

Arriving at a short list of representative normative principles is itself a challenging task. The principles selected must not be too narrowly focused, such as a version of act-egoism that might focus only on an action's short-term benefit.
The principles must also be seen as having merit by people on both sides of an applied ethical issue. For this reason, principles that appeal to duty to God are not usually cited since this would have no impact on a nonbeliever engaged in the debate. The following principles are the ones most commonly appealed to in applied ethical discussions:

* Personal benefit: acknowledge the extent to which an action produces beneficial consequences for the individual in question.
* Social benefit: acknowledge the extent to which an action produces beneficial consequences for society.
* Principle of benevolence: help those in need.
* Principle of paternalism: assist others in pursuing their best interests when they cannot do so themselves.
* Principle of harm: do not harm others.
* Principle of honesty: do not deceive others.
* Principle of lawfulness: do not violate the law.
* Principle of autonomy: acknowledge a person's freedom over his/her actions or physical body.
* Principle of justice: acknowledge a person's right to due process, fair compensation for harm done, and fair distribution of benefits.
* Rights: acknowledge a person's rights to life, information, privacy, free expression, and safety.

The above principles represent a spectrum of traditional normative principles and are derived from both consequentialist and duty-based approaches. The first two principles, personal benefit and social benefit, are consequentialist since they appeal to the consequences of an action as it affects the individual or society. The remaining principles are duty-based. The principles of benevolence, paternalism, harm, honesty, and lawfulness are based on duties we have toward others. The principles of autonomy, justice, and the various rights are based on moral rights.

An example will help illustrate the function of these principles in an applied ethical discussion. In 1982 a couple from Bloomington, Indiana, gave birth to a severely retarded baby.
The infant, known as Baby Doe, also had its stomach disconnected from its throat and was thus unable to receive nourishment. Although this stomach deformity was correctable through surgery, the couple did not want to raise a severely retarded child and therefore chose to deny surgery, food, and water for the infant. Local courts supported the parents' decision, and six days later Baby Doe died. Should corrective surgery have been performed for Baby Doe? Arguments in favor of corrective surgery derive from the infant's right to life and the principle of paternalism, which stipulates that we should pursue the best interests of others when they are incapable of doing so themselves. Arguments against corrective surgery derive from the personal and social disbenefit which would result from such surgery. If Baby Doe survived, its quality of life would have been poor, and in any case it probably would have died at an early age. Also, from the parents' perspective, Baby Doe's survival would have been a significant emotional and financial burden.

When examining both sides of the issue, the parents and the courts concluded that the arguments against surgery were stronger than the arguments for surgery. First, foregoing surgery appeared to be in the best interests of the infant, given the poor quality of life it would endure. Second, the status of Baby Doe's right to life was not clear given the severity of the infant's mental impairment. For, to possess moral rights, it takes more than merely having a human body: certain cognitive functions must also be present. The issue here involves what is often referred to as moral personhood, and is central to many applied ethical discussions.

[31]Back to Table of Contents

Issues in Applied Ethics

As noted, there are many controversial issues discussed by ethicists today, some of which will be briefly mentioned here. Biomedical ethics focuses on a range of issues which arise in clinical settings.
Health care workers are in an unusual position of continually dealing with life-and-death situations. It is not surprising, then, that medical ethics issues are more extreme and diverse than those of other areas of applied ethics. Prenatal issues arise about the morality of surrogate mothering, genetic manipulation of fetuses, the status of unused frozen embryos, and abortion. Other issues arise about patients' rights and physicians' responsibilities, such as the confidentiality of the patient's records and the physician's responsibility to tell the truth to dying patients. The AIDS crisis has raised the specific issues of the mandatory screening of all patients for AIDS, and whether physicians can refuse to treat AIDS patients. Additional issues concern medical experimentation on humans, the morality of involuntary commitment, and the rights of the mentally retarded. Finally, end-of-life issues arise about the morality of suicide, the justifiability of suicide intervention, physician-assisted suicide, and euthanasia.

The field of business ethics examines moral controversies relating to the social responsibilities of capitalist business practices, the moral status of corporate entities, deceptive advertising, insider trading, basic employee rights, job discrimination, affirmative action, drug testing, and whistle blowing. Issues in environmental ethics often overlap with business and medical issues. These include the rights of animals, the morality of animal experimentation, preserving endangered species, pollution control, management of environmental resources, whether ecosystems are entitled to direct moral consideration, and our obligation to future generations. Controversial issues of sexual morality include monogamy vs. polygamy, sexual relations without love, homosexual relations, and extramarital affairs. Finally, there are issues of social morality which examine capital punishment, nuclear war, gun control, the recreational use of drugs, welfare rights, and racism.
[32]Back to Table of Contents

References and Further Reading

Anscombe, Elizabeth, "Modern Moral Philosophy," Philosophy, Vol. 33 (1958); reprinted in her Ethics, Religion and Politics (Oxford: Blackwell, 1981).
Aristotle, Nicomachean Ethics, in Barnes, Jonathan, ed., The Complete Works of Aristotle (Princeton, N.J.: Princeton University Press, 1984).
Ayer, A. J., Language, Truth and Logic (New York: Dover Publications, 1946).
Bentham, Jeremy, Introduction to the Principles of Morals and Legislation (1789), in The Works of Jeremy Bentham, edited by John Bowring (London: 1838-1843).
Hare, R.M., Moral Thinking (Oxford: Clarendon Press, 1981).
Hare, R.M., The Language of Morals (Oxford: Oxford University Press, 1952).
Hobbes, Thomas, Leviathan, ed. E. Curley (Chicago, IL: Hackett Publishing Company, 1994).
Hume, David, A Treatise of Human Nature (1739-1740), eds. David Fate Norton, Mary J. Norton (Oxford; New York: Oxford University Press, 2000).
Kant, Immanuel, Grounding for the Metaphysics of Morals, tr. James W. Ellington (Indianapolis: Hackett Publishing Company, 1985).
Locke, John, Two Treatises, ed. Peter Laslett (Cambridge: Cambridge University Press, 1963).
MacIntyre, Alasdair, After Virtue, second edition (Notre Dame: Notre Dame University Press, 1984).
Mackie, John L., Ethics: Inventing Right and Wrong (New York: Penguin Books, 1977).
Mill, John Stuart, Utilitarianism, in Collected Works of John Stuart Mill, ed. J.M. Robson (London: Routledge and Toronto, Ont.: University of Toronto Press, 1991).
Moore, G.E., Principia Ethica (Cambridge: Cambridge University Press, 1903).
Noddings, Nel, "Ethics from the Standpoint of Women," in Deborah L. Rhode, ed., Theoretical Perspectives on Sexual Difference (New Haven, CT: Yale University Press, 1990).
Ockham, William of, Fourth Book of the Sentences, tr. Lucan Freppert, The Basis of Morality According to William Ockham (Chicago: Franciscan Herald Press, 1988).
Plato, Republic, 6:510-511, in Cooper, John M., ed., Plato: Complete Works (Indianapolis: Hackett Publishing Company, 1997).
Pufendorf, Samuel, De Jure Naturae et Gentium (1672), tr. Of the Law of Nature and Nations.
Pufendorf, Samuel, De officio hominis et civis juxta legem naturalem (1673), tr. The Whole Duty of Man according to the Law of Nature (London, 1691).
Sextus Empiricus, Outlines of Pyrrhonism, trs. J. Annas and J. Barnes, Outlines of Scepticism (Cambridge: Cambridge University Press, 1994).
Stevenson, Charles L., The Ethics of Language (New Haven: Yale University Press, 1944).
Sumner, William Graham, Folkways (Boston: Ginn, 1906).

[33]Back to Table of Contents
_________________________________________________________________

Author Information: James Fieser Email: [34]jfieser at utm.edu HomePage: [35]http://www.utm.edu/~jfieser/

References

1. http://www.iep.utm.edu/
2. http://www.utm.edu/research/iep/e/ethics.htm#Metaethics
3. http://www.utm.edu/research/iep/e/ethics.htm#Metaphysical Issues: Objectivism and Relativism
4. http://www.utm.edu/research/iep/e/ethics.htm#Psychological Issues in Metaethics
5. http://www.utm.edu/research/iep/e/ethics.htm#Egoism and Altruism
6. http://www.utm.edu/research/iep/e/ethics.htm#Emotion and Reason
7. http://www.utm.edu/research/iep/e/ethics.htm#Male and Female Morality
8. http://www.utm.edu/research/iep/e/ethics.htm#Normative Ethics
9. http://www.utm.edu/research/iep/e/ethics.htm#Virtue Theories
10. http://www.utm.edu/research/iep/e/ethics.htm#Duty Theories
11. http://www.utm.edu/research/iep/e/ethics.htm#Consequentialist Theories
12. http://www.utm.edu/research/iep/e/ethics.htm#Types of Utilitarianism
13. http://www.utm.edu/research/iep/e/ethics.htm#Ethical Egoism and Social Contract Theory
14. http://www.utm.edu/research/iep/e/ethics.htm#Applied Ethics
15. http://www.utm.edu/research/iep/e/ethics.htm#Normative Principles in Applied Ethics
16. http://www.utm.edu/research/iep/e/ethics.htm#Issues in Applied Ethics
17.
http://www.utm.edu/research/iep/e/ethics.htm#References and Further Reading 18. http://www.utm.edu/research/iep/e/ethics.htm#top 19. http://www.utm.edu/research/iep/e/ethics.htm#top 20. http://www.utm.edu/research/iep/e/ethics.htm#top 21. http://www.utm.edu/research/iep/e/ethics.htm#top 22. http://www.utm.edu/research/iep/e/ethics.htm#top 23. http://www.utm.edu/research/iep/e/ethics.htm#top 24. http://www.utm.edu/research/iep/e/ethics.htm#top 25. http://www.utm.edu/research/iep/e/ethics.htm#top 26. http://www.utm.edu/research/iep/e/ethics.htm#top 27. http://www.utm.edu/research/iep/e/ethics.htm#top 28. http://www.utm.edu/research/iep/e/ethics.htm#top 29. http://www.utm.edu/research/iep/e/ethics.htm#top 30. http://www.utm.edu/research/iep/e/ethics.htm#top 31. http://www.utm.edu/research/iep/e/ethics.htm#top 32. http://www.utm.edu/research/iep/e/ethics.htm#top 33. http://www.utm.edu/research/iep/e/ethics.htm#top 34. mailto:jfieser at utm.edu?subject=Loved%20Your%20Ethics%20Article! 35. http://www.utm.edu/~jfieser/ From checker at panix.com Mon May 2 16:23:52 2005 From: checker at panix.com (Premise Checker) Date: Mon, 2 May 2005 12:23:52 -0400 (EDT) Subject: [Paleopsych] Sign and Sight: The human flaw Message-ID: The human flaw http://print.signandsight.com/features/74.html 5.3.23 A festival in Berlin's Haus der Kulturen der Welt examines beauty with exhibitions, discussions and dance. By Arnd Wesemann Can't we simply find something beautiful for a change? Does everything have to be immediately relegated to the level of the ridiculous and the kitsch? Why do we desire a thing of beauty and yet regard it with suspicion? What methods of seduction are in play when the beautiful woman in the advertisement appears more beautiful than the beautiful woman next to you? And can one regard heroic masculine poses as an expression of biological superiority without making fascist idols of them? Before you know it the beauty has faded. (Photo: Wang Gongxin & Lin Tianmiao: "Here? 
Or There?" (detail). 2002. Video installation. Photographer: The artists) Beauty is booming in German universities. After a decade of intensive gender research and practice in equality - sexual, religious and racial - a roll-back is under way. Beauty lost its power because it defected to the side of advertising, computer animation and plastic surgery. And because beauty contradicts the principle of egalitarianism. "Beauty entices", says Winfried Menninghaus, [1]professor of comparative literature at Berlin's Free University, who is currently touring and talking on the subject. According to Menninghaus, Darwinian theory, which like biologism is undergoing a renaissance, states that beauty solely serves biological selection. This is why so many cultures have undermined the power of beauty. Islam covers up its women to prevent inequality from determining the choice of partner. And uniforms are there to lower the pressure of competition. The level of competition in the globalised world has spawned the new adoration of the beautiful and strong. In fact, Menninghaus tells us, clothing and fashion signalled the end of Darwinian selection. Nakedness necessitated clothing and thus culture. Since then the naked body has been taboo. As a way of concealing the painful memories of the now surmounted natural state, nakedness has always simultaneously stood for obscenity and the ideal of beauty. Art history was the first to idealise the body; later the health and fitness industry and all the other preening and pruning practices built up around nudity adopted the strict dictates of the beauty ideal. 65 percent of US Americans are overweight. The conclusion: the body is bad, it belongs to the forces of evil. The idea of beauty is therefore also bound up with the rediscovery of shame. The real body stands ashamed before the propagated ideal. Everybody knows the body can never be as flawless as it has to be: pure and sinless, healthy and efficient.
And yet one searches for it, at least in art. And then one denounces art for this reason. (Photo: Susanne Linke: "Im Bade wannen". Photographer: Klaus Rabien) The whole of Paris was outraged when [2]choreographer Jan Fabre put a naked, oil-covered dancer on stage and called the piece "Beauty warrior". The oil, the impure element, ran counter to beauty. At the JFK airport in New York two dozen mouth-wateringly gorgeous black models recently [3]posed naked in shackles. America wanted to protest, but by alluding to the legacy of slavery and inflamed desire for beautiful others, Vanessa Beecroft silenced her critics. Now the celebrated French [4]sinologist, Francois Jullien, currently touring with his new book "Le nu impossible", has suggested looking at beauty through the eyes of other cultures. Berlin's Haus der Kulturen der Welt (House of World Cultures) has taken up the challenge and on March 18 opened its festival [5]"About Beauty", comprising exhibition, dance programme and a series of podium discussions. Jullien views beauty from the Chinese perspective. In his book, he maintains that to the Chinese eye a person cannot be beautiful as such. According to ancient Tao wisdom, it is in movement that a person attains beauty, in Tai-Chi for example. The Chinese syllable "mei" (literally: fat sheep) means beauty. It is used to describe good food, a sense of well-being, a pleasant bodily feeling. And, ironically enough, also the United States (literally: beautiful land). So it is possible to have beauty without burdening it with ideals of physical self-improvement and abstinence. Why not just enjoy life? But Europeans abide by Jacques Lacan, who stated that pleasure is also a dictate. (Photo: Zhuang Hui: "Chashan County - June 25". Sculpture.) The Berlin choreographers Jutta Hell and Dieter Baumann rehearsed a [6]dance piece in Shanghai titled "Eidos_Tao" with Chinese dancers.
Tao, which is generally translated as "the Way", means movement in China, the flowing, unstoppable movement of dance as opposed to our classical ideal of fixed "eidos". Precisely here, says Jullien, lies the difference. Chinese see beauty in flux, while we try to force it to stand still. Good food and letting the daughters dance are still the measure of beauty in remote areas of southern China. Traditional generosity is beautiful too. (Photo: "EIDOS_TAO". Performance. Photographer: Dirk Bleicker) One might suspect that Europe simply does not want to find the beautiful beautiful. Bertolt Brecht coined the phrase: "Beauty comes from overcoming difficulties". The peak is only beautiful when it has been scaled. Pleasure is beautiful when it has to be paid for in sweat. Perhaps this is why beauty hardly qualifies as an aesthetic category any more. Schiller's sentence "Beauty is freedom in the appearance" has only been dug up again for his bicentennial. He spoke of dignity as a category of beauty. The dignity of the healthy, of the beautiful body? What Schiller really meant - and what the Chinese believe today - has largely been forgotten: superior intellect, wise politics, expert craftsmanship, human prowess. For the Chinese, only what is true and good is also beautiful, says Jullien. [7]Essayist Dave Hickey goes a step further. In his book "The Invisible Dragon", he describes how this "classical" stance is about to be driven out of the Chinese. (Photo: "Shanghai Beauty". Performance. Photographer: Dirk Bleicker) They too are subject to the influence of academies, museums and universities. As in Europe, these institutions search for beauty in constructs and systems. But the Chinese no more believe in concepts than they do in making sacrifices to achieve an end. Their traditional view of beauty is a celebration of change, eternal circulation and transformation.
And according to Hickey, this is precisely the opposite of everything rigid and statutory embodied by institutions. But this culture of the transformative is in retreat, and it is disappearing faster than people are aware of. As Chinese [8]choreographer Jin Xing puts it: "Chinese bodies look weak in comparison with beautiful African bodies. And the Chinese don't have the overriding sense of envy and justice that makes bodies hard and people rich in the West. But the concept of spending money in a fitness studio is still utterly alien in China. The Chinese work hard because true beauty for us is wealth."

"Über Schönheit - About Beauty". 18.3.05 - 15.5.05. [9]Haus der Kulturen der Welt, Berlin

* Arnd Wesemann is editor of [10]Ballet-Tanz magazine. The article was originally published in German in the [11]Süddeutsche Zeitung on 17 March, 2005. Translation: [12]lp.

sign and sight funded by Bundeskulturstiftung

References
1. http://www.complit.fu-berlin.de/institut/lehrpersonal/menninghaus.html
2. http://csw.art.pl/new/99/fabre_e.html
3. http://newsgrist.typepad.com/underbelly/2004/10/terminal_5_exhi.html
4. http://www.upsy.net/spip/article.php3?id_article=30
5. http://www.hkw.de/en/programm/programm2005/AboutBeauty-Ausstellungsprogramm/c_index.html
6. http://www.hkw.de/en/programm/tagesprogramm/Eidos_Tao/c_index.html
7. http://www.archibot.com/stories/st_davehickey.html
8. http://www.hkw.de/en/programm/tagesprogramm/shanghaibeauty/c_index.html
9. http://www.hkw.de/index_en.html
10. http://www.ballet-tanz.de/
11. http://www.sueddeutsche.de/
12.
http://www.signandsight.com/service/37.html From checker at panix.com Mon May 2 16:23:14 2005 From: checker at panix.com (Premise Checker) Date: Mon, 2 May 2005 12:23:14 -0400 (EDT) Subject: [Paleopsych] Monterey Herald: Genetic mingling mixes human, animal cells Message-ID: Genetic mingling mixes human, animal cells http://www.montereyherald.com/mld/montereyherald/business/11525171.htm On a farm about six miles outside this gambling town, Jason Chamberlain looks over a flock of about 50 smelly sheep, many of them possessing partially human livers, hearts, brains and other organs. The University of Nevada-Reno researcher talks matter-of-factly about his plans to euthanize one of the pregnant sheep in a nearby lab. He can't wait to examine the effects of the human cells he had injected into the fetus' brain about two months ago. "It's mice on a large scale," Chamberlain says with a shrug. As strange as his work may sound, it falls firmly within the new ethics guidelines the influential National Academies issued this past week for stem cell research. In fact, the Academies' report endorses research that co-mingles human and animal tissue as vital to ensuring that experimental drugs and new tissue replacement therapies are safe for people. Doctors have transplanted pig valves into human hearts for years, and scientists have injected human cells into lab animals for even longer. But the biological co-mingling of animal and human is now evolving into even more exotic and unsettling mixes of species, evoking the Greek myth of the monstrous chimera, which was part lion, part goat and part serpent. In the past two years, scientists have created pigs with human blood, fused rabbit eggs with human DNA and injected human stem cells to make paralyzed mice walk. Particularly worrisome to some scientists are the nightmare scenarios that could arise from the mixing of brain cells: What if a human mind somehow got trapped inside a sheep's head? 
The "idea that human neuronal cells might participate in 'higher order' brain functions in a nonhuman animal, however unlikely that may be, raises concerns that need to be considered," the academies report warned. In January, an informal ethics committee at Stanford University endorsed a proposal to create mice with brains nearly completely made of human brain cells. Stem cell scientist Irving Weissman said his experiment could provide unparalleled insight into how the human brain develops and how degenerative brain diseases like Parkinson's progress. Stanford law professor Hank Greely, who chaired the ethics committee, said the board was satisfied that the size and shape of the mouse brain would prevent the human cells from creating any traits of humanity. Just in case, Greely said, the committee recommended closely monitoring the mice's behavior and immediately killing any that display human-like behavior. The Academies' report recommends that each institution involved in stem cell research create a formal, standing committee to specifically oversee the work, including experiments that mix human and animal cells. Weissman, who has already created mice with 1 percent human brain cells, said he has no immediate plans to make mostly human mouse brains, but wanted to get ethical clearance in any case. A formal Stanford committee that oversees research at the university would also need to authorize the experiment. Few human-animal hybrids are as advanced as the sheep created by another stem cell scientist, Esmail Zanjani, and his team at the University of Nevada-Reno. They want to one day turn sheep into living factories for human organs and tissues and along the way create cutting-edge lab animals to more effectively test experimental drugs. Zanjani is most optimistic about the sheep that grow partially human livers after human stem cells are injected into them while they are still in the womb. 
Most of the adult sheep in his experiment contain about 10 percent human liver cells, though a few have as much as 40 percent, Zanjani said. Because the human liver regenerates, the research raises the possibility of transplanting partial organs into people whose livers are failing. Zanjani must first ensure no animal diseases would be passed on to patients. He also must find an efficient way to completely separate the human and sheep cells, a tough task because the human cells aren't clumped together but are rather spread throughout the sheep's liver. Zanjani and other stem cell scientists defend their research and insist they aren't creating monsters - or anything remotely human. "We haven't seen them act as anything but sheep," Zanjani said. Zanjani's goals are many years from being realized. He's also had trouble raising funds, and the U.S. Department of Agriculture is investigating the university over allegations made by another researcher that the school mishandled its research sheep. Zanjani declined to comment on that matter, and university officials have stood by their practices. Allegations about the proper treatment of lab animals may take on strange new meanings as scientists work their way up the evolutionary chart. First, human stem cells were injected into bacteria, then mice and now sheep. Such research blurs biological divisions between species that couldn't until now be breached. Drawing ethical boundaries that no research appears to have crossed yet, the Academies recommend a prohibition on mixing human stem cells with embryos from monkeys and other primates. But even that policy recommendation isn't tough enough for some researchers. "The boundary is going to push further into larger animals," New York Medical College professor Stuart Newman said. "That's just asking for trouble." 
Newman and anti-biotechnology activist Jeremy Rifkin have been tracking this issue for the last decade and were behind a rather creative assault on both interspecies mixing and the government's policy of patenting individual human genes and other living matter. Years ago, the two applied for a patent for what they called a "humanzee," a hypothetical - but very possible - creation that was half human and half chimp. The U.S. Patent and Trademark Office finally denied their application this year, ruling that the proposed invention was too human: Constitutional prohibitions against slavery prevent the patenting of people. Newman and Rifkin were delighted, since they never intended to create the creature and instead wanted to use their application to protest what they see as science and commerce turning people into commodities. And that's a point, Newman warns, that stem cell scientists are edging closer to every day: "Once you are on the slope, you tend to move down it."

From christian.rauh at uconn.edu Mon May 2 22:49:41 2005 From: christian.rauh at uconn.edu (Christian Rauh) Date: Mon, 02 May 2005 18:49:41 -0400 Subject: [Paleopsych] What would *you* take to a desert island? Message-ID: <4276AE85.70105@uconn.edu>

-------------- next part -------------- A non-text attachment was scrubbed... Name: chart.gif Type: image/gif Size: 10164 bytes Desc: not available URL:

From unstasis at gmail.com Tue May 3 00:30:42 2005 From: unstasis at gmail.com (Stephen Lee) Date: Mon, 2 May 2005 20:30:42 -0400 Subject: [Paleopsych] test Message-ID: <951ad0705050217301564af3@mail.gmail.com>

Hey, just heard from Greg Bear that nothing seemed to be going on in the group, so I figured I may as well put a message in to see if it does. Might just be that no one has posted since April 29th? Does this seem accurate? And as a fun little filler question.
What would you see as the main differences and difficulties in bringing paleopsychology into being as a fully applicable science, as contrasted with Asimov's concept of psychohistory?

Or ignore this, as most tests should be ignored.

-- -- If Nothing Is Then Nothing Was But something is everwhere Just because -- http://www.freewebs.com/rewander http://hopeisus.fateback.com/story.html

From anonymous_animus at yahoo.com Tue May 3 18:07:52 2005 From: anonymous_animus at yahoo.com (Michael Christopher) Date: Tue, 3 May 2005 11:07:52 -0700 (PDT) Subject: [Paleopsych] useless people, reality shows etc In-Reply-To: <200505031711.j43HBrR07037@tick.javien.com> Message-ID: <20050503180752.97622.qmail@web30802.mail.mud.yahoo.com>

Gerry says: >>I wouldn't call these jobs examples of Japan losing its humanity but rather an indication that government is providing work for those who are unemployed but wish (and need) something to do.<<

--Oh, I agree. People tend to do better when they have something to do to contribute to society. I was referring to the label "useless people", which was probably meant tongue in cheek, but which I've seen used more and more in a serious tone. There are a lot of people who see human beings in terms of their economic worth, and if someone doesn't adapt to the economic system, they're labeled "parasites" or something similar. That reminds me of the fascist view of human beings: that they are cogs in a machine, worth only their material output.

>>The leader who has a knack for bullying the rest of the group is usually the one who makes it to the end and finishes a winner. Cunning and dishonesty are values promoted by these survivor programs and the one who wins is he/she who is most deceptive. These values are ones NOT taught to children by caring parents.<<

--Very true. I'm wondering how many people in our culture view "the game" as one of cutting other people's throats, and what effect that has on the health of the overall system.
Ideally, our economic system would reward talent and hard work. But what happens when the rewards go to those who are better at manipulating others? Many young people seem to have incorporated those values in the sexual arena, with girls rewarding the most manipulative boys with sex, and boys rewarding girls who use their sexuality to get ahead of other girls. Where did they learn it? Do we blame 60's-style "free love" or the more competitive 80's yuppie ethic? >>Howard's notion of Capitalism with Soul is a humanistic thrust into an otherwise corrupt world.<< --Another way of looking at it is that it's a more ecological view of capitalism, putting it in context rather than divorcing it from other values. The goal of capitalism is not necessarily to make as much money as you can by manipulating others and feeding an emptiness in people maintained by endless striving to get ahead of others. It is to find the hidden yearnings in an audience, the unarticulated dreams, and make them real. If you watch TV ads, you'll see a lot of spiritual or deep emotional themes, attached to products which you wouldn't normally think of as "spiritual". A car is not who you are, but marketers learned in recent decades to market cars as if they were extensions of the self, especially the sexual self. Imagine if all the psychological knowledge and creative genius going into marketing products went into marketing capitalism with soul, marketing curiosity about science and understanding of ecology (natural and human). Imagine if video games taught math and physics, without losing their entertainment value. Why are we relying on an educational system based on textbooks and lectures, when the real money and talent is going into entertainment and advertising? >>Yet not all souls are alike. Some are more generous and congenial than others.<< --I think whatever the game rewards, you get more of. Reward kids for being curious about science and you'll get more kids interested in science. 
Reward kids for being aggressive or manipulative, and you'll get more of that. Our current system is inconsistent in its rewards, so we get inconsistent results.

>>People who promote less science and more religion are those who are fed up not with Darwinism, but with young bullies who believe in a cut-throat bottom-line rather than producing a caring and thoughtful human being.<<

--I think religious people have a variety of motives. Some just resent that producers market movies and music to their kids that they feel teach bad values (I can actually relate to that... while I'm not offended by profanity or sex in films, it gets annoying when it's used as a habitual selling point). Others are very Darwinian in the economic sense, but socialistic in the sexual arena: your money is your own, but your sexuality belongs to the community or to God. Some are more interested in tax cuts, with religion being used to justify it. If the GOP raised taxes, it wouldn't matter how often they mention God; a certain percentage of voters would abandon the party. If Democrats lowered taxes but supported abortion rights, I have no idea how lines would split. Many just felt marginalized in college, reacting against arrogance in liberal professors or feeling rejected by liberal kids. That is perhaps one reason why many evangelicals see liberalism in terms of 60's stereotypes rather than the state of current liberal thinking. That kind of thing goes in cycles. When conservatism is dominant, kids who don't fit in feel the same marginalization and rejection that evangelical kids felt in the 70's. And many evangelicals associate promiscuity and drug use with liberalism or secularism, which is a bit like associating Enron with conservatism. Just because a kid is promiscuous does NOT mean s/he was raised by environmentalists or antiwar activists. Conservatism does not guarantee good parenting, but stereotypes die hard. The young bullies are not all atheists.
Many are taught violence in the home, often in the name of religion, and then take it out on other kids. The assumption is that kids are "running wild" because of "permissive parenting", but often if you look into it, the kids have been severely punished, then neglected when corporal punishment backfired. But there's always a national myth, which takes precedence over reality. The "save marriage" movement includes a lot of conservative evangelicals who have a higher divorce rate than atheists. The myth says, "We are protecting marriage", but the reality says, "We can't keep our own marriages together, so let's go after gay marriage instead." It takes time for these things to sort themselves out and for the myth to fall back in line with reality.

Michael

__________________________________________________ Do You Yahoo!? Tired of spam? Yahoo! Mail has the best spam protection around http://mail.yahoo.com

From waluk at earthlink.net Tue May 3 18:08:38 2005 From: waluk at earthlink.net (G. Reinhart-Waller) Date: Tue, 03 May 2005 11:08:38 -0700 Subject: [Paleopsych] test In-Reply-To: <951ad0705050217301564af3@mail.gmail.com> References: <951ad0705050217301564af3@mail.gmail.com> Message-ID: <4277BE26.6060804@earthlink.net>

I just checked the Paleopsych archive, and the messages for May are listed, including Christian Rauh's note about taking a computer with internet access to a desert island.

Stephen Lee wrote:
>Hey, just heard from Greg Bear that nothing seemed to be going on in the
>group, so I figured I may as well put a message in to see if it does.
>
>Might just be that no one has posted since April 29th? Does this seem accurate?
>
>And as a fun little filler question. What would you see as the main
>differences and difficulties in bringing paleopsychology into being as a
>fully applicable science, as contrasted with Asimov's concept of
>psychohistory?
>
>Or ignore this, as most tests should be ignored.
From anonymous_animus at yahoo.com Tue May 3 18:12:59 2005 From: anonymous_animus at yahoo.com (Michael Christopher) Date: Tue, 3 May 2005 11:12:59 -0700 (PDT) Subject: [Paleopsych] IQ and race In-Reply-To: <200505031711.j43HBrR07037@tick.javien.com> Message-ID: <20050503181259.98945.qmail@web30802.mail.mud.yahoo.com>

Greg says: >>Not only does this scam claim to prove your nagging suspicions that blacks are inferior to whites, but it's COMPLETELY GUILT-FREE, because it's RATIONAL, based on PROVABLE MATHEMATICS! And better than that, it's supported by the nagging suspicions of PEOPLE JUST LIKE YOU! People who grew up in a different time.<<

--There was an interesting article in the Atlantic magazine called "Thin Ice", about what they called Stereotype Threat. They did some experiments, one in which they told a group of white students, "Asians are expected to do better on these tests". The scores of the white students dropped. Apparently, perceptions and expectations have a significant effect on test scores, and when Bush talks about "the soft bigotry of low expectations" he may be right. Groups that feel especially pressured to counteract stereotypes about their performance do poorly on tests, even if they do very well in less pressured settings. A website on the subject: http://www.personal.psu.edu/users/t/r/trc139/

Michael

__________________________________________________ Do You Yahoo!? Tired of spam? Yahoo! Mail has the best spam protection around http://mail.yahoo.com

From anonymous_animus at yahoo.com Tue May 3 19:05:56 2005 From: anonymous_animus at yahoo.com (Michael Christopher) Date: Tue, 3 May 2005 12:05:56 -0700 (PDT) Subject: [Paleopsych] child's play In-Reply-To: <200505031800.j43I0CR24822@tick.javien.com> Message-ID: <20050503190556.13160.qmail@web30802.mail.mud.yahoo.com>

>>Children must have independent, competitive rough-and-tumble play. Not only do they enjoy it, it is part of their normal development.
Anthony Pellegrini, a professor of early childhood education at the University of Minnesota, defines rough-and-tumble play as behavior that includes "laughing, running, smiling, jumping ... wrestling, play fighting, chasing, and fleeing." Such play, he says, brings children together, it makes them happy and it promotes healthy socialization.<< --That's an odd list... smiling is lumped in with wrestling? And with no distinction among levels of roughness in play fighting? I doubt there's any "politically correct" movement to ban smiling on the playground. Michael __________________________________________________ Do You Yahoo!? Tired of spam? Yahoo! Mail has the best spam protection around http://mail.yahoo.com From waluk at earthlink.net Tue May 3 20:04:52 2005 From: waluk at earthlink.net (G. Reinhart-Waller) Date: Tue, 03 May 2005 13:04:52 -0700 Subject: [Paleopsych] useless people, reality shows etc In-Reply-To: <20050503180752.97622.qmail@web30802.mail.mud.yahoo.com> References: <20050503180752.97622.qmail@web30802.mail.mud.yahoo.com> Message-ID: <4277D964.3070807@earthlink.net> Michael Christopher wrote: >>--Oh, I agree. People tend to do better when they have something to do to contribute to society. I was referring to the label "useless people" which was probably meant tongue in cheek, but which I've seen used more and more in a serious tone. There are a lot of people who see human beings in terms of their economic worth, and if someone doesn't adapt to the economic system, they're labeled "parasites" or something similar. Which reminds me of the fascist view of human beings, that they are cogs in a machine, worth only their material output.>> The label "useless people" depends on who is offering a definition. To a confirmed Marxist, this expression references anyone who isn't gainfully employed in a well defined occupation or profession. 
So often when I'm asked what I do and reply "I'm an Independent Scholar", I usually get the response, "No, I mean, what do you do for employment?" I think economists in general view those not engaged in a clearly defined work ethic as being outside the mainstream and not going with the flow. That's a shame in our rampant capitalistic world.

>>--Very true. I'm wondering how many people in our culture view "the game" as one of cutting other people's throats, and what effect that has on the health of the overall system. Ideally, our economic system would reward talent and hard work. But what happens when the rewards go to those who are better at manipulating others? Many young people seem to have incorporated those values in the sexual arena, with girls rewarding the most manipulative boys with sex, and boys rewarding girls who use their sexuality to get ahead of other girls. Where did they learn it? Do we blame 60's-style "free love" or the more competitive 80's yuppie ethic? >>

You were the person who mentioned the "survivor shows" which are presently the rage on American television. These are perfect examples of someone being voted out of the group if he/she doesn't offer a distinct advantage to the voter. This has nothing whatsoever to do with leadership but rather with cunning and deception to establish a "king of the hill". I have always placed "blame" (if we can use that term) on a strict Darwinian interpretation of the "survival of the fittest" mantra, which scholars have come to modify as "survival of the wealthiest". Many of the major academic institutions applaud economists, business school graduates and those who find themselves enamoured with Donald Trump's boardroom, and offer them plum rewards for their deceptions. Rather than placing blame, we should instead focus on our ideals and goals that contribute to a group rather than only to personal satisfaction. Incidentally, this seems to be the thrust of the t.v.
show "Extreme Makeover: Home Edition", where a group of volunteers designs, builds, and decorates a home for worthy clients. It's not much, but it's a beginning. You may also be interested in http://www.habitat.org.

>>--Another way of looking at it is that it's a more ecological view of capitalism, putting it in context rather than divorcing it from other values. The goal of capitalism is not necessarily to make as much money as you can by manipulating others and feeding an emptiness in people maintained by endless striving to get ahead of others. It is to find the hidden yearnings in an audience, the unarticulated dreams, and make them real. If you watch TV ads, you'll see a lot of spiritual or deep emotional themes, attached to products which you wouldn't normally think of as "spiritual". A car is not who you are, but marketers learned in recent decades to market cars as if they were extensions of the self, especially the sexual self. Imagine if all the psychological knowledge and creative genius going into marketing products went into marketing capitalism with soul, marketing curiosity about science and understanding of ecology (natural and human). Imagine if video games taught math and physics, without losing their entertainment value. Why are we relying on an educational system based on textbooks and lectures, when the real money and talent is going into entertainment and advertising? >>

A strong ecological view certainly addresses the BIG picture. Most advertising appeals to sexual issues that create the macho man and the working housewife. Who would select a hybrid auto when one can drive a sexy Porsche! Cars, clothing fashions, home decorations, even food preparation zero in on the latest and sexiest trends, which designers have adapted so that the public will have fun while eating, sleeping, or working. According to Bill Gates, our education system is outdated when compared to those of Japan and other up-and-coming Asian countries, and he is absolutely correct.
American capitalists chase the image of becoming a billionaire, and the quickest and easiest way to attain this goal is in the sports and entertainment industries. Yes, I can imagine... and possibly contribute my two cents, but that won't make a dent in marketing soul along with capitalism. Maybe I should think in terms of moving to Canada :-) .

Best regards, Gerry Reinhart-Waller

From eshel at physics.ucsd.edu Tue May 3 21:52:57 2005 From: eshel at physics.ucsd.edu (Eshel Ben-Jacob) Date: Tue, 3 May 2005 23:52:57 +0200 Subject: [Paleopsych] test References: <951ad0705050217301564af3@mail.gmail.com> Message-ID: <002101c5502a$78059f10$911bef84@IBMF68D4578947>

I got the test, Eshel

Eshel Ben-Jacob, Professor of Physics The Maguy-Glass Professor in Physics of Complex Systems eshel at tamar.tau.ac.il ebenjacob at ucsd.edu Home Page: http://star.tau.ac.il/~eshel/ Visit http://physicaplus.org.il - PhysicaPlus, the online magazine of the Israel Physical Society School of Physics and Astronomy 10/2004 -10/2005 Tel Aviv University, 69978 Tel Aviv, Israel Center for Theoretical Biological Physics Tel 972-3-640 7845/7604 (Fax) -6425787 University of California San Diego La Jolla, CA 92093-0354 USA Tel (office) 1-858-534 0524 (Fax) -534 7697

----- Original Message ----- From: "Stephen Lee" To: "The new improved paleopsych list" Sent: Tuesday, May 03, 2005 2:30 AM Subject: [Paleopsych] test

Hey, just heard from Greg Bear that nothing seemed to be going on in the group, so I figured I may as well put a message in to see if it does. Might just be that no one has posted since April 29th? Does this seem accurate? And as a fun little filler question. What would you see as the main differences and difficulties in bringing paleopsychology into being as a fully applicable science, as contrasted with Asimov's concept of psychohistory? Or ignore this, as most tests should be ignored.
-- -- If Nothing Is Then Nothing Was But something is everwhere Just because -- http://www.freewebs.com/rewander http://hopeisus.fateback.com/story.html _______________________________________________ paleopsych mailing list paleopsych at paleopsych.org http://lists.paleopsych.org/mailman/listinfo/paleopsych +++++++++++++++++++++++++++++++++++++++++++ This Mail Was Scanned By Mail-seCure System at the Tel-Aviv University CC. From checker at panix.com Tue May 3 22:14:00 2005 From: checker at panix.com (Premise Checker) Date: Tue, 3 May 2005 18:14:00 -0400 (EDT) Subject: [Paleopsych] Scientific American: His Brain, Her Brain Message-ID: His Brain, Her Brain http://www.sciam.com/print_version.cfm?articleID=000363E3-1806-1264-980683414B7F0000 April 25, 2005 It turns out that male and female brains differ quite a bit in architecture and activity. Research into these variations could lead to sex-specific treatments for disorders such as depression and schizophrenia By Larry Cahill On a gray day in mid-January, Lawrence Summers, the president of Harvard University, suggested that innate differences in the build of the male and female brain might be one factor underlying the relative scarcity of women in science. His remarks reignited a debate that has been smoldering for a century, ever since some scientists sizing up the brains of both sexes began using their main finding--that female brains tend to be smaller--to bolster the view that women are intellectually inferior to men. To date, no one has uncovered any evidence that anatomical disparities might render women incapable of achieving academic distinction in math, physics or engineering. And the brains of men and women have been shown to be quite clearly similar in many ways. Nevertheless, over the past decade investigators have documented an astonishing array of structural, chemical and functional variations in the brains of males and females. 
These inequities are not just interesting idiosyncrasies that might explain why more men than women enjoy the Three Stooges. They raise the possibility that we might need to develop sex-specific treatments for a host of conditions, including depression, addiction, schizophrenia and post-traumatic stress disorder (PTSD). Furthermore, the differences imply that researchers exploring the structure and function of the brain must take into account the sex of their subjects when analyzing their data--and include both women and men in future studies or risk obtaining misleading results.

Sculpting the Brain

Not so long ago neuroscientists believed that sex differences in the brain were limited mainly to those regions responsible for mating behavior. In a 1966 Scientific American article entitled "Sex Differences in the Brain," Seymour Levine of Stanford University described how sex hormones help to direct divergent reproductive behaviors in rats--with males engaging in mounting and females arching their backs and raising their rumps to attract suitors. Levine mentioned only one brain region in his review: the hypothalamus, a small structure at the base of the brain that is involved in regulating hormone production and controlling basic behaviors such as eating, drinking and sex. A generation of neuroscientists came to maturity believing that "sex differences in the brain" referred primarily to mating behaviors, sex hormones and the hypothalamus. That view, however, has now been knocked aside by a surge of findings that highlight the influence of sex on many areas of cognition and behavior, including memory, emotion, vision, hearing, the processing of faces and the brain's response to stress hormones.
This progress has been accelerated in the past five to 10 years by the growing use of sophisticated noninvasive imaging techniques such as positron-emission tomography (PET) and functional magnetic resonance imaging (fMRI), which can peer into the brains of living subjects. These imaging experiments reveal that anatomical variations occur in an assortment of regions throughout the brain. Jill M. Goldstein of Harvard Medical School and her colleagues, for example, used MRI to measure the sizes of many cortical and subcortical areas. Among other things, these investigators found that parts of the frontal cortex, the seat of many higher cognitive functions, are bulkier in women than in men, as are parts of the limbic cortex, which is involved in emotional responses. In men, on the other hand, parts of the parietal cortex, which is involved in space perception, are bigger than in women, as is the amygdala, an almond-shaped structure that responds to emotionally arousing information--to anything that gets the heart pumping and the adrenaline flowing. These size differences, as well as others mentioned throughout the article, are relative: they refer to the overall volume of the structure relative to the overall volume of the brain. Differences in the size of brain structures are generally thought to reflect their relative importance to the animal. For example, primates rely more on vision than olfaction; for rats, the opposite is true. As a result, primate brains maintain proportionately larger regions devoted to vision, and rats devote more space to olfaction. So the existence of widespread anatomical disparities between men and women suggests that sex does influence the way the brain works. Other investigations are finding anatomical sex differences at the cellular level. 
For example, Sandra Witelson and her colleagues at McMaster University discovered that women possess a greater density of neurons in parts of the temporal lobe cortex associated with language processing and comprehension. On counting the neurons in postmortem samples, the researchers found that of the six layers present in the cortex, two show more neurons per unit volume in females than in males. Similar findings were subsequently reported for the frontal lobe. With such information in hand, neuroscientists can now explore whether sex differences in neuron number correlate with differences in cognitive abilities--examining, for example, whether the boost in density in the female auditory cortex relates to women's enhanced performance on tests of verbal fluency.

Such anatomical diversity may be caused in large part by the activity of the sex hormones that bathe the fetal brain. These steroids help to direct the organization and wiring of the brain during development and influence the structure and neuronal density of various regions. Interestingly, the brain areas that Goldstein found to differ between men and women are ones that in animals contain the highest number of sex hormone receptors during development. This correlation between brain region size in adults and sex steroid action in utero suggests that at least some sex differences in cognitive function do not result from cultural influences or the hormonal changes associated with puberty--they are there from birth.

Inborn Inclinations

Several intriguing behavioral studies add to the evidence that some sex differences in the brain arise before a baby draws its first breath. Through the years, many researchers have demonstrated that when selecting toys, young boys and girls part ways. Boys tend to gravitate toward balls or toy cars, whereas girls more typically reach for a doll. But no one could really say whether those preferences are dictated by culture or by innate brain biology.
To address this question, Melissa Hines of City University London and Gerianne M. Alexander of Texas A&M University turned to monkeys, one of our closest animal cousins. The researchers presented a group of vervet monkeys with a selection of toys, including rag dolls, trucks and some gender-neutral items such as picture books. They found that male monkeys spent more time playing with the "masculine" toys than their female counterparts did, and female monkeys spent more time interacting with the playthings typically preferred by girls. Both sexes spent equal time monkeying with the picture books and other gender-neutral toys. Because vervet monkeys are unlikely to be swayed by the social pressures of human culture, the results imply that toy preferences in children result at least in part from innate biological differences.

This divergence, and indeed all the anatomical sex differences in the brain, presumably arose as a result of selective pressures during evolution. In the case of the toy study, males--both human and primate--prefer toys that can be propelled through space and that promote rough-and-tumble play. These qualities, it seems reasonable to speculate, might relate to the behaviors useful for hunting and for securing a mate. One might likewise hypothesize that females select toys that allow them to hone the skills they will one day need to nurture their young.

Simon Baron-Cohen and his associates at the University of Cambridge took a different but equally creative approach to addressing the influence of nature versus nurture regarding sex differences. Many researchers have described disparities in how "people-centered" male and female infants are. For example, Baron-Cohen and his student Svetlana Lutchmaya found that one-year-old girls spend more time looking at their mothers than boys of the same age do.
And when these babies are presented with a choice of films to watch, the girls look longer at a film of a face, whereas boys lean toward a film featuring cars. Of course, these preferences might be attributable to differences in the way adults handle or play with boys and girls. To eliminate this possibility, Baron-Cohen and his students went a step further.

They took their video camera to a maternity ward to examine the preferences of babies that were only one day old. The infants saw either the friendly face of a live female student or a mobile that matched the color, size and shape of the student's face and included a scrambled mix of her facial features. To avoid any bias, the experimenters were unaware of each baby's sex during testing. When they watched the tapes, they found that the girls spent more time looking at the student, whereas the boys spent more time looking at the mechanical object. This difference in social interest was evident on day one of life--implying again that we come out of the womb with some cognitive sex differences built in.

Under Stress

In many cases, sex differences in the brain's chemistry and construction influence how males and females respond to the environment or react to, and remember, stressful events. Take, for example, the amygdala. Goldstein and others have reported that the amygdala is larger in men than in women. And in rats, the neurons in this region make more numerous interconnections in males than in females. These anatomical variations would be expected to produce differences in the way that males and females react to stress.

To assess whether male and female amygdalae in fact respond differently to stress, Katharina Braun and her co-workers at Otto von Guericke University in Magdeburg, Germany, briefly removed a litter of degu pups from their mother. For these social South American rodents, which live in large colonies like prairie dogs do, even temporary separation can be quite upsetting.
The researchers then measured the concentration of serotonin receptors in various brain regions. Serotonin is a neurotransmitter, or signal-carrying molecule, that is key for mediating emotional behavior. (Prozac, for example, acts by increasing serotonin function.) The workers allowed the pups to hear their mother's call during the period of separation and found that this auditory input increased the serotonin receptor concentration in the males' amygdala, yet decreased the concentration of these same receptors in females.

Although it is difficult to extrapolate from this study to human behavior, the results hint that if something similar occurs in children, separation anxiety might differentially affect the emotional well-being of male and female infants. Experiments such as these are necessary if we are to understand why, for instance, anxiety disorders are far more prevalent in girls than in boys.

Another brain region now known to diverge in the sexes anatomically and in its response to stress is the hippocampus, a structure crucial for memory storage and for spatial mapping of the physical environment. Imaging consistently demonstrates that the hippocampus is larger in women than in men. These anatomical differences might well relate somehow to differences in the way males and females navigate.

Many studies suggest that men are more likely to navigate by estimating distance in space and orientation ("dead reckoning"), whereas women are more likely to navigate by monitoring landmarks. Interestingly, a similar sex difference exists in rats. Male rats are more likely to navigate mazes using directional and positional information, whereas female rats are more likely to navigate the same mazes using available landmarks. (Investigators have yet to demonstrate, however, that male rats are less likely to ask for directions.)

Even the neurons in the hippocampus behave differently in males and females, at least in how they react to learning experiences.
For example, Janice M. Juraska and her associates at the University of Illinois have shown that placing rats in an "enriched environment"--cages filled with toys and with fellow rodents to promote social interactions--produced dissimilar effects on the structure of hippocampal neurons in male and female rats. In females, the experience enhanced the "bushiness" of the branches in the cells' dendritic trees--the many-armed structures that receive signals from other nerve cells. This change presumably reflects an increase in neuronal connections, which in turn is thought to be involved with the laying down of memories. In males, however, the complex environment either had no effect on the dendritic trees or pruned them slightly.

But male rats sometimes learn better in the face of stress. Tracey J. Shors of Rutgers University and her collaborators have found that a brief exposure to a series of one-second tail shocks enhanced performance of a learned task and increased the density of dendritic connections to other neurons in male rats yet impaired performance and decreased connection density in female rats. Findings such as these have interesting social implications. The more we discover about how brain mechanisms of learning differ between the sexes, the more we may need to consider how optimal learning environments potentially differ for boys and girls.

Although the hippocampus of the female rat can show a decrement in response to acute stress, it appears to be more resilient than its male counterpart in the face of chronic stress. Cheryl D. Conrad and her co-workers at Arizona State University restrained rats in a mesh cage for six hours--a situation that the rodents find disturbing. The researchers then assessed how vulnerable their hippocampal neurons were to killing by a neurotoxin--a standard measure of the effect of stress on these cells.
They noted that chronic restraint rendered the males' hippocampal cells more susceptible to the toxin but had no effect on the females' vulnerability. These findings, and others like them, suggest that in terms of brain damage, females may be better equipped to tolerate chronic stress than males are. Still unclear is what protects female hippocampal cells from the damaging effects of chronic stress, but sex hormones very likely play a role.

The Big Picture

Extending the work on how the brain handles and remembers stressful events, my colleagues and I have found contrasts in the way men and women lay down memories of emotionally arousing incidents--a process known from animal research to involve activation of the amygdala. In one of our first experiments with human subjects, we showed volunteers a series of graphically violent films while we measured their brain activity using PET. A few weeks later we gave them a quiz to see what they remembered. We discovered that the number of disturbing films they could recall correlated with how active their amygdala had been during the viewing. Subsequent work from our laboratory and others confirmed this general finding.

But then I noticed something strange. The amygdala activation in some studies involved only the right hemisphere, and in others it involved only the left hemisphere. It was then I realized that the experiments in which the right amygdala lit up involved only men; those in which the left amygdala was fired up involved women.

Since then, three subsequent studies--two from our group and one from John Gabrieli and Turhan Canli and their collaborators at Stanford--have confirmed this difference in how the brains of men and women handle emotional memories. The realization that male and female brains were processing the same emotionally arousing material into memory differently led us to wonder what this disparity might mean.
To address this question, we turned to a century-old theory stating that the right hemisphere is biased toward processing the central aspects of a situation, whereas the left hemisphere tends to process the finer details. If that conception is true, we reasoned, a drug that dampens the activity of the amygdala should impair a man's ability to recall the gist of an emotional story (by hampering the right amygdala) but should hinder a woman's ability to come up with the precise details (by hampering the left amygdala).

Propranolol is such a drug. This so-called beta blocker quiets the activity of adrenaline and its cousin noradrenaline and, in so doing, dampens the activation of the amygdala and weakens recall of emotionally arousing memories. We gave this drug to men and women before they viewed a short slide show about a young boy caught in a terrible accident while walking with his mother. One week later we tested their memory.

The results showed that propranolol made it harder for men to remember the more holistic aspects, or gist, of the story--that the boy had been run over by a car, for example. In women, propranolol did the converse, impairing their memory for peripheral details--that the boy had been carrying a soccer ball.

In more recent investigations, we found that we can detect a hemispheric difference between the sexes in response to emotional material almost immediately. Volunteers shown emotionally unpleasant photographs react within 300 milliseconds--a response that shows up as a spike on a recording of the brain's electrical activity. With Antonella Gasbarri and others at the University of L'Aquila in Italy, we have found that in men, this quick spike, termed a P300 response, is more exaggerated when recorded over the right hemisphere; in women, it is larger when recorded over the left.
Hence, sex-related hemispheric disparities in how the brain processes emotional images begin within 300 milliseconds--long before people have had much, if any, chance to consciously interpret what they have seen.

These discoveries might have ramifications for the treatment of PTSD. Previous research by Gustav Schelling and his associates at Ludwig Maximilian University in Germany had established that drugs such as propranolol diminish memory for traumatic situations when administered as part of the usual therapies in an intensive care unit. Prompted by our findings, they found that, at least in such units, beta blockers reduce memory for traumatic events in women but not in men. Even in intensive care, then, physicians may need to consider the sex of their patients when meting out their medications.

Sex and Mental Disorders

PTSD is not the only psychological disturbance that appears to play out differently in women and men. A PET study by Mirko Diksic and his colleagues at McGill University showed that serotonin production was a remarkable 52 percent higher on average in men than in women, which might help clarify why women are more prone to depression--a disorder commonly treated with drugs that boost the concentration of serotonin.

A similar situation might prevail in addiction. In this case, the neurotransmitter in question is dopamine--a chemical involved in the feelings of pleasure associated with drugs of abuse. Studying rats, Jill B. Becker and her fellow investigators at the University of Michigan at Ann Arbor discovered that in females, estrogen boosted the release of dopamine in brain regions important for regulating drug-seeking behavior. Furthermore, the hormone had long-lasting effects, making the female rats more likely to pursue cocaine weeks after last receiving the drug.
Such differences in susceptibility--particularly to stimulants such as cocaine and amphetamine--could explain why women might be more vulnerable to the effects of these drugs and why they tend to progress more rapidly from initial use to dependence than men do.

Certain brain abnormalities underlying schizophrenia appear to differ in men and women as well. Ruben Gur, Raquel Gur and their colleagues at the University of Pennsylvania have spent years investigating sex-related differences in brain anatomy and function. In one project, they measured the size of the orbitofrontal cortex, a region involved in regulating emotions, and compared it with the size of the amygdala, implicated more in producing emotional reactions. The investigators found that women possess a significantly larger orbitofrontal-to-amygdala ratio (OAR) than men do. One can speculate from these findings that women might on average prove more capable of controlling their emotional reactions.

In additional experiments, the researchers discovered that this balance appears to be altered in schizophrenia, though not identically for men and women. Women with schizophrenia have a decreased OAR relative to their healthy peers, as might be expected. But men, oddly, have an increased OAR relative to healthy men. These findings remain puzzling, but, at the least, they imply that schizophrenia is a somewhat different disease in men and women and that treatment of the disorder might need to be tailored to the sex of the patient.

Sex Matters

In a comprehensive 2001 report on sex differences in human health, the prestigious National Academy of Sciences asserted that "sex matters. Sex, that is, being male or female, is an important basic human variable that should be considered when designing and analyzing studies in all areas and at all levels of biomedical and health-related research."
Neuroscientists are still far from putting all the pieces together--identifying all the sex-related variations in the brain and pinpointing their influences on cognition and propensity for brain-related disorders. Nevertheless, the research conducted to date certainly demonstrates that differences extend far beyond the hypothalamus and mating behavior. Researchers and clinicians are not always clear on the best way to go forward in deciphering the full influences of sex on the brain, behavior and responses to medications. But growing numbers now agree that going back to assuming we can evaluate one sex and learn equally about both is no longer an option.

From checker at panix.com Tue May 3 22:14:22 2005
From: checker at panix.com (Premise Checker)
Date: Tue, 3 May 2005 18:14:22 -0400 (EDT)
Subject: [Paleopsych] WkStd: Civilization and Its Malcontents
Message-ID:

Civilization and Its Malcontents
http://www.weeklystandard.com/Utilities/printer_preview.asp?idArticle=5546&R=C4FE2FB13

Civilization and Its Malcontents
Or, why are academics so unhappy?
by Joseph Epstein
05/09/2005, Volume 010, Issue 32

Faculty Towers: The Academic Novel and Its Discontents
by Elaine Showalter
University of Pennsylvania Press, 143 pp., $24.95

I HAD A FRIEND, now long dead, named Walter B. Scott, a professor at Northwestern University whose specialty was theatrical literature, who never referred to university teaching as other than a--or sometimes the--"racket." What Walter, a notably unambitious man, meant was that it was an unconscionably easy way to make a living, a soft touch, as they used to say.
Working under conditions of complete freedom, having to show up in the classroom an impressively small number of hours each week, with the remainder of one's time chiefly left to cultivate one's own intellectual garden, at a job from which one could never be fired and which (if one adds up the capacious vacation time) amounted to fewer than six months' work a year for pay that is very far from miserable--yes, I'd say "a racket" just about gets it.

And yet, as someone who came late to university teaching, I used to wonder why so many people in the racket were so obviously disappointed, depressed, and generally demoralized. Granted, until one achieves that Valhalla for scholars known as tenure--which really means lifetime security, obtainable in no other job that I know--an element of tension is entailed, but then so it is in every other job. As a young instructor, one is often assigned dogsbody work, teaching what is thought to be dull fare: surveys, composition courses, and the rest. But the unhappier academics, in my experience, are not those still struggling to gain a seat at the table, but those who have already grown dour from having been there for a long while.

So far as I know, no one has ever done a study of the unhappiness of academics. Who might be assigned to the job? Business-school professors specializing in industrial psychology and employer/employee relations would botch it. Disaffected sociologists would blame it all on society and knock off for the rest of the semester. My own preference would be anthropologists, using methods long ago devised for investigating a culture from the outside in.

The closest thing we have to these ideal anthropologists have been novelists writing academic novels, and their lucubrations, while not as precise as one would like on the reasons for the unhappiness of academics, do show a strong and continuing propensity on the part of academics intrepidly to make the worst of what ought to be a perfectly delightful situation.
Faculty Towers is a report on the findings of those novelists who have worked the genre long known as the academic novel. The book is written by an insider, for Professor Elaine Showalter, now in her middle sixties, is, as they used to say on the carnival grounds, "with the show." At various places in her slight book, she inserts her own experience as a graduate student and professor, though not to very interesting effect.

An early entry in the feminist sweepstakes, she is currently the Avalon Foundation Professor of the Humanities at Princeton, a past president of the Modern Language Association, a founder of "gynocriticism" (or the study of women writers)--in other words, guilty until proven innocent. She has also been described--readers retaining a strong sense of decorum are advised to skip the remainder of this paragraph--as "Camille Paglia with balls," a description meant approbatively, or so at least Princeton must feel, for they print it on princetoninfo.com, a stark indication of the tone currently reigning in American universities.

Professor Showalter's book is chiefly a chronological account of Anglophone academic novels for the past sixty or so years, beginning with C.P. Snow's The Masters (1951) and running through examples of the genre produced in the 21st century. Faculty Towers is, for the most part, given over to plot summaries of these novels, usually accompanied by judgments about their quality, with extra bits of feminism (mild scorn is applied where the plight of women in academic life is ignored) thrown in at no extra charge.

The book's title, playing off the John Cleese comedy Fawlty Towers, suggests the book's larger theme: that the university, as reflected in the academic novels Showalter examines, has increasingly become rather like a badly run hotel, with plenty of nuttiness to go round. The difficulty here is that Showalter believes that things are not all that nutty. Mirabile dictu: She finds them looking up.
"The university," she writes, "is no longer a sanctuary or a refuge; it is fully caught up in the churning community and the changing society; but it is a fragile institution rather than a fortress." The feminism in Faculty Towers is generally no more than a tic, which the book's author by now probably cannot really control, and after a while one gets used to it, without missing it when it fails to show up. The only place Showalter's feminism seriously gets in the way, in my view, is in her judgments of Mary McCarthy's The Groves of Academe (a forgettable--and now quite properly forgotten--novel that she rates too highly) and Randall Jarrell's wickedly amusing Pictures from an Institution (which she attempts, intemperately, to squash). The two misjudgments happen to be nicely connected: the most menacing character in Jarrell's novel, Gertrude Johnson, is based on Mary McCarthy, who may well be one of Showalter's personal heroines, of whom Jarrell has one of his characters remark: "She may be a mediocre novelist but you've got to admit that she's a wonderful liar." Sounds right to me. Being with the show has doubtless clouded Showalter's judgment of Pictures from an Institution, which contains, among several withering criticisms of university life, a marvelously prophetic description of the kind of perfectly characterless man who will eventually--that is to say, now, in our day--rise to the presidencies of universities all over the country. Cozening, smarmy, confidently boring, an appeaser of all and offender of none, "idiot savants of success" (Jarrell's perfect phrase), not really quite human but, like President Dwight Robbins of the novel's Benton College, men (and some women) with a gift for "seeming human"--in short, the kind of person the faculty of Harvard is currently hoping to turn the detoxed Lawrence Summers into if they can't succeed in firing him straightaway for his basic mistake in thinking that they actually believe in free speech. C.P. 
Snow's The Masters is a novel about the intramural political alignments involved in finding the right man to replace the dying master of a Cambridge college. In this novel, the worthiness of the university and the significance of the scholars and scientists contending for the job are not questioned; the conflict is between contending but serious points of view: scientific and humanistic, the school of cool progress versus that of warm tradition. In 1951, the university still seemed an altogether admirable place, professors serious and significant. Or so it seemed in the 1950s to those of us for whom going to college was not yet an automatic but still felt to be a privileged choice.

One might think that the late 1960s blew such notions completely out of the water. It did, but not before Kingsley Amis, in Lucky Jim (1954), which Showalter rightly calls "the funniest academic satire of the century," first loosed the torpedoes.

In Lucky Jim, the setting is a provincial English university and the dominant spirit is one of pomposity, nicely reinforced by cheap-shot one-upmanship and intellectual fraudulence. Jim Dixon, the novel's eponymous hero, striving to become a regular member of the history faculty, is at work on an article titled "The Economic Influence of Developments in Shipbuilding Techniques, 1450 to 1485," a perfect example of fake scholarship in which, as he recognizes, "pseudo light" is cast upon "false problems." Amis puts Dixon through every hell of social embarrassment and comic awkwardness, but the reason Jim is lucky, one might tend to forget in all the laughter, is that in the end he escapes the university and thus a life of intellectual fraudulence and spiritual aridity.

Amis's hero is a medieval historian, but the preponderance of academic novels are set in English departments.
The reason for this can be found in universities choosing to ignore a remark made by the linguist Roman Jakobson, who, when it was proposed to the Harvard faculty to hire Vladimir Nabokov, said that the zoology department does not hire an elephant, one of the objects of its study, so why should an English department hire a contemporary writer, also best left as an object of study? Jakobson is usually mocked for having made that remark, but he was probably correct: better to study writers than hire them. To hire a novelist for a university teaching job is turning the fox loose in the hen house. The result--no surprise here--has been feathers everywhere.

Showalter makes only brief mention of one of my favorite academic novels, The Mind-Body Problem by Rebecca Goldstein. Ms. Goldstein is quoted on the interesting point that at Princeton Jews become gentilized while at Columbia Gentiles become judenized, which is not only amusing but true. Goldstein's novel is also brilliant on the snobbery of university life. She makes the nice point that the poorest dressers in academic life (there are no good ones) are the mathematicians, followed hard upon by the physicists. The reason they care so little about clothes--also about wine and the accoutrements of culture--is that, Goldstein rightly notes, they feel that in their work they are dealing with the higher truths, and need not be bothered with such kakapitze as cooking young vegetables, decanting wine correctly, and knowing where to stay in Paris.

Where the accoutrements of culture count for most are in the humanities departments, where truth, as the physical scientists understand it, simply isn't part of the deal. "What do you guys in the English Department do," a scientist at Northwestern once asked me, quite in earnest, "just keep reading Shakespeare over and over, like Talmud?" "Nothing that grand," I found myself replying.
Professor Showalter does not go in much for discussing the sex that is at the center of so many academic novels. Which reminds me that the first time I met Edward Shils, he asked me what I was reading. When I said The War Between the Tates by Alison Lurie, he replied, "Academic screwing, I presume." He presumed rightly.

How could it be otherwise with academic novels? Apart from the rather pathetic power struggles over department chairmanships, or professorial appointments, love affairs, usually adulterous or officially outlawed ones, provide the only thing resembling drama on offer on the contemporary university campus. Early academic novels confined love affairs to adults on both sides. But by the 1970s, after the "student unrest" (still my favorite of all political euphemisms) of the late 1960s, students--first graduate students, then undergraduates--became the lovers of (often married) professors. If men were writing these novels, the experience was supposed to result in spiritual refreshment; if women wrote them, the male professors were merely damned fools. The women novelists, of course, were correct.

The drama of love needs an element of impossibility: think Romeo and Juliet, think Anna Karenina, think Lolita. But in the academic novel, this element seems to have disappeared, especially in regard to the professor-student love affair, where the (usually female) student could no longer be considered very (if at all) innocent. The drama needed to derive elsewhere. That elsewhere hasn't yet been found, unless one counts sexual harassment suits, which are not yet the subject of an academic novel but have been that of Oleanna, a play by David Mamet, who is not an academic but grasped the dramatic element in such dreary proceedings.

Sexual harassment, of course, touches on political correctness, which is itself the product of affirmative action, usually traveling under the code name of diversity.
Many people outside universities may think that diversity has been imposed on universities from without by ignorant administrators. But professors themselves rather like it; it makes them feel they are doing the right thing and, hence, allows them, however briefly, to feel good about themselves. Nor is diversity the special preserve of prestige-laden or large state-run universities.

In the 1970s, I was invited to give a talk at Denison University in Granville, Ohio. I arrived to find all the pieces in place: On the English faculty was a black woman (very nice, by the way), an appropriately snarky feminist, a gay (not teaching the thing called Queer Theory, which hadn't yet been devised), a Jew, and a woman named Ruthie, who drove about in an aged and messy Volkswagen bug, whose place in this otherwise unpuzzling puzzle I couldn't quite figure out. When I asked, I was told, "Oh, Ruthie's from the sixties." From "the sixties," I thought then and still think, sounds like a country, and perhaps it is, but assuredly, to steal a bit of Yeats, no country for old men.

By the time I began teaching in the early 1970s, everyone already seemed to be in business for himself, looking for the best deal, which meant the least teaching for the most money at the most snobbishly well-regarded schools. The spirit of capitalism, for all that might be said on its behalf, wreaks havoc when applied to culture and education.

The English novelist David Lodge neatly caught this spirit at work when he created, in two of his academic novels, the character Morris Zapp. A scholar-operator, Zapp, as described by Lodge, "is well-primed to enter a profession as steeped in free enterprise as Wall Street, in which each scholar-teacher makes an individual contract with his employer, and is free to sell his services to the highest bidder."
Said to be based on the Milton-man Stanley Fish, an identification that Fish apparently has never disavowed but instead glories in, Morris Zapp is the freebooter to a high power turned loose in academic settings: always attempting to strengthen his own position, usually delighted to be of disservice to the old ideal of academic dignity and integrity. Fish himself ended his days with a deanship at the University of Illinois at Chicago for a salary said to be $250,000, much less than a utility infielder in the major leagues makes but, for an academic, a big number. By the time that the 1990s rolled around, all that was really left to the academic novel was to mock the mission of the university. With the onset of so-called theory in English and foreign-language departments, this became easier and easier to do. Professor Showalter does not approve of these goings-on: "The tone of ['90s academic novels]," she writes, "is much more vituperative, vengeful, and cruel than in earlier decades." Crueler blows are required, I should say, the better to capture the general atmosphere of goofiness, which has become pervasive. Theory, and the hodgepodge of feminism, Marxism, and queer theory that resides comfortably alongside it, has now been in the saddle for roughly a quarter-century in American English and Romance-language departments, while also making incursions into history, philosophy, and other once-humanistic subjects. There has been very little to show for it--no great books, no splendid articles or essays, no towering figures who signify outside the academy itself--except declining enrollments in English and other department courses featuring such fare. All that is left to such university teachers is the notion that they are, in a much-strained academic sense, avant-garde, which means that they continue to dig deeper and deeper for lower and lower forms of popular culture--graffiti on Elizabethan chamber pots--and human oddity. 
The best standard in the old days would have university scholars in literature and history departments publish books that could also be read with enjoyment and intellectual profit by nonscholars. Nothing of this kind is being produced today. In an academic thriller (a subdivision of the academic novel) cited by Showalter called Murder at the MLA, the head of the Wellesley English Department is found "dead as her prose." But almost all prose written in English departments these days is quite as dead as that English teacher. For Professor Showalter, the old days were almost exclusively the bad old days. A good radical matron, she recounts manning the phones for the support group protesting, at the 1968 Modern Language Association meeting, "the organization's conservatism and old-boy governance." Now of course it almost seems as if the annual MLA meetings chiefly exist for journalists to write comic pieces featuring the zany subjects of the papers given at each year's conference. At these meetings, in and out the room the women come and go, speaking of fellatio, which, deep readers that they are, they can doubtless find in Jane Austen. Such has been the politicization of the MLA that a counter-organization has been formed, called the Association of Literary Scholars and Critics, whose raison d'être is to get English studies back on track. I am myself a dues-paying ($35 annually) member of that organization. I do not go to its meetings, but I am sent the organization's newsletter and magazine, and they are a useful reminder of how dull English studies have traditionally been. But it is good to recall that dull is not ridiculous, dull is not always irrelevant, dull is not intellectual manure cast into the void. The bad old days in English departments were mainly the dull old days, with more than enough pedants and dryasdusts to go round. 
But they did also produce a number of university teachers whose work reached beyond university walls and helped elevate the general culture: Jacques Barzun, Lionel Trilling, Ellen Moers, Walter Jackson Bate, Aileen Ward, Robert Penn Warren. The names from the bad new days seem to end with the entirely political Edward Said and Cornel West. What we have today in universities is an extreme reaction to the dullness of that time, and also to the sheer exhaustion of subject matter for English department scholarship. No further articles and books about Byron, Shelley, Keats, or Kafka, Joyce, and the two Eliots seemed possible (which didn't of course stop them from coming). The pendulum has swung, but with a thrust so violent as to have gone through the cabinet in which the clock is stored. From an academic novel I've not read called The Death of a Constant Lover (1999) by Lev Raphael, Professor Showalter quotes a passage that ends the novel on the following threnodic note: Whenever I'm chatting at conferences with faculty members from other universities, the truth comes out after a drink or two: Hardly any academics are happy where they are, no matter how apt the students, how generous the salary or perks, how beautiful the setting, how light the teaching load, how lavish the research budget. I don't know if it's academia itself that attracts misfits and malcontents, or if the overwhelming hypocrisy of that world would have turned even the von Trapp family sullen. My best guess is that it's a good bit of both. Universities attract people who are good at school. Being good at school takes a real enough but very small talent. As the philosopher Robert Nozick once pointed out, all those A's earned through their young lives encourage such people to persist in school: to stick around, get more A's and more degrees, sign on for teaching jobs. When young, the life ahead seems glorious. 
They imagine themselves inspiring the young, writing important books, living out their days in cultivated leisure. But something, inevitably, goes awry, something disagreeable turns up in the punch bowl. Usually by the time they turn 40, they discover the students aren't sufficiently appreciative; the books don't get written; the teaching begins to feel repetitive; the collegiality is seldom anywhere near what one hoped for it; there isn't any good use for the leisure. Meanwhile, people who got lots of B's in school seem to be driving around in Mercedes, buying million-dollar apartments, enjoying freedom and prosperity in a manner that strikes the former good students, now professors, as not only unseemly but of a kind a just society surely would never permit. Now that politics has trumped literature in English departments the situation is even worse. Beset by political correctness, self-imposed diversity, without leadership from above, university teachers, at least on the humanities and social-science sides, knowing the work they produce couldn't be of the least possible interest to anyone but the hacks of the MLA and similar academic organizations, have more reason than ever to be unhappy. And so let us leave them, overpaid and underworked, surly with alienation and unable to find any way out of the sweet racket into which they once so ardently longed to get. Joseph Epstein is a contributing editor to The Weekly Standard. From checker at panix.com Tue May 3 22:14:37 2005 From: checker at panix.com (Premise Checker) Date: Tue, 3 May 2005 18:14:37 -0400 (EDT) Subject: [Paleopsych] Nation: (Einstein) The Other 1905 Revolution Message-ID: The Other 1905 Revolution http://www.thenation.com/docprint.mhtml?i=20050516&s=foer by JOSHUA FOER Einstein 1905: The Standard of Greatness by John S. 
Rigden
The Born-Einstein Letters, 1916-1955: Friendship, Politics and Physics in Uncertain Times by Irene Born, trans.; with Introduction by Werner Heisenberg and Foreword by Bertrand Russell [from the May 16, 2005 issue]
In his 1902 book Science and Hypothesis, the French mathematician and physicist Henri Poincaré surveyed the landscape of modern physics and found three fundamental conundrums bedeviling his field: the chaotic zigzagging of small particles suspended in liquid, known as Brownian motion; the curious fact that metals emit electrons when exposed to ultraviolet light, known as the photoelectric effect; and science's failure to detect the ether, the invisible medium through which light waves were thought to propagate. In 1904 a 25-year-old Bern patent clerk named Albert Einstein read Poincaré's book. Nothing the young physicist had done with his life until that point foreshadowed the cerebral explosion he was about to unleash. A year later, he had solved all three of Poincaré's problems. "A storm broke loose in my mind," Einstein would later say of 1905, the annus mirabilis, which John S. Rigden calls "the most productive six months any scientist ever enjoyed." Between March and September, he published five seminal papers, each of which transformed physics. Three were Nobel Prize material; another, his doctoral dissertation, remains one of the most cited scientific papers ever; and the fifth, a three-page afterthought, derived the only mathematical equation you're likely to find on a pair of boxer shorts, E = mc2. Rigden's short book Einstein 1905 is a tour through each of those landmark papers, beginning with the only one that Einstein was willing to call "revolutionary." That first paper, which would earn him the Nobel Prize sixteen years later, was titled "On a Heuristic Point of View About the Creation and Conversion of Light." It could just as easily have been called "Why Everything You Think You Know About Light Is Wrong." 
In 1905 most scientists were certain that light traveled in waves, just like sound. Though it troubled few others, Einstein was deeply perturbed by the notion that energy could flow in continuous waves whereas matter was made up of discrete particles. To paraphrase Bertrand Russell, why should one aspect of the universe be molasses when the other part is sand? When Einstein tried to imagine a universe in which everything, including light, was made up of particles, he realized the simple conceptual shift could explain a lot, including the mysterious photoelectric effect. This was typical of how Einstein thought, argues Rigden. He saw fundamental contradictions in the generalizations that others had made before him and then followed the trail of logic to unexpected conclusions. In some cases it took years before his ideas could be experimentally verified. His theory of light wasn't widely accepted for two decades. The second paper of the year, completed in April, is the least well remembered, even though its many practical applications have made it one of Einstein's most cited works. In that paper, Einstein suggested a way of calculating the size of molecules in a liquid based on measurements of how the liquid behaves. The paper relied on more mathematical brute force and was less graceful than the other four of the year, but it was important nonetheless. Because it showed how to measure the size of otherwise unobservable atoms, it helped nail the coffin shut on the few lingering skeptics, like Ernst Mach, who still did not buy into the atomic theory of matter. Even more damning for those atomic skeptics was Einstein's May paper on Brownian motion, which explained the unpredictable dance of pollen grains in water. The reason for the pollen's erratic behavior, Einstein demonstrated, is that it is being constantly bombarded by water molecules. Most of the time, that bombardment occurs equally from all angles, so the net effect on a grain of pollen is zero. 
But sometimes, statistical fluctuations conspire so that more molecules are pushing in one direction than another, causing a grain to zip through the water. Even though atoms are invisible, Einstein had figured out a way to see them at work. "A few scientific papers, not many, seem like magic," Rigden writes. "Einstein's May paper is magic." Having dispatched two of Poincaré's conundrums, Einstein next turned his attention to the undetected ether; his June paper ended up being the most earth-shattering of the bunch. It demolished two pillars of Newtonian physics, the notions of absolute space and absolute time. In their place, Einstein constructed the special theory of relativity, which held that time appears to stretch and space appears to shrink at velocities approaching the speed of light. The paper had no citations, as if Einstein owed a debt to no one. In fact, that wasn't the case. "Much of his source material was 'in the air' among scientists in 1905," notes Rigden, "and some of these ideas had been published." Physics was on the verge of something big at the turn of the century. It took an Einstein to pull it all together, to ask the big question in the right way. The final paper, published in September, might as well have been an addendum to the June paper. The profoundly simple equation he derived in three pages, E = mc2, was a logical consequence of the special theory of relativity. Equating energy and mass, it explained why the sun shines and why Hiroshima was leveled. More than anything else Einstein produced, it has come to symbolize his genius. A half century after his miracle year, in the final sentence of his final letter to his friend and intellectual sparring partner, the physicist Max Born, a dying Albert Einstein wrote, "In the present circumstances, the only profession I would choose would be one where earning a living had nothing to do with the search for knowledge." 
And so the man whose thought experiments revolutionized science concluded his life posing a thought experiment about himself: Where would we be if Einstein had become a "plumber or peddler," jobs he once rhetorically suggested he'd prefer, instead of a physicist? One place to look to start answering that question is the science itself, which is where Rigden's book begins. Another is the man himself, whose personality is abundantly on display in the letters he exchanged with Born between 1916 and 1955. Those letters, which first appeared in German in 1969 and in English two years later, have now been republished along with Born's commentary, Werner Heisenberg's original introduction and a useful new preface by Diana Buchwald and Kip Thorne. The Einstein that comes through in the letters is self-aware, philosophical, politically conscious (if sometimes naïve), modest, generous, an aesthete and--in his exchanges with Born's wife, Hedi--an occasional flirt. From these epistolary glimpses of Einstein the person it's possible to see how his science, which "seems to be so far removed from all things human," is nonetheless, as Heisenberg writes in his introduction, "fundamentally determined by philosophical and human attitudes." By the time Einstein began corresponding with Born in 1916, his best work was behind him, and he was already an international celebrity. Their letters document the final chapter of Einstein's career, the forty years during which he was an outsider to the quantum physics revolution and alone in his pursuit of a single unified theory capable of explaining all of physics. Ironically, it was at the height of his fame that Einstein was furthest from the scientific mainstream. The aging revolutionary never ceased to be a radical. Like Einstein, Born was an assimilated German Jew who fled the country's rising anti-Semitism in the early 1930s. 
Many of their letters from that period concern the deteriorating political situation in Europe and attempts to arrange teaching posts for exiled German scientists. But unlike Einstein, who perceived an inveterate savagery at the heart of German culture and never again set foot on German soil, Born was more forgiving. After sojourning in Edinburgh during World War II, he returned to Göttingen in 1953. They also differed on their shared Jewish heritage. While Einstein was a moderate Zionist, Born saw no difference between Jewish nationalism and all other embodiments of nationalism that he despised. Their political differences, though, were nowhere near as deep as their scientific disagreements. Einstein considered Born and himself "Antipodean in our scientific expectations." Born was a leading proponent of quantum theory and was awarded the 1954 Nobel Prize for his work establishing the theory's mathematical basis. Einstein was quantum theory's foremost critic. Even though his 1905 paper on the photoelectric effect helped create the field of quantum mechanics, Einstein could never reconcile himself to its nondeterministic implications. He was adamant that the theory provided only a superficial explanation of the universe, and that a deeper theory would someday be found. This conviction was based almost entirely in aesthetic instincts about what the laws of physics ought to look like. "Quantum mechanics is certainly imposing," he famously told Born. "But an inner voice tells me that it is not yet the real thing. The theory says a lot, but does not really bring us any closer to the secret of the 'old one.' I, at any rate, am convinced that He is not playing at dice." Einstein believed that there had to be an "objective reality" at the heart of the universe. If quantum mechanics proved correct, he wrote, again teasing with one of his occupational counterfactuals, "I would rather be a cobbler, or even an employee in a gaming-house, than a physicist." 
Their quarrel over quantum theory dragged out for more than three decades, but the content of their arguments changed little from the first letters they exchanged on the subject in 1919 right up until Einstein's death. In a 1953 letter Born declares, "I hope to be able to convince you at last that quantum mechanics is complete and as realistic as the facts permit." His attempt to persuade his friend after all those years seems almost comic. He goes on to call Einstein's stubbornness on the subject "quite unbearable." Einstein's letters tend to be half as long as Born's and twice as pithy, and are almost always prefaced with an apology for not having written back sooner. Though Born and Einstein only met in person once, they grew to address each other in the tone of lifelong friends. There's no shortage of tough honesty in the letters. There's even the occasional spat. Several exchanges are consumed by discussion over whether Einstein should grant a journalist permission to publish a book called Conversations With Einstein. Born and his wife were concerned that the author would depict Einstein unflatteringly. "Your own jokes will be smilingly thrown back at you," Hedi Born warns. "This book will constitute your moral death sentence for all but four or five of your friends." Her husband pleads with Einstein, "You do not understand this, in these matters you are a little child." Einstein replied, "The whole affair is a matter of indifference to me, as is all the commotion, and the opinion of each and every human being." Nonetheless, Einstein tried and failed to stop the publication of the book, which even Born later admitted wasn't nearly as bad as he had feared. Einstein's detachment is a persistent theme throughout the letters. He tells Born, "I hibernate like a bear in its cave," and in the same letter he off-handedly informs Born of his wife's death, which he describes as just one more thing accentuating his bearish feeling. 
Einstein's seeming indifference to worldly things leads Born to comment that "for all his kindness, sociability and love of humanity, he was nevertheless totally detached from his environment and the human beings included in it." Ironically, the vague constellation of traits that, according to Rigden, stimulated Einstein's early discoveries may also help explain why he spent the second half of his career as an outsider to the quantum revolution. The same aesthetic instincts that led him to recognize the inelegance of the old theories about light and space may have blinded him to the decidedly unbeautiful reality of quantum mechanics. The same "stubbornness of a mule" that kept him on the trail of the general theory of relativity for a decade may also have kept him on less fruitful paths later in his career. And the same self-confidence that gave the 26-year-old patent clerk the audacity to challenge the central precepts of classical physics may have prevented him from recognizing his own failure of imagination with regard to quantum mechanics. Heisenberg writes in his introduction, "In the course of scientific progress it can happen that a new range of empirical data can be completely understood only when the enormous effort is made to...change the very structure of the thought processes. In the case of quantum mechanics, Einstein was apparently no longer willing to take this step, or perhaps no longer able to do so." But another explanation is possible. Einstein always held that posterity would value his ideas more than his peers did. He was right. Again and again, work that was at first deemed loopy has been vindicated. The quest for a unified theory, once an emblem of Einstein's isolation, has become contemporary physics' Holy Grail. It's possible that Einstein's greatest intellectual gamble, his repudiation of quantum theory, may yet prove as prescient. 
Indeed, though they are a minority, many highly regarded scientists still harbor the deep discomfort that Einstein felt about quantum theory. In a 1944 letter to Born on the subject, Einstein wrote, "No doubt the day will come when we will see whose instinctive attitude was the correct one." That day may yet be some time off. From checker at panix.com Tue May 3 22:14:52 2005 From: checker at panix.com (Premise Checker) Date: Tue, 3 May 2005 18:14:52 -0400 (EDT) Subject: [Paleopsych] Commentary: Book Review: Jared Diamond: Scorched Earth Collapse Message-ID: Book Review: Jared Diamond: Scorched Earth Collapse: How Societies Choose to Fail or Succeed http://www.commentarymagazine.com/article.asp?aid=11904087_1 Viking. 575 pp. $29.95 Reviewed by Kevin Shapiro When the ancient Greeks happened upon ruins whose origins they could not fathom, they called them "Hebrews' castles"--a nod to the Hebrew Bible as the oldest available source of recorded history. In reality, the sites belonged not to the Hebrews but to earlier Aegean societies like the Mycenaeans and the Minoans. Regional powers in their day, those societies had disappeared, leaving the Greeks to wonder about their fate. Were they conquered or enslaved, stricken by plague or by famine, by earthquake or by flood? Even today, the desolate places of the world are littered with "Hebrews' castles." We gaze in wonder at, among others, the Anasazi pueblos of the American Southwest (Anasazi being the Navajo word for "the ancients"), the monumental statues of Easter Island, and the grand cities of the Maya entombed in the Yucatán jungle. Aided by the tools of modern archaeology, from the analysis of midden heaps and pollen grains to radiocarbon dating and even more sophisticated physical methods, we often are able to know a good deal about the people responsible for these artifacts. 
In some cases, like that of the long-deserted Viking settlement in Greenland, detailed written records exist alongside the stone shells of churches, barns, and great houses. But none of the records, written or material, speaks directly of the final moments of their authors. We are left, like the Greeks, to puzzle over the reasons these castles were abandoned, what became of their erstwhile inhabitants--and whether a similar fate might one day befall us. This puzzle is Jared Diamond's subject in Collapse: How Societies Choose to Fail or Succeed. In his best-known previous book, Guns, Germs, and Steel (1997), Diamond--a physiologist by training and a professor of geography at UCLA--sought to explain why the peoples of Europe succeeded in outpacing all others in technology and exploration, leaving their mark on the entire modern world. In Collapse, he turns his attention to the opposite extreme: societies that appear to have experienced spectacular crashes. His thesis is that collapse is a consequence of "ecocide"--environmental damage caused by deforestation, intensive agriculture, and the destruction of local flora and fauna. Diamond begins by considering the land and people of Montana, often regarded as one of the few remaining unspoiled corners of the United States. As he tells it, however, Montana is in fact a microcosm of collapse, or at least of major social change, driven by environmental problems. Logging and mining, the traditional pillars of the economy, have declined as the state has become increasingly deforested and polluted. Soil that once supported apple orchards is nearly gone, and so are the glaciers for which Montana is famous. At the same time, a burgeoning population in the Bitterroot Valley has put a strain on the state's water supply and job market. 
Although environmental damage is a nearly ubiquitous corollary of human activity, what makes certain societies vulnerable to ecocide, Diamond argues, is the combination of particularly fragile ecosystems with particularly destructive land-use practices. Like Montana, societies that have collapsed in the past have been situated in areas marginal for agriculture, with climates unfavorable to farming and tree growth. On both Easter Island and Greenland, for example, trees grow slowly and topsoil is relatively poor; their former inhabitants cut down the available trees without realizing, apparently, that more would not soon grow to replace them. Similarly, the Anasazi of Chaco Canyon (in present-day New Mexico) prospered in wet years by employing innovative methods of irrigation, but they grew so numerous that they could not sustain their population in years of drought. What might such societies have done differently, and how have societies in similar straits managed to survive? The key, Diamond finds in each case, is successful adaptation to the fragility of the local environment. The Inuit of Greenland subsist on fish, whales, and seals, at least some of which are present even in periods of cold. The Japanese, who came perilously close to deforestation in the 17th century, instituted a strict system of tree management under the early Tokugawa shoguns, regulating the use of literally every tree on the main islands of Honshu, Kyushu, and Shikoku. Most radical of all were the measures taken by the inhabitants of Tikopia Island in the Pacific, who planted every inch of their land with edible trees and roots, eliminated their pigs, and adopted stringent population controls, including abortion and infanticide. All of these successful societies recognized the dangers they were about to face, Diamond argues, and changed their behavior accordingly. Collapse provides a series of such vignettes, rendered in meticulous detail. 
Although Diamond admits to having set out with the notion that all collapses are brought about by ecocide alone, he recognized early on that this was never the whole story. The Easter Islanders, for example, may have denuded their homeland of palm trees and depleted its fisheries, but their problems were compounded by the rivalries of competing tribal chieftains, who sought to outdo each other by erecting more and bigger statues, thus consuming enormous quantities of wood and food. The Tikopians, by contrast, have a history of weak chiefs and little internecine competition. Still, despite Diamond's repeated bows to the complex interaction of such other factors as history, political economy, and social structure, it is clear that, to his mind, the overriding cause of social collapse remains ecocide, which he also considers the major threat to the survival of civilization on earth today. Indeed, contemporary tales of social change brought about by ecological damage bracket his discussion of the past and constitute the larger portion of the book. Diamond portrays the genocide in Rwanda in 1994, for example, as a classic Malthusian crisis: too many people, too little food and land. Tensions between Hutus and Tutsis were real, to be sure, and were exploited by Rwandan politicians for their own purposes. But Diamond believes that the scale of the bloodbath can only be explained by the inability of Rwandans to support themselves on small farms. Haiti offers a similar cautionary story, having long been the basket case of the Western hemisphere because of almost total deforestation and agricultural insufficiency. Diamond sees the early stages of ecological collapse in two larger nations as well--where he also finds signs of hope. China faces problems of soil erosion, desertification, urbanization, and rapid industrialization, all of which contribute to rapid ecological destruction and the overuse of resources. 
But the Chinese have taken steps to preserve their remaining forests and to limit their population. In Australia, the government is rethinking its historic support for industries like sheep-herding and wheat cultivation--both poorly suited to Australia's ecosystem--and is embarking on projects to restore the continent's native flora and manage its scarce water supplies. Such reforms suggest to Diamond that some of us may yet be able to save ourselves from the fate of the Easter Islanders. Collapse is not light reading. Each of Diamond's vignettes is laden with facts and statistics--in one paragraph, for example, he lists the common and scientific names of fourteen different plants harvested by the Tikopians, followed by a description of the agronomy of Tikopian swamps. Such dry fare is not made easier to digest by the prose style, which tends to be ponderous and repetitive. All the same, Collapse is an impressively researched and keenly argued book. The fatal weakness of Collapse is Diamond's constant overreaching. In trying to apply the lessons of the past to the present, he wanders down several paths with no obvious connection to the main point of the book. His opening section on Montana, for instance, is interesting in its own right, but the problems faced by Montanans are similar only superficially to the problems that were faced by Easter Islanders or Mayans. The fact that Montana's traditional industries have turned out to be unprofitable and impractical is perhaps unfortunate for long-time residents, but it hardly seems to qualify as a historic catastrophe. Other applications of Diamond's thesis to the modern world are even more far-fetched. He repeatedly alludes to the dangers of globalization, suggesting that it makes the entire planet more vulnerable to the collapse of a single nation. 
This might be a reasonable conjecture in the short term, but in the long term it seems more likely that globalization would act to insulate the world from such collapse, since resources that formerly would have had to be provided by a single country could now eventually be supplied by another. The unstated premise of Collapse seems to be that the entire planet is headed for a Malthusian crisis, which can be staved off only by extreme measures like China's one-child policy. But is this view defensible? Diamond takes no note of the extraordinary increases in food production achieved in recent decades; nor does he consider the likelihood that crises in places like Rwanda owe more to poor land management than to a shortage of farmland as such. As for the population controls Diamond seems to endorse, he says nothing about their unhappy practical consequences--including the sort of intensive urban development he decries--let alone their questionability on moral grounds. Indeed, as Collapse progresses, Diamond's arguments grow increasingly one-sided. The entire final chapter is a discursive screed on the need for urgent environmental action, accompanied by a series of rather unconvincing potshots at those who are skeptical of such measures. To his credit, Diamond takes pains to avoid what he rightly calls "environmental determinism." But although he recognizes the role played by social and cultural factors, he does not seem to appreciate how such recognition serves again and again to undermine his emphasis on ecocide, making his thesis seem arbitrary, if not ideologically motivated. For many of the societies Diamond discusses, it is not even clear that environmental damage was a major determinant of collapse. Thus, the Vikings certainly were not helped in the long run by the rapid deforestation of Greenland, or by their concerted effort to maintain cattle in the face of unfavorable local conditions. 
But they were probably hurt even more by their rigid customs, which apparently included the avoidance of fish (as Diamond reports, fish bones are almost never found in their middens). Diamond himself notes that the inhabitants of Iceland, who faced similar problems, managed to survive by switching from an agrarian economy to one based on the production and export of salted cod. Their brethren on Greenland might have survived by similar means, if only they had eaten fish. A theme that emerges from Collapse, though one Diamond himself almost completely ignores, is that societies tend to do best if their decision-making is open and democratic. Many societies that failed--like Easter Island and the Mayan empire--were ruled by elites more concerned with self-aggrandizement than with the stewardship of natural resources for the common good. The Vikings maintained economically ruinous subsidies for cattle farms to serve the needs of rich landlords and foreign-born bishops. Societies that succeeded, by contrast, were often governed by some form of representative democracy. To this day Iceland has the world's oldest legislative body, and the Tikopian "government" (if one can call it that) resembles a condo association. Diamond cites these as examples of "bottom-up" management, but he also praises "top-down" successes, like Tokugawa Japan. Centralized rule, however, has been responsible for many of the worst ecological disasters of modern times, as in the industrial wastelands of the former Soviet Union and Eastern bloc. Judging from Diamond's examples of successful bottom-up societies, and of corporations that have found it in their financial interest to adopt ecologically friendly policies, our best course of action is the one exemplified by the state of Montana: governance at the local level based on democratic values and economic realities. It is, in other words, the course we are already following.
Kevin Shapiro is a research fellow in neuroscience and a student at Harvard Medical School.

From checker at panix.com Tue May 3 22:15:06 2005 From: checker at panix.com (Premise Checker) Date: Tue, 3 May 2005 18:15:06 -0400 (EDT) Subject: [Paleopsych] TLS: (Tom Friedman) Confusing Columbus Message-ID: Confusing Columbus http://www.economist.com/books/displayStory.cfm?story_id=3809512 5.3.31 The World Is Flat By Thomas L. Friedman. Farrar, Straus and Giroux; 488 pages; $27.50. Penguin/Allen Lane; GBP20 THE term "populariser" is often used by those who don't reach a wide audience to sneer at writers who manage to do so. But not all popularisers are guilty of sensationalising or over-simplifying serious topics. There is a sense in which everyone in modern societies, even the most earnest or intellectually gifted, relies on the popularisation of ideas or information, if that term is understood to mean the making of complex issues comprehensible to the non-specialist. Achieving this is admirable. In the field of international affairs one of America's most prominent popularisers is Thomas Friedman, the leading columnist on the subject for the NEW YORK TIMES. Mr Friedman constantly travels the world, interviewing just about everyone who matters. He has won three Pulitzer prizes. If anyone should be able to explain the many complicated political, economic and social issues connected to the phenomenon of globalisation, it should be him. What a surprise, then, that his latest book is such a dreary failure. Mr Friedman's book is subtitled "A Brief History of the Twenty-First Century", but it is not brief, it is not any recognisable form of history--except perhaps of Mr Friedman's own wanderings around the world--and the reference to our new, baby century is just gratuitous. Even according to Mr Friedman's own account, the world has been globalising since 1492. This kind of imprecision--less kind readers might even use the word "sloppiness"--permeates Mr Friedman's book.
It begins with an account of Christopher Columbus, who sets out to find India only to run into the Americas. Mr Friedman claims that this proved Columbus's thesis that the world is round. It did nothing of the kind. Proof that the world is round came only in 1522, when the sole surviving ship from Ferdinand Magellan's little fleet returned to Spain. Undaunted by this fact, Mr Friedman portrays himself as a modern-day Columbus. Like the Italian sailor, he also makes a startling discovery--this time on a trip to India--though it turns out to be just the opposite of Columbus's. An entrepreneur in Bangalore tells him that "the playing field is being levelled" between competitors there and in America by communications technology. The phrase haunts Mr Friedman. He chews it over, and over, and over. And then it comes to him: "My God, he's telling me the world is flat!" Of course, the entrepreneur, even by Mr Friedman's own account, said nothing of the kind. But Mr Friedman has discovered his metaphor for globalisation, and now nothing will stop him. He shows his readers no mercy, proceeding to flog this inaccurate and empty image to death over hundreds of pages. In his effort to prove that the world is flat (he means "smaller"), Mr Friedman talks to many people and he quotes at length lots of articles by other writers, as well as e-mails, official reports, advertising jingles, speeches and statistics. His book contains a mass of information. Some of it is relevant to globalisation. Like many journalists, he is an inveterate name-dropper, but he does also manage to interview some interesting and knowledgeable people. Mr Friedman's problem is not a lack of detail. It is that he has so little to say. Over and over again he makes the same few familiar points: the world is getting smaller, this process seems inexorable, many things are changing, and we should not fear this. Rarely has so much information been collected to so little effect. 
A number of truly enlightening books have been published recently which not only support globalisation, but answer its critics and explain its complexities to the general reader--most notably Jagdish Bhagwati's "In Defence of Globalisation" and Martin Wolf's "Why Globalisation Works". Because of Mr Friedman's fame as a columnist, his book will probably far outsell both of these. That is a shame. Anyone tempted to buy "The World is Flat" should hold back, and purchase instead Mr Bhagwati's book or Mr Wolf's. From checker at panix.com Tue May 3 22:15:24 2005 From: checker at panix.com (Premise Checker) Date: Tue, 3 May 2005 18:15:24 -0400 (EDT) Subject: [Paleopsych] Salon: Don't kill your television Message-ID: Don't kill your television http://www.salon.com/books/review/2005/05/01/johnson/print.html 5.5.1 Far from making us stupid, violent and lazy, TV and video games are as good for us as spinach, says an engaging new book by Steven Johnson. By Farhad Manjoo Pop culture, like fast food, gets a bad rap. It's perfectly understandable: Because we consume so much of the stuff -- we watch so much TV, pack away so many fries -- and because the consumption is so intimate, it's natural to look to our indulgence as the cause of all that ails us. Let's face it, we Americans are fat and lazy and simple-minded; we yell a lot and we've got short attention spans and we're violent and promiscuous and godless; and when we're not putting horndogs into office we're electing dumb guys who start too many wars and can't balance the budget and ... you know what I mean? You are what you eat. The output follows from the input. When you look around and all you see is Ronald McDonald and Ryan Seacrest, it seems natural to conclude that junk food and junk culture are responsible for a large chunk of the mess we're in. The other day, though, in an unbelievably delicious turn of events, the government reported that people who are overweight face a lower risk of death than folks who are thin. 
While the news didn't exactly exonerate junk food, it was a fitting prelude to the publication of Steven Johnson's new polemic "Everything Bad Is Good for You," which argues that what we think of as junk food for the mind -- video games, TV shows, movies and much of what one finds online -- is not actually junk at all. In this intriguing volume, Johnson marshals the findings of brain scientists and psychologists to examine the culture in which we swim, and he shows that contrary to what many of us assume, mass media is becoming more sophisticated all the time. The media, he says, shouldn't be fingered as the source of all our problems. Ryan Seacrest is no villain. Instead, TV, DVDs, video games and computers are making us smarter every day. "For decades, we've worked under the assumption that mass culture follows a steadily declining path towards lowest-common-denominator standards," Johnson writes. "But in fact, the exact opposite is happening: the culture is getting more intellectually demanding, not less." Johnson labels the trend "the Sleeper Curve," after the 1973 Woody Allen film that jokes that in the future, more advanced societies will come to understand the nutritional benefits of deep fat, cream pies and hot fudge. Indeed, at first, Johnson's argument does sound as shocking as if your doctor had advised you to eat more donuts and, for God's sake, to try and stay away from spinach. But Johnson is a forceful writer, and he makes a good case; his book is an elegant work of argumentation, the kind in which the author anticipates your silent challenges to his ideas and hospitably tucks you in, quickly bringing you around to his side. In making his case for pop culture, Johnson, who was a co-founder of the pioneering (and now-defunct) Web journal Feed, draws on research from his last book, "Mind Wide Open," which probed the mysteries of how our brains function. Johnson's primary method of analyzing media involves a concept he calls "cognitive labor." 
Instead of judging the value of a certain book, video game or movie by looking at its content -- at the snappy dialogue, or the cool graphics, or the objectives of the game -- Johnson says that we should instead examine "the kind of thinking you have to do to make sense of a cultural experience." Probed this way, the virtues of today's video games and TV shows become readily apparent, and the fact that people aren't reading long-form literature as much as they used to looks less than dire. "By almost all standards we use to measure reading's cognitive benefits -- attention, memory, following threads, and so on -- the non-literary popular culture has been steadily growing more challenging over the past thirty years," Johnson says. Moreover, non-literary media like video games, TV and the movies are also "honing different mental skills that are just as important as the ones exercised by reading books." Johnson adds that he's not offering a mere hypothesis for how video games and TV shows may affect our brains -- there's proof, he says, that society is getting smarter due to the media it consumes. In most developed countries, including the United States, IQs have been rising over the past half-century, a statistic that of course stands in stark contrast to the caricature of modern American idiocy. Johnson attributes intelligence gains to the increasing sophistication of our media, and writes that, in particular, mass media is helping us -- especially children -- learn how to deal with complex technical systems. Kids today, he points out, often master electronic devices in ways that their parents can't comprehend. They do this because their brains have been trained to understand complexity through video games and through TV; mass media, he says, prepares children for the increased difficulty that tomorrow's world will surely offer, and it does so in a way that reading a book simply cannot do. 
Still, at times Johnson protests too much, setting up what look like straw men defenders of old media so that he can expound on the greatness of the new. It's true that many oldsters continue to say a lot of silly things about the current media environment. Johnson quotes Steve Allen, George Will, the "Dr. Spock" child-care books and the Parents Television Council, all of whom think of modern media in the way former FCC chairman Newton Minow famously described the television landscape of the early 1960s -- as a "vast wasteland." (For good measure, Johnson could also have taken a stab at opportunistic politicians like Jennifer Granholm, the Democratic governor of Michigan, who's trying to pass a state ban on the sale of violent video games to minors, or misguided liberals like Kalle Lasn, who wants vigilantes to shut off your TV.) Yet, I suspect that most of Johnson's audience probably already gets it. I was tickled by much of what Johnson illustrates about how video games and TV affect your brain, and some of it surprised me, but I wasn't really skeptical in the first place. Most people my age -- kids who grew up at the altar of Nintendo and "Seinfeld" -- probably feel the same way. And this is to Johnson's credit: To young people, his take on media feels intuitively right. It's clear what he means when he says TV makes you think, and that video games require your brain. Indeed, if you've ever played a video game, Johnson pretty much has you at hello. That reading books is good for children is the most treasured notion in society's cabinet of received child-rearing wisdom, Johnson notes. Yet it's a pretty well established fact that kids today don't read as much as kids of yesterday -- at least, they're not reading books. (Few studies, Johnson points out, have taken note of the explosion of reading prompted by electronic media like the Web.) What are these children doing? They're playing video games. 
And other than praising games for building a kid's "hand-eye coordination," video games are, say child experts like Dr. Spock, a "colossal waste of time," leading us down the path to hell. What's best about Johnson's section arguing that video games are just as good for you as books are is his tone: He's breezy and funny, and for a while you forget that he's proposing the kind of idea that in earlier times may have ended with a sip of hemlock. As I say, I think most people will be with him from the start: Video games are better than we think? Sure, I'll buy that. But one still feels itchy under the collar when he starts comparing something as sacred as the bound book to the sacrilege that is "Grand Theft Auto." And when, in a short, satirical passage, he points out all the shortcomings of books in the same unfair way most people describe the shortcomings of video games, I'm sure he drives more than a few readers to go out in search of some hemlock. A sample: "Perhaps the most dangerous property of ... books is that they follow a fixed linear path. You can't control their narratives in any fashion -- you simply sit back and have the story dictated to you ... This risks instilling a general passivity in our children, making them feel as if they're powerless to change their circumstances. Reading is not an active, participatory process; it's a submissive one." Of course, Johnson makes clear, he loves books (they provide, for starters, his livelihood). Still, his criticism of books' lack of interactivity -- even if it's offered as a purposefully specious point -- is valid. Books may promote a wide range of mental exercises, and a certain book may send your mind skittering in a dozen euphoric directions, but there are things that a book simply will not, cannot, do. Books don't let you explore beyond the narrative. Their scenery is set, and what's there is all that's there. You may have liked to have visited some of Gatsby's neighbors, but you can't. 
Books also don't ask you to make decisions, and in a larger sense don't require you to participate. You sit back and watch a book unfold before you. The book's possibilities are limited; what will happen is what's written on the next page. Read it a thousand times, still Rabbit always runs. So this should be plain: Because they're interactive, video games promote certain mental functions that books do not. Specifically, video games exercise your brain's capacity to understand complex situations. That's because in most video games, the rules, and sometimes the objectives, aren't explicit. You fall into the sleazy urban landscape of "Grand Theft Auto" with no real idea of what you're supposed to do. Indeed, Johnson points out, much of the action in playing any video game is finding out how to play the game -- determining how your character moves, seeing which weapons do what, testing the physics of the place. If you fall from a building, does your character get hurt? What happens if you open this door? What kind of strategy can you plan to beat the monster on Level 3? The kind of probing gamers employ to determine what's going on in such simulated worlds, Johnson says, is very similar to the kind of probing scientists use to understand the natural world. Kids playing video games, in other words, are "learning the basic procedure of the scientific method." Because TV is more fun, Johnson's section on television is more engaging than his examination of video games, but its revelations also feel a bit more obvious. His main point -- you can see an extended version of it in this New York Times Magazine excerpt -- is that most modern TV shows exercise your brain in ways that old TV shows never dared. Today's shows, whether dramas or comedies, are multithreaded -- several subplots occur at the same time, and in the best shows (like "The Sopranos" or "The West Wing") the subplots often run into each other (there is one popular exception: "Law & Order."). 
Modern shows -- including, of course, reality shows -- also feature many more characters; only a handful of regulars graced "Dallas" every week, but there are dozens of people in "24." Today's TV shows are also far more willing to keep the viewer in the dark about what's going on in a certain scene, or to include allusions to other art forms, or previous years' episodes. Medical jargon has been written into just about every scene on "ER" specifically to keep you on your toes about what's happening. "Nearly every extended sequence in 'Seinfeld' or 'The Simpsons' ... will contain a joke that only makes sense if the viewer fills in supplementary information -- information that is deliberately withheld from the viewer," Johnson writes. "If you've never seen the 'Mulva' episode, or the name 'Art Vandelay' means nothing to you, then the subsequent references -- many of them occurring years after their original appearance -- will pass on by unappreciated." What all this amounts to, Johnson says, is work for your brain. Watching TV is not a passive exercise. When you're watching one of today's popular shows, even something as nominally silly as "Desperate Housewives," you're exercising your brain -- you're learning how to make sense of a complex narrative, you're learning how to navigate social networks, you're learning (through reality TV) about the intricacies of social intelligence, and a great deal more. What I wonder, though, is, Doesn't everyone know that today's TV is better than yesterday's TV? It's here that I think Johnson's too focused on straw men. Like most Americans, I've spent enough time watching television to have earned several advanced degrees in the subject. Yes, TV today is clogged with more sex and violence than TV of yesterday, but for all that, is there anyone in America who doesn't believe that on average, what we've seen on TV in the last decade has been more intricate, more complex and just plain smarter than the shows of the 1980s or the 1970s? 
Of course, there are exceptions; everyone can think of a great show from the 1970s that beats a middling show of today. ("The Jeffersons" kicks "According to Jim's" ass.) But I'm talking about apples-to-apples comparisons: Is there anyone who prefers "Hill Street Blues," which as Johnson points out was one of the best dramas of the 1980s, to "The West Wing" or "ER" or "The Sopranos"? I imagine only the very nostalgic would say they do. In the same way, I don't know how anyone couldn't see that "Seinfeld" is smarter than "Cheers," or that "Survivor" is more arresting than "Family Feud," or that "American Idol" clobbers "Star Search." When I say that the new shows are better, I mean in the same ways that Johnson argues -- not based on content, but on brain work. Today's shows tease your brain in ways that the old shows do not, and you are aware of the difference. We may not have plotted out the shows' mechanism as well as Johnson has -- we can't say precisely why "ER" is completely different from "St. Elsewhere" -- but to me, at least, the difference is clear enough that Johnson's Sleeper Curve is unsurprising. As I see it, then, the most interesting question about Johnson's theory is not whether it's accurate. It's why it's happening -- why is media getting smarter, and why are we flocking to media that actually makes us smarter? Johnson examines the question at some length, and he fingers two usual suspects: technology (the VCR, TiVo, DVDs, ever more powerful game systems) and economics (the increasing importance of the syndication market). But I like the third part of his answer best -- our media's getting smarter, he says, because the brain craves intelligent programming. The dynamic is that of a feedback loop: Today's media is smarter because yesterday's media made us smart to begin with. "Dragnet" prepares you for "Starsky and Hutch," which prepares you for "Hill Street Blues," which begets "ER," "The West Wing" and "The Sopranos." 
If we'd seen "The West Wing" in the 1980s, we wouldn't have known what to do with it. Indeed, many people didn't know what to do with "Hill Street Blues" when it debuted, in the same way that all path-breaking media confound viewers at first. Few people understood the early years of "Seinfeld," and, today, only a small crew can appreciate the genius of "Arrested Development." The amazing thing -- and the most hopeful thing in Johnson's book, and about culture in general -- is that the mind challenges itself to understand what's just out of its reach. After three years of watching "Seinfeld" the nation more or less collectively began to understand the thing. In no time, then, the show lodged itself into the cultural landscape. No longer, after that, could you remark on someone's sexuality without adding, "Not that there's anything wrong with that." And, whatever else you may have heard, this tells us, once and for all, that we are not stupid. Farhad Manjoo is a Salon staff writer.

From checker at panix.com Tue May 3 22:15:41 2005 From: checker at panix.com (Premise Checker) Date: Tue, 3 May 2005 18:15:41 -0400 (EDT) Subject: [Paleopsych] Wilson Q.: Review of Robert Fogel, The Escape from Hunger and Premature Death, 1700-2100 Message-ID: Review of The Escape from Hunger and Premature Death, 1700-2100 The Wilson Quarterly, Winter 2005 v29 i1 p119(2) The Escape from Hunger and Premature Death, 1700-2100: Europe, America, and the Third World. (Book Review) Robert J. Samuelson. THE ESCAPE FROM HUNGER AND PREMATURE DEATH, 1700-2100: Europe, America, and the Third World. By Robert William Fogel. Cambridge Univ. Press. 191 pp. $70 (hardcover), $23.99 (paper) From our present perch of affluence, we forget the abject misery, malnutrition, and starvation that most people endured for most of recorded history.
In a fact-filled book geared toward scholars, Nobel Prize-winning economist Robert Fogel of the University of Chicago reminds us of the huge strides in conquering widespread hunger and of the immense economic and social consequences of that achievement. It may shock modern readers to learn how poorly fed and sickly most people were until 100 or 150 years ago, even in advanced countries. In 1750, life expectancy at birth was 37 years in Britain and 26 in France. Even by 1900, life expectancy was only 48 in Britain and 46 in France. With more fertile land, the United States fared slightly better, with a life expectancy that was greater than Britain's in 1750 (51) but identical to it in 1900 (48). Urbanization and industrialization in the 19th century actually led to setbacks. As Americans moved from place to place, they spread "cholera, typhoid, typhus ... and other major killer diseases," Fogel writes. Urban slums abetted sickness and poor nutrition. Fogel questions whether rising real wages in much of the 19th century signaled genuine advances in well-being. "Is it plausible," he asks, "that the overall standard of living of workers was improving if their nutritional status and life expectancy were declining?" By contrast, life expectancy in advanced countries is now in the high 70s (77 in the United States). Compared with those of the early 1700s, diets are 50 percent higher in calories in Britain and more than 100 percent higher in France. Summarizing his and others' research, Fogel calls this transformation "technophysio evolution." It has had enormous side effects. First, we've gotten taller. A typical American man in his 30s now stands 5 feet 10 inches, almost five inches taller than his English counterpart in 1750. (Societies offset food scarcities in part by producing shorter people, who need less food.) Second, we've gotten healthier.
Although Fogel concedes that advances in public health (better water and sewage systems, for instance) and medicine (vaccines, antibiotics) have paid huge dividends, he argues that much of the gain in life expectancy stems from better nutrition. With better diets, people become more resistant to disease--their immune systems work better and their body tissue is stronger--and they have healthier babies. Finally, better diets have made economic growth possible. An overlooked cause of the meager growth before 1800, Fogel argues, is that many people were too weak to work. In the late 1700s, a fifth of the populations of England and France were "effectively excluded from the labor force." As people ate better and lived longer, they worked harder. Fogel attributes 30 percent of Britain's economic growth since 1790 to better diets. This conclusion seems glib. After all, better diets came from technology that enabled more productive agriculture--better cultivation techniques, better seeds, more specialization. What, specifically, were these advances? Fogel doesn't say. His overwhelming focus on scholarly research on diets also makes his comments on the Third World an elaboration of the obvious (in effect: lots of people are still hungry), with little in the way of recommendations for what could be done. Fogel is always illuminating and, in his omissions, often frustrating.

From checker at panix.com Tue May 3 22:15:53 2005 From: checker at panix.com (Premise Checker) Date: Tue, 3 May 2005 18:15:53 -0400 (EDT) Subject: [Paleopsych] More-Than-Humanism: Ramez Naam In Conversation with R.U. Sirius Message-ID: More-Than-Humanism: Ramez Naam In Conversation with R.U.
Sirius http://www.life-enhancement.com/neofiles/neofile_print.asp?id=61 5.4.25 Ramez Naam's recently published book, More Than Human: Embracing the Promise of Biological Enhancement, provides a well-researched, detail-oriented argument in favor of embracing technological advances that will likely increase our lifespans and our intelligence, allow us greater control over our mind states, and allow us to communicate brain-to-brain. The book is so cogent, even arch technophobe Bill McKibben offered these words of praise: "Ramez Naam provides a reliable and informed cook's tour of the world we might choose if we decide that we should fast-forward evolution. I disagree with virtually all his enthusiasms, but I think he has made his case cogently and well." I chatted with Ramez by email and, in a (literally) fevered state, ribbed him a bit about his seemingly unqualified optimism.

NEOFILES: Describe your personal evolution. How did you come to realize that you were in favor of more-than-humanism? Also, do you call yourself a transhumanist and what are some of your thoughts on that movement or [2]memeplex?

RAMEZ NAAM: I'm a geek. I've always been a geek. Growing up I loved science fiction. In particular I loved stories that showed characters that were more than human in some way: super powers, super brains, immortality, etc. ... I also love science. I subscribe to scientific journals for the fun of it. I'll happily while away an evening reading Science or Nature and just soaking up all this incredible research that's going on. One day in 1999 I saw a paper that looked more like science fiction than science: a group led by [3]John Chapin and Miguel Nicolelis had put electrodes into the brain of a live rat, and through that, had given the rat control over a robot arm. I was floored. I had thought that sort of integration of brains and computers was the realm of science fiction, not science fact.
That was my first conscious realization that science was starting to sound a lot like science fiction. I suppose I've always been in favor of humans enhancing themselves; I just didn't consider it a real possibility until that time. Does that make me a transhumanist? Yes, it does. But I think the term itself is a bit empty. In my mind, most of the citizens of the western world are transhumanists. Every woman who uses a birth control pill is altering her biology in fundamental ways to get the result she wants. Every person wearing glasses or contact lenses, everyone who puts a cell phone to their ear, everyone who pops a multi-vitamin, or drinks a cup of coffee to wake up in the morning or stay awake on a long drive: they're all transhumanists. We are, as a rule, interested in products and technologies that expand our capabilities, that give us control over our world and minds and bodies. The only thing that separates self-identified transhumanists from the rest of consumer society is their enthusiasm about technologies that are still speculative. The self-identified folks are the enthusiasts and early adopters. The rest of society will follow when the benefits are more concrete, the safety's there, and the price is right.

NF: There have been a number of books written around the themes of life extension and transhumanism over the last few years. You seem to have organized a lot of details on activity and research in a lot of different areas. Would you describe the three developments or projects that you find the most exciting?

RN: Only three? There are so many! But if I have to ... First, I'd say, is our growing power to alter our minds chemically and genetically. Back in 1999, Joe Tsien from Princeton made the cover of Time magazine with his [4]Doogie mice. These were mice he'd genetically engineered to have a slightly different structure of a particular chemical receptor in the brain. The gene he tweaked is called NR2B.
Basically he gave them an extra copy of NR2B, which meant that their neurons were more sensitive to certain signals involved in learning, and would learn things more quickly. They were testing this as a potential technique against Alzheimer's disease or other age-related memory loss, thinking that doing this in older people could keep their memories from decaying. But what they found is that these mice didn't just stay mentally sharp to a greater age. They actually ended up smarter, or at least able to learn more quickly, than normal mice. And it was by a large margin. In some tests, like navigating a maze, the Doogie mice would learn the task in half the time it took the normal mice. Since '99, a lot of other teams have produced similar results, including one led by [5]Eric Kandel, who won the Nobel prize in Medicine in 2000 for his work on memory. At least half a dozen companies, [6]including Kandel's, are trying to bring this technology to market now, in the form of a pill that you can swallow. They'll target use in people with memory problems, but you can pretty well assume that there will be off-label use among people just looking to improve their memories. Second, I'd say, is the tremendous progress being made in extending lifespans. Until about 1990, if you asked geneticists about altering aging, they'd tell you that you'd have to make thousands of genetic changes to see any extension of a creature's lifespan. Then in '90, a professor at the University of Colorado, [7]Tom Johnson, published a paper where he showed that by tweaking one gene, he could double the life span of a species of worm. Well, the scientific community did not take well to that. Johnson was called a charlatan and people whispered that he was faking his data. Then a few years later, a more famous researcher named Cynthia Kenyon discovered a second gene that had a similar effect in the same species.
Since then, researchers have found dozens of these genes that can slow aging, in everything from yeast to worms to fruit flies to mice. The amazing thing is that pretty much the same genes have the same effect in all these species. You and I are much more genetically similar to mice than mice are to yeasts. And since the same genes can extend life in both mice and yeast, there's a good chance they'll work for humans as well. Third, I'd say, is the whole field of brain-computer interfaces. Today there are human trials going on of brain implants that allow paralyzed people to control computers or robot arms just by thinking about it. There are blind patients receiving retinal implants and visual cortex implants that take signals from a video camera and feed them into the brain, and the patients can actually see out of these things. [8]DARPA has invested tens of millions of dollars in this field. They're after technologies that can let fighter pilots control their planes by thinking about it, that can let commanders on the battlefield beam 3D maps into the minds of their soldiers: real cyberpunk sort of stuff. To me that's incredibly exciting, because these are actually communication scenarios. They're ways to get information in and out of our brains, or from one person's brain to another, more quickly or efficiently or with greater clarity. What if, instead of using a drawing program, I could just hold an image in my mind and beam it to you? What if we could hold that in a shared mental space (on a computer, perhaps) and work on it together? What if you could record the feeling of writing a book, or seeing a fantastic band, or having an incredible erotic experience, and let people play it back for themselves? Some of the most revolutionary technologies have been communication technologies: the printing press, radio, the internet. Brain-computer interfaces just might be the ultimate communication tech.
NF: You focus quite a bit on population and the tendency of technology to "trickle down." I thought your analysis was pretty on target. Can you give our readers a brief synopsis of your view of why post-humanity will be more distributed and less likely to create population problems than many people suspect? RN: Sure. I think the socioeconomic issues are quite important, which is why I spend two whole chapters on them in the book. There are really two specific questions that come up frequently: "Who will be able to afford these technologies?" and "Won't the population explode if we lengthen human life?" On the population question, it turns out that the major driver of population growth is really fertility rather than the death rate. If you look around the world, the countries with the longest life expectancies (Japan, Sweden) are actually shrinking in population. As these countries have gotten rich, people, particularly women, have decided that they want fewer children. On the other hand, the countries that are growing rapidly (Indonesia, Nigeria, Pakistan) have relatively low life expectancies. People die early there, but those who survive have big families. Now, over the next 50 years, the UN projects that 3.7 billion people are going to die on this planet, while another 6.6 billion will be born. That'll take global population to about 9 billion people. Of the 3.7 billion who are projected to die in the next 50 years, less than 2 billion will die of age-related causes. So even if we cured aging completely tomorrow, and magically delivered the cure to the entire world, the largest possible impact would be about 2 billion lives over 50 years. That would increase global population in 2050 from about 9 billion to about 11 billion: a big change, but not as radical as the more than doubling that happened between 1950 and 2000. In any case, aging isn't going to be cured tomorrow.
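The arithmetic behind that projection is simple enough to check. Here is a minimal sketch in Python; the starting world-population figure (~6.3 billion circa 2005) is my own rough assumption, while the other round numbers are the ones quoted in the interview:

```python
# Back-of-the-envelope check of the population figures above, in billions.

current_population = 6.3   # assumed: rough world population circa 2005
projected_deaths = 3.7     # UN-projected deaths over the next 50 years
projected_births = 6.6     # UN-projected births over the next 50 years
age_related_deaths = 2.0   # stated upper bound on deaths from aging

# Baseline 2050 projection: current population plus births minus deaths.
baseline_2050 = current_population + projected_births - projected_deaths
print(round(baseline_2050, 1))   # 9.2 -- matching "about 9 billion"

# If aging were cured tomorrow and delivered worldwide, at most the
# age-related deaths would be averted:
no_aging_2050 = baseline_2050 + age_related_deaths
print(round(no_aging_2050, 1))   # 11.2 -- "about 11 billion", as stated
```

The point the numbers make is that even the maximal scenario adds about 2 billion people, well short of the 1950-2000 doubling.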
I walk through some calculations showing that if you could raise global life expectancy to 120 years by 2050 (almost twice what it is today), you would raise the 2050 population from the current projection of 8.9 billion people to 9.4 billion people. That's a good-sized increase, but as a percentage of population, it's actually smaller than the change that occurred between 1970 and 1973. The takeaway, for me, is that life extension isn't going to have any radical effect on population in the next few decades. The question of economic access is a little more complex. People do worry that when these enhancement technologies come out, only the rich will have access to them. And they're right: at the very beginning, only the rich will be able to afford some of these techniques. What it helps to realize, though, is that most of these enhancement techniques are really information goods. They cost a huge amount to develop, but almost nothing to manufacture. The same thing is true in general of pharmaceuticals today. Viagra costs about $15 per pill, but only a few cents of that is production cost. Mostly it's Pfizer bringing in profit or paying off the $1 billion price tag of developing a new drug. Pfizer can charge that much because the drug is patented. By law, no one else can manufacture it without Pfizer's consent. But in 2012, the patent expires. At that point, any generic manufacturer can make the drug. The more suppliers you have, the more price competition sets in. The more consumers you have, the more incentive there is for suppliers to enter the market. The net effect is that the more desired any information good is, the cheaper it will be to acquire. You can see this when you look at drugs that are commonly used today. Penicillin was absolutely priceless when first introduced to the market. But now it costs less than one cent per dose. The same inverted supply-and-demand dynamic even applies to non-drug techniques.
LASIK cost $5,000 per eye when it first came out; now you can get it for $299. As more and more people wanted LASIK, more doctors started offering it. And the more doctors there are offering it, the more they have to compete with each other on price. The absolute worst thing you can do if you want these technologies equally available to poor and rich is to ban them. Prohibition would create a black market with worse safety, higher prices, and no scientific tracking of what's going on. Viagra and cocaine cost roughly the same per gram at the moment. In a decade, Viagra will be much cheaper, but cocaine will be the same price it is now. I think we'd rather our enhancements follow prescription drug economics than illegal drug economics. And even if governments could implement perfect bans, that wouldn't stop people from using these technologies. Asia is much more receptive to biotech than the US and Europe. If a rich couple can't get the genetic treatments they want here, they can absolutely fly to Singapore or Thailand and have it done there. The poor or middle class couple doesn't have the same options. If anything, where I'd like to see government intervene is in the opposite direction: investing in those who can't afford these technologies themselves. We already spend a large amount of money enhancing our children. We have free grade schools and high schools, free vaccinations for poor children, guaranteed student loans. And those things pay dividends. Every 1% decrease in health care costs saves the country $10 billion a year. Every 1% increase in productivity makes the country $100 billion richer in a year, or $1 trillion richer over a decade. That money comes from innovation: architects designing better buildings, engineers making better cars, coders putting out better software, scientists inventing entirely new things we haven't conceived of. And that's why we invest in things like education: because we know they pay dividends later on.
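Those dividend figures imply baseline sizes for the US economy and for health care spending. A quick sketch of the arithmetic, where both baselines (roughly right for the mid-2000s) are my own assumptions rather than numbers from the interview:

```python
# Sanity-check of the "1% = $X" dividend claims. The two baseline figures
# below are assumptions (approximate mid-2000s US values), not from the text.

us_gdp = 10_000e9          # assumed: ~$10 trillion annual US GDP
health_spending = 1_000e9  # assumed: ~$1 trillion annual US health spending

savings_per_year = 0.01 * health_spending  # a 1% cut in health care costs
gain_per_year = 0.01 * us_gdp              # a 1% productivity gain
gain_per_decade = 10 * gain_per_year       # the same gain over ten years

print(f"${savings_per_year / 1e9:.0f} billion per year")     # $10 billion
print(f"${gain_per_year / 1e9:.0f} billion per year")        # $100 billion
print(f"${gain_per_decade / 1e12:.0f} trillion per decade")  # $1 trillion
```

So the quoted dividends are consistent with each other given those baseline sizes.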
Biotech enhancements have the same potential. Maybe someday we'll have government personal enhancement loans and scholarships. I can dream. NF: You mention that Asian countries have less of a prejudice against genetic manipulation than Americans do. On the downside, could that be because they have less of a sense of individual autonomy? In other words, a human born with her germ-line engineered to produce certain qualities has no choice in the matter. Do you see a line between biological manipulation and personal autonomy? RN: Of all the ethical issues I talk about in the book (equality, safety, "playing god," and so on), the issues of parents and children are the hardest for me. Most parents really want what's best for their kids, and parents are also generally pretty cautious; they look for safety above all else. So I think for the most part, parents will make OK choices. And those choices are in many ways similar to choices they have to make today: how to raise their child, what school to send her to, whether to let her watch TV, and what and how much. Parents make a huge number of choices today, and by and large we trust them to do so. Even so, it makes me uncomfortable if I think about, for example, highly religious parents genetically engineering their own children to be more religious. There's an aspect of parents wanting to control the behavior of their kids that is very tough to deal with. The good news is that it's likely to be extremely hard to do that. Most personality traits are between one third and one half correlated with a person's genes. And the remainder is typically what geneticists call "non-shared" environmental effects. The "non-shared" part means that it's not shared by two children growing up in the same household. It's something very unique about what happens to them when growing up, rather than something that parents can control environmentally.
So even if you attempt to make your child more religious, for instance, an awful lot of how she turns out is going to depend on chance. You can make it more likely that she'll behave a certain way, but you also run the risk of overshooting. If you try to produce a kid who's more assertive, you might end up with an aggressive monster. If you try to alter your child to be more polite, you might get a doormat. Put these points together, and it's going to be hard for parents to really control the personality of their children. Individuality is going to be around for a long time. And a lot of these kids, by the time they're 20, are going to find that there are personality alterations they can make to themselves that are much more effective than those that were available to their parents before they were born. So whatever starting personality you may try to instill in a child, they're going to grow up to find a world full of options for altering themselves. NF: You focus on the future brought about by biological enhancement. Do you think that evolutions in [9]nanotechnology might alter this picture? If not, why not? If so, how? RN: Nanotech is such a big word; it means a lot of things to a lot of people. The kind people most frequently bring up in this context is the model of tiny nano-robots that can precisely re-organize matter on a molecular scale. Given that technology, we would be able to augment human abilities in amazing ways, making far bigger changes than the ones I write about in the book. But I suspect general-purpose nanotech of that sort is still a long way off. There are lots of questions no one has answered to my satisfaction about how you build these devices, let alone how to program and control them. I don't mean to be downbeat.
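The chance-versus-control point can be illustrated with a toy model. Suppose genes account for roughly 40% of the variance in some personality trait, and an engineered change shifts the genetic component by one standard deviation; the model and all its numbers are illustrative assumptions of mine, not figures from the interview or any study:

```python
# Toy illustration: when genes explain only ~40% of trait variance, a
# genetic "nudge" moves the average but leaves a wide spread of outcomes.
# Heritability and shift size are illustrative assumptions, not data.
import random

random.seed(42)
heritability = 0.4   # assumed fraction of trait variance from genes
genetic_shift = 1.0  # assumed engineered shift, in trait std-dev units

def trait(shift=0.0):
    # Genetic plus non-shared environmental components, scaled so the
    # total variance of the trait is 1.0.
    genetic = random.gauss(shift, heritability ** 0.5)
    environment = random.gauss(0.0, (1 - heritability) ** 0.5)
    return genetic + environment

baseline = [trait() for _ in range(10_000)]
nudged = [trait(genetic_shift) for _ in range(10_000)]

mean = lambda xs: sum(xs) / len(xs)
# The average moves by about the full shift...
print(round(mean(nudged) - mean(baseline), 2))  # ~1.0
# ...yet a sizable minority of "nudged" individuals still land below
# the old average: individual outcomes remain dominated by chance.
print(sum(x < mean(baseline) for x in nudged) / len(nudged))  # ~0.16
```

In other words, engineering shifts the odds without determining the outcome, which is the overshoot risk described above.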
There is great thinking going on in the field, but I suspect that what we'll really see in the next few decades are more narrow applications of nanotech in areas like chip design, sensors, materials, and (relevant to human enhancement) areas of biotech like drug delivery, genetic engineering, and gene sequencing. Biology already is nanotechnology, after all. Every cell in your body is an incredibly complex nanomachine. The small-molecule drugs we use to treat disease plug into interfaces that already exist on these nanomachines. The viruses we use to deliver new genes are their own kind of nanomachine, evolved for this specific purpose. So while there are a lot of ideas on how to build new classes of machines from scratch, I suspect things will progress much faster in areas where people take these existing designs (our genome and cell biology and neural architecture) and make incremental changes to them with the best tools at hand. NF: Referring back to your first answer, lots of transgeeks reference superheroes in comics and SF. But one of the aspects of comic superheroes that appeals to young imaginations is the fact that the superheroes have powers that everybody else doesn't have! What about the possibility that individuals and societies might try to sabotage the enhancements of the enemy? In other words, what about criminal social competition and war? In 10 years, instead of worrying about the North Koreans getting the nuke, we might be worrying about them getting the latest upgrade of supersoldier. RN: To the extent that this technology is employed by the military, you're sure to see one side try to one-up the other. The US military wants to have the most capable soldiers possible, which means having the best training, best gear, and best enhancements. They're going to try to keep that edge through a combination of outspending the competition, keeping some technologies secret or restricted, and trying to deny competitors their capabilities on the battlefield.
At the same time, we're never going to be as worried about souped-up soldiers as we are about nukes or infectious bio-weapons. The scale of the damage you can do is different by orders of magnitude. All told, though, I think the majority of investment in these technologies is going to be consumer-driven: first in the medical realm and then in the self-improvement realm. If you think about it, the US military spends a huge amount on computers, physical fitness, and skills training for its soldiers. But that's still a drop in the bucket compared to the overall size of the computer industry, the amount consumers spend on gym memberships and exercise videos, or the total education spending in this country. NF: I guess what I'm trying to do is get underneath your apparently unfailingly bright view of the human species (just 'cause I'm in a mood). I'm generally trans-positive, but in the light of human history, nightmare scenarios seem at least as plausible. So what makes you so damn upbeat, Ramez? RN: You know, these technologies will definitely cause problems. There's just no way around that. Every technology that's really mattered has had some sort of unexpected consequence on society. Cars lead to highway fatalities and smog. Antibiotics contributed to the population explosion of the last century. The internet makes it easier to transmit child porn. And every really powerful technology is employed for violent military uses. Trains, automobiles, and planes (civilian transport technologies) make armies more powerful and more deadly. Radio allows the coordination of larger groups of violent men. Even agriculture, one of the most basic technologies we have, helped usher in organized warfare by increasing population densities and allowing the creation of a soldier class. So I'm not blind to the fact that there will be problems. But when I follow the course of human history, even with the periodic atrocities and downturns, the world seems to be steadily becoming a better place.
At the turn of the 20th century, average life expectancy was less than 40 years. Today it's 66. It's 66 years in India, one of the poorest nations on earth. That's twice the life expectancy that the Romans enjoyed at the height of their empire. And the developing world is actually catching up with the rich world in life expectancy. The gap is closing every year. You can see the same thing in the amount of violence in society. We have this romantic notion of peaceful life among the hunter-gatherers, but anthropologists have documented that in hunter-gatherer tribes, warfare routinely accounted for twenty, thirty, forty percent of all male deaths. In the 20th century, by contrast, less than one percent of male deaths came from warfare, even when you include the world wars. And then there's personal freedom. Even as recently as a couple of decades ago, people's choices were narrower. We live in a world where more and more people are being educated, more are able to choose how they spend their lives, and more people have access to a wider variety of information and goods than ever before. People around the world on average have a higher standard of living than ever before. I don't think those trends are just coincidental. I think they're an emergent property of human societies, particularly free human societies. When you put millions or billions of people together and give them freedom to choose how they'll spend their time, energy, and money, a higher-level intelligent behavior emerges from the whole. I'm not saying this in any sort of mystical way, just a pragmatic one. Many people working together seem to make far better choices than any individual, even the smartest individual, could possibly make. And one of the fundamental trends I see in the world is this movement toward increasing the ability of individuals to interact, share information, and communicate with one another.
Or maybe everything I just said is a rationalization, and I'm actually upbeat because of some random variation in my serotonin receptor genes. References 1. http://www.morethanhuman.org/ 2. http://www.life-enhancement.com/le/neofiles/default.asp?ID=13 3. http://www.downstate.edu/pharmacology/chapin.htm 4. http://news.bbc.co.uk/1/hi/sci/tech/435816.stm 5. http://en.wikipedia.org/wiki/Eric_R._Kandel 6. http://www.memorypharma.com/a_advisoryboard.html 7. http://www.cbsnews.com/stories/2003/08/04/health/main566593.shtml 8. http://www.darpa.mil/ 9. http://www.life-enhancement.com/le/neofiles/default.asp?ID=20 From checker at panix.com Tue May 3 22:16:06 2005 From: checker at panix.com (Premise Checker) Date: Tue, 3 May 2005 18:16:06 -0400 (EDT) Subject: [Paleopsych] GovtExec: DHS chief floats idea of collecting private citizens' information Message-ID: DHS chief floats idea of collecting private citizens' information http://www.govexec.com/story_page.cfm?articleid=31124&printerfriendlyVers=1& DAILY BRIEFING April 29, 2005 DHS chief floats idea of collecting private citizens' information By Siobhan Gorman, [2]National Journal Call it Total Information Awareness, homeland-style. Homeland Security Secretary Michael Chertoff this week floated an idea to start a nonprofit group that would collect information on private citizens, flag suspicious activity, and send names of suspicious people to his department. The idea, which Chertoff tossed out at an April 27 meeting with security-industry officials, is reminiscent of the Defense Department's now-dead Total Information Awareness program that sought to sift through heaps of foreign intelligence information to root out potential terrorist activity. According to one techie who attended the April 27 meeting, Chertoff told the group, "Maybe we can create a nonprofit and track people's activities, and an algorithm could red-flag individuals. Then, the nonprofit could give us the names."
Chertoff also suggested that private industry form a group to collect proprietary information about cyber- and other infrastructure-security breaches from companies; scrub it of identifying information; aggregate it; and pass it along to the department. The financial services industry already has such a group. "The secretary was responding to a hypothetical question with a hypothetical answer," said Homeland Security Department press secretary Brian Roehrkasse. "He did not offer specific programmatic content or discuss any specific proposed approach. Rather, he was discussing, in general terms, the importance of this issue of balancing security and privacy." Harris Miller, president of the Information Technology Association of America, organized the gathering of about 50 security-industry executives from companies such as Microsoft, Oracle, and Verizon. Reached by phone at the meeting, he characterized the event as "an organizational meeting to discuss how the [information-technology] industry can work more effectively with each other" and with the Homeland Security Department. Because the meeting was closed to the press, Miller would not discuss Chertoff's comments. One meeting participant said that Chertoff told the group that having a nonprofit collect names rather than the government "would alleviate some of the concerns people have." Not so for this participant: "This is what made me sort of shift in my seat. It sounds like investigating every person for no reason." He was particularly concerned that an unknown formula created by this new group would determine the red flags.
From checker at panix.com Tue May 3 22:16:18 2005 From: checker at panix.com (Premise Checker) Date: Tue, 3 May 2005 18:16:18 -0400 (EDT) Subject: [Paleopsych] NYT: Chimeras on the Horizon, but Don't Expect Centaurs Message-ID: ---------- Forwarded message ---------- Date: Tue, 3 May 2005 10:49:46 -0400 (EDT) From: Premise Checker To: Transhuman Tech Subject: NYT: Chimeras on the Horizon, but Don't Expect Centaurs Chimeras on the Horizon, but Don't Expect Centaurs New York Times, 5.5.3 http://www.nytimes.com/2005/05/03/science/03chim.html By [1]NICHOLAS WADE Common ground for ethical research on human embryonic stem cells may have been laid by the National Academy of Sciences in the well-received guidelines it proposed last week. But if research on human embryonic stem cells ever gets going, people will be hearing a lot more about chimeras, creatures composed of more than one kind of cell. The world of chimeras holds weirdnesses that may require some getting used to. The original chimera, a tripartite medley of lion, goat and snake, was a mere monster, but mythology is populated with half-human chimeras - centaurs, sphinxes, werewolves, minotaurs and mermaids, and the gorgon Medusa. These creatures hold generally sinister powers, as if to advertise the pre-Darwinian notion that species are fixed and penalties are severe for transgressing the boundaries between them. Biologists have been generating chimeras for years, though until now of a generally bland variety. If you mix the embryonic cells of a black mouse and a white mouse, you get a patchwork mouse, in which the cells from the two donors contribute to the coat and to tissues throughout the body. Cells can also be added at a later stage to specific organs; people who carry pig heart valves are, at least technically, chimeric. The promise of embryonic stem cells is that since all the tissues of the body are derived from them, they are a kind of universal clay. 
If biologists succeed in learning how to shape the clay into specific organs, like pancreas glands, heart muscle or kidneys, physicians may be able to provide replacement parts on demand. Developing these new organs, and testing them to the standards required by the Food and Drug Administration, will require growing human organs in animals. Such creations - of pigs with human hearts, monkeys with human larynxes - are likely to be unsettling to many. "I think people would be horrified," said Dr. William Hansen, an expert in mythology at Indiana University. Chimeras grip the imagination because people are both fascinated and repulsed by the defiance of natural order. "They promote a sense of wonder and awe and for many of us that is an enjoyable feeling; they are a safe form of danger as in watching a scary movie," Dr. Hansen said. From the biologists' point of view, animals made to grow human tissues do not really raise novel issues because they can be categorized as animals with added human parts. Biologists are more concerned about animals in which human cells have become seeded throughout the system. "The mixing of species is something people do worry about and their fears need to be addressed," said Dr. Richard O. Hynes of the Massachusetts Institute of Technology, the co-chairman of the National Academy of Sciences committee that issued the research guidelines. Foreseeing the need for chimeras if stem cell research gets near to therapy, Dr. Hynes's committee delved into the ethics of chimera manufacture, defining the two cases in which human-animal chimeras could raise awkward issues. One involves incorporating human cells into the germ line; the other involves using a human brain, creating a human or half-human mind imprisoned in an animal body. In the case of human cells' invading the germ line, the chimeric animals might then carry human eggs and sperm, and in mating could therefore generate a fertilized human egg.
Hardly anyone would desire to be conceived by a pair of mice. To forestall such discomforting possibilities, the committee ruled that chimeric animals should not be allowed to mate. Still, there may in the future be good reason to generate mice that produce human oocytes, as the unfertilized egg is called. Tissues made from embryonic stem cells are likely to be perceived as foreign by the patient's immune system. One way around this problem is to create the embryonic stem cells from a patient's own tissues, by transferring a nucleus from the patient's skin cell into a human oocyte whose own nucleus has been removed. These nuclear transfers, which are also the way that cloned animals are made, are at present highly inefficient and require some 200 oocytes for each successful cloning. Acquiring oocytes from volunteers is not a trivial procedure, and the academy's recommendation that women who volunteer should not be paid is unlikely to increase supply. Chimeric mice that make human oocytes could be the answer. There are also sound scientific reasons for creating mice with human brain cells, an experiment that has long been contemplated by Dr. Irving Weissman of Stanford. Many serious human diseases arise through the loss of certain types of brain cell. To test if these can be replaced with human neural stem cells, Dr. Weissman injected human brain cells into a mouse embryo and showed that they followed the rules for mouse neural stem cells, migrating to the olfactory bulb to create a regular stream of new odor-detecting neurons. The mice may have been perplexed by their deficient sense of smell, but probably not greatly so, because human cells constituted less than 1 percent of their brain. Dr. Weissman decided it would be useful to have a mouse with a much larger percentage of human brain cells, but he sought ethical guidance before trying the experiment.
He plans to let such mice develop as fetuses and to curtail the experiment before birth, to see if their human brain cells have arranged themselves in the architecture of a mouse brain or a human brain. Given the nine months it takes for a human brain to be constructed, it seems unlikely that the developmental program of the human neurons would have time to unfold very far in the 20-day gestation of a mouse. Contrary to the plot of every good horror movie, the biologists' chimera cookbook contains only recipes of medical interest. But if there were no limits, could they in fact turn chimeras of myth into reality? That depends on the creature. If embryonic cells from human and horse were mixed together, the cells of each species would try to contribute to each part of the body, as in the patchwork mouse, but in this case with goals so incompatible it is hard to see any viable creature being formed. Centaurs, in any case, have six limbs, and that would be fine for an insect but violates the standard mammalian body plan. A much greater chance of creating a viable chimeric creature would come from injecting human embryonic stem cells into a monkey or ape. For this reason the academy committee has firmly ruled out such experiments as unethical. But to continue a little on the path of fantasy, humans are still very similar to chimpanzees, their closest surviving cousins, and an embryo constructed of cells from each may be viable enough to be born. This chimerical creature would probably not be as enjoyable as the chimeras of mythology but more of a problem human - a Caliban-like personage with bad manners and difficult habits. "If something were half human and half animal, what would our moral responsibilities be?" says Richard Doerflinger of the United States Conference of Catholic Bishops. "It might be immoral to kill such a creature. It's wrong to create creatures whose moral stature we are perplexed about."
Evidently the first rule of chimeric chemistry is not to make creatures whose behavior straddles the perceived division between the human and animal worlds. References 1. http://query.nytimes.com/search/query?ppds=bylL&v1=NICHOLAS%20WADE&fdq=19960101&td=sysdate&sort=newest&ac=NICHOLAS%20WADE&inline=nyt-per From waluk at earthlink.net Wed May 4 00:32:27 2005 From: waluk at earthlink.net (G. Reinhart-Waller) Date: Tue, 03 May 2005 17:32:27 -0700 Subject: [Paleopsych] child's play In-Reply-To: <20050503190556.13160.qmail@web30802.mail.mud.yahoo.com> References: <20050503190556.13160.qmail@web30802.mail.mud.yahoo.com> Message-ID: <4278181B.1060400@earthlink.net> Michael Christopher wrote: >>--That's an odd list... smiling is lumped in with wrestling? And with no distinction among levels of roughness in play fighting? I doubt there's any "politically correct" movement to ban smiling on the playground. >> Maybe not. Could be that Anthony Pellegrini, as a professor in early childhood development, thinks all children need a broad range of activity to experience the entire spectrum from fighting to fleeing. Only by knowing these activities will children grow into adults who can handle both emotional extremes, depending on the circumstances. Isn't this similar to gambling at poker and knowing when to hold 'em and when to fold 'em? As the song continues ... know when to walk away, know when to run.
Regards, Gerry Reinhart-Waller From shovland at mindspring.com Wed May 4 04:41:07 2005 From: shovland at mindspring.com (Steve Hovland) Date: Tue, 3 May 2005 21:41:07 -0700 Subject: [Paleopsych] Future shape of media- presentation from ACRL 2005 Message-ID: <01C55028.D1061D80.shovland@mindspring.com> http://www.robinsloan.com/epic/ Click on: Go to a random mirror (you may have to keep trying if there's an error page) This is pretty neat flash work Steve Hovland www.stevehovland.net From HowlBloom at aol.com Wed May 4 06:02:23 2005 From: HowlBloom at aol.com (HowlBloom at aol.com) Date: Wed, 4 May 2005 02:02:23 EDT Subject: [Paleopsych] instant evolution in societies of genes Message-ID: Note the following quote in the article below: "These genes ... are changing more swiftly than would be expected through random mutation alone." The genes in question are genes that code for learning, genes that code for adaptive intelligence. These genes outpace the others in humans and chimps. The research outlined below indicates that these genes are first in the race to reorganize and upgrade themselves; they outspeed other genes in evolution. What do these fast-track genes have in common? They are the genes of the immune system and the genes of apoptosis, the genes of pre-programmed cell suicide. Pre-programmed cell suicide determines which cells we need and which we don't. It resculpts the body to fit the exigencies of the moment. More important, the genes of pre-programmed cell suicide determine which 50% of the cerebral neurons we're born with will live and which will die. In this harsh process of judgement, apoptosis shapes the brain to live in the society we're a part of and to deal with the problems that society demands we help solve. Pre-programmed cell death, I suspect, also shapes our body to fit the demands of our physical environment. It expands the size of our lungs if we grow up in the Andes Mountains, where the air is thin.
It makes sure that we don't waste energy and materiel on oversized lungs if we're born and raised near sea level (which 60% of us humans are). Then there's the immune system, a learning mesh, a creative web, a neural-net-like community of nodes, of modules. The immune system is, in its own way, nearly as smart as the brain. The brain's advantages: a brain brings multiple intelligences to work on a problem -- seven of them if you go by Howard Gardner. I suspect the brain has more than that mere seven if you count the many forms of conscious reason, the many forms of intuition, the many forms of muscular metaphor, the many systems that keep us walking while we're thinking or talking, our sensory systems, and the autonomous systems that take care of functions we seldom have to be aware of -- heartbeat, digestion, and shunting blood to the place where it's most needed at the moment. The genes of the immune system and of apoptosis. These are the genes of what Global Brain: The Evolution of Mass Mind From The Big Bang to the 21st Century calls "inner-judges" and of what The Lucifer Principle: A Scientific Expedition Into the Forces of History calls "self-destruct mechanisms". According to these two books, the genes of the immune system and of apoptosis are the genes that turn us into modules of a larger collective learning machine, a neural net that wires our subcultures, our nations, and our global societies into a massive, creative computational engine, a thinking, dreaming, reperceiving, and invention machine. The genes of the immune system and of apoptosis are the non-stop sharpeners of learning's cutting edge. And the genes of the immune system and of apoptosis don't lazily await random mutation to adapt. They take adaptation into their own hands, into their own c's, a's, g's, and t's, into their own thinking mesh. I suspect they also pull off what Jeff Hawkins talks about in his On Intelligence: they feed their output back into their input. 
They experiment with adjustments in our phenotype, in our bodies and our minds. They test their experiments in our social and physical environment. They incorporate what works and toss out what doesn't -- even if that means tossing out you and me. Which means that like Eshel Ben-Jacob's creative webs of bacteria, the genes of the immune system and of apoptosis, the genes of instant evolution, may be able to spot problems, generate potential solutions, then respond to the success or failure of these hypotheses. The bottom line is this: Communities of genes -- the community of 35,000 in a human genome, the community of 3.5 quadrillion (3,500,000,000,000,000) in a single human being, or the community of 3.5 septillion (3,500,000,000,000,000,000,000,000) in a society the size of China -- are much more nimble than we think. Howard Retrieved May 3, 2005, from the World Wide Web http://www.newscientist.com/article.ns?id=dn7335 Fastest-evolving genes in humans and chimps revealed 18:37 03 May 2005 NewScientist.com news service Jennifer Viegas The most comprehensive study to date exploring the genetic divergence of humans and chimpanzees has revealed that the genes most favoured by natural selection are those associated with immunity, tumour suppression [hb: the immune system, like the brain, is one of our swiftest learning machines], and programmed cell death [hb: programmed cell death shapes our morphology to fit the shifts in our environment -- especially the shifts in human culture. In other words, apoptosis is also a learning mechanism, part of what makes the connectionist machine work.]. These genes show signs of positive natural selection in both branches of the evolutionary tree and are changing more swiftly than would be expected through random mutation alone. 
Lead scientist Rasmus Nielsen and colleagues at the University of Copenhagen, Denmark, examined the 13,731 chimp genes that have equivalent genes with known functions in humans. Research in 2003 revealed that genes involved with smell, hearing, digestion, long bone growth, and hairiness are undergoing positive natural selection in chimps and humans. The new study has found that the strongest evidence for selection is related to disease defence and apoptosis - or programmed cell death - which is linked to sperm production. Plague and HIV Nielsen, a professor of bioinformatics, believes immune and defence genes are involved in "an evolutionary arms race with pathogens". "Viruses and other pathogens evolve very fast, and the human immune system is constantly being challenged by the emergence of new pathogenic threats," he told New Scientist. "The amount of selection imposed on the human population by pathogens - such as the bubonic plague or HIV - is enormous. It is no wonder that the genes involved in defence against such pathogens are evolving very fast." Harmit Singh Malik, a researcher at the Fred Hutchinson Cancer Research Center in Seattle, Washington, US, agrees. Both Malik and Nielsen, however, expressed surprise over the findings concerning tumour suppression, which is linked to apoptosis - or programmed cell death - which can reduce the production of healthy, mature sperm. Selfish mutation The discovery by Nielsen that genes involved in apoptosis show strong evidence for positive natural selection may be due, in part, to the evolutionary drive for sperm cells to compete. Cells carrying genes that hinder apoptosis have a greater chance of producing mature sperm cells, so Nielsen believes these genes can become widespread in populations over time. 
But because primates also use apoptosis to eliminate cancerous cells, positive selection in this case may not be favourable for the mature animal: "The selfish mutations that cause apoptosis avoidance may then also reduce the organism's ability to fight cancer," Nielsen explains. Journal reference: Public Library of Science Biology (vol 3, issue 6) Related Articles Life's top 10 greatest inventions http://www.newscientist.com/article.ns?id=mg18624941.700 09 April 2005 Sleeping around boosts evolution http://www.newscientist.com/article.ns?id=mg18424731.500 13 November 2004 Genetically-modified virus explodes cancer cells http://www.newscientist.com/article.ns?id=dn5056 01 June 2004 Weblinks Rasmus Nielsen, University of Copenhagen http://www.binf.ku.dk/users/rasmus/webpage/ras.html Harmit Singh Malik's lab, Fred Hutchinson Cancer Research Center http://www.fhcrc.org/labs/malik/ Public Library of Science Biology http://biology.plosjournals.org/perlserv/?request=index-html&issn=1545-7885 ---------- Howard Bloom Author of The Lucifer Principle: A Scientific Expedition Into the Forces of History and Global Brain: The Evolution of Mass Mind From The Big Bang to the 21st Century Visiting Scholar-Graduate Psychology Department, New York University; Core Faculty Member, The Graduate Institute www.howardbloom.net www.bigbangtango.net Founder: International Paleopsychology Project; founding board member: Epic of Evolution Society; founding board member, The Darwin Project; founder: The Big Bang Tango Media Lab; member: New York Academy of Sciences, American Association for the Advancement of Science, American Psychological Society, Academy of Political Science, Human Behavior and Evolution Society, International Society for Human Ethology; advisory board member: Youthactivism.org; executive editor -- New Paradigm book series. 
For information on The International Paleopsychology Project, see: www.paleopsych.org for two chapters from The Lucifer Principle: A Scientific Expedition Into the Forces of History, see www.howardbloom.net/lucifer For information on Global Brain: The Evolution of Mass Mind from the Big Bang to the 21st Century, see www.howardbloom.net -------------- next part -------------- An HTML attachment was scrubbed... URL: From waluk at earthlink.net Wed May 4 23:04:17 2005 From: waluk at earthlink.net (G. Reinhart-Waller) Date: Wed, 04 May 2005 16:04:17 -0700 Subject: [Paleopsych] Re: child's play In-Reply-To: <20050504185955.77455.qmail@web30804.mail.mud.yahoo.com> References: <20050504185955.77455.qmail@web30804.mail.mud.yahoo.com> Message-ID: <427954F1.6040108@earthlink.net> Firstly, please let me change the repetitive phrase "broad range of activity" to the single word "play" so that the sentence in fact reads ".... early childhood development thinks all children need a broad range of activity to experience playing roles as diverse as fighting and fleeing." Whether one is acting the role of trespasser or one who is being trespassed against, all roles in effect turn out to be play through the eyes of William Shakespeare's "the world is a stage.....". Whether or not wrestling, backyard or in the ring, is considered dangerous depends on what it is being compared to. In my opinion wrestling is less dangerous than boxing, soccer, or football but that's my two cents' worth. Could be that you have a different idea in mind. This is why I continue saying that "truth lies in the eyes of the beholder". Regards, Gerry Reinhart-Waller Michael Christopher wrote: >>>Could be that Anthony Pellegrini as a professor in >>> >>> >early childhood development thinks all children need a >broad range of activity to experience the entire range >of activity from fighting to fleeing.<< > >--I think most people would agree, the question is, >"when is fighting more than play". 
His list didn't >make that distinction, "fighting to fleeing" could >include anything from backyard wrestling (quite >dangerous) to "running and jumping" (totally normal). > >Michael > > >__________________________________________________ >Do You Yahoo!? >Tired of spam? Yahoo! Mail has the best spam protection around >http://mail.yahoo.com > > > From anonymous_animus at yahoo.com Wed May 4 23:09:29 2005 From: anonymous_animus at yahoo.com (Michael Christopher) Date: Wed, 4 May 2005 16:09:29 -0700 (PDT) Subject: [Paleopsych] reality shows In-Reply-To: <200505041802.j44I2lR03171@tick.javien.com> Message-ID: <20050504230929.26536.qmail@web30812.mail.mud.yahoo.com> Gerry says: >>Rather than placing blame, we should instead focus on our ideals and goals that contribute to a group rather than only to personal satisfaction. Incidentally this seems to be the thrust of the t.v. show "Extreme Makeover, Home Edition" where a group of volunteers designs, builds, and decorates a home for worthy clients. It's not much, but it's a beginning.<< --It's interesting to contrast those shows with the cutthroat ones. I wonder which way our culture will head as a whole? Dog-eat-dog, or community? Michael Discover Yahoo! Find restaurants, movies, travel and more fun for the weekend. Check it out! 
http://discover.yahoo.com/weekend.html From shovland at mindspring.com Wed May 4 23:12:11 2005 From: shovland at mindspring.com (Steve Hovland) Date: Wed, 4 May 2005 16:12:11 -0700 Subject: [Paleopsych] The Society of Organelles Message-ID: <01C550C4.07DE2BE0.shovland@mindspring.com> http://www.winterwren.com/apbio/cellorganelles/cells.html

Nonmembrane Bound Organelles
  Ribosomes
  Centrioles
  Microtubules

Membrane Bound Organelles
  Organelles made up of single membranes
    Vacuoles
    Lysosomes
    Vesicles
    Endoplasmic reticulum
    Golgi Apparatus
    Peroxisomes
    Endomembrane System
  Organelles made up of double membranes
    Mitochondria
    Chloroplasts

From shovland at mindspring.com Wed May 4 23:23:18 2005 From: shovland at mindspring.com (Steve Hovland) Date: Wed, 4 May 2005 16:23:18 -0700 Subject: [Paleopsych] The Language of Enzymes Message-ID: <01C550C5.95C237C0.shovland@mindspring.com> http://users.rcn.com/jkimball.ma.ultranet/BiologyPages/E/Enzymes.html Enzymes are catalysts. Most are proteins. (A few ribonucleoprotein enzymes have been discovered and, for some of these, the catalytic activity is in the RNA part rather than the protein part.) From waluk at earthlink.net Wed May 4 23:39:02 2005 From: waluk at earthlink.net (G. Reinhart-Waller) Date: Wed, 04 May 2005 16:39:02 -0700 Subject: [Paleopsych] reality shows In-Reply-To: <20050504230929.26536.qmail@web30812.mail.mud.yahoo.com> References: <20050504230929.26536.qmail@web30812.mail.mud.yahoo.com> Message-ID: <42795D16.202@earthlink.net> Could be that group dynamics are beginning to take center stage and the "selfish" generation will shortly find itself on the ebb. This could signal a change in paradigm but for how long is uncertain....group flow will only last as long as it takes before the tide begins to ebb and the "me" generation remakes itself into something more powerful than before. 
Gerry Michael Christopher wrote: >Gerry says: > > >>>Rather than placing blame, we should instead focus >>> >>> >on our ideals and goals that contribute to a group >rather than only to personal satisfaction. Incidentally >this seems to be the thrust of the t.v. show "Extreme >Makeover, Home Edition" where a group of volunteers >designs, builds, and decorates a home for worthy >clients. It's not much, but it's a beginning.<< > >--It's interesting to contrast those shows with the >cutthroat ones. I wonder which way our culture will >head as a whole? Dog-eat-dog, or community? > >Michael > > > >Discover Yahoo! >Find restaurants, movies, travel and more fun for the weekend. Check it out! >http://discover.yahoo.com/weekend.html > >_______________________________________________ >paleopsych mailing list >paleopsych at paleopsych.org >http://lists.paleopsych.org/mailman/listinfo/paleopsych > > > From checker at panix.com Thu May 5 14:55:40 2005 From: checker at panix.com (Premise Checker) Date: Thu, 5 May 2005 10:55:40 -0400 (EDT) Subject: [Paleopsych] MSNBC: Human evolution at the crossroads Message-ID: Human evolution at the crossroads Genetics, cybernetics complicate forecast for species http://www.msnbc.msn.com/id/7103668/ 5.5.2 By Alan Boyle Science editor Scientists are fond of running the evolutionary clock backward, using DNA analysis and the fossil record to figure out when our ancestors stood erect and split off from the rest of the primate evolutionary tree. But the clock is running forward as well. So where are humans headed? Evolutionary biologist Richard Dawkins says it's the question he's most often asked, and "a question that any prudent evolutionist will evade." But the question is being raised even more frequently as researchers study our past and contemplate our future. 
Paleontologists say that anatomically modern humans may have at one time shared the Earth with as many as three other closely related types - Neanderthals, Homo erectus and the dwarf hominids whose remains were discovered last year in Indonesia. Does evolutionary theory allow for circumstances in which "spin-off" human species could develop again? Some think the rapid rise of genetic modification could be just such a circumstance. Others believe we could blend ourselves with machines in unprecedented ways - turning natural-born humans into an endangered species. Present-day fact, not science fiction Such ideas may sound like little more than science-fiction plot lines. But trend-watchers point out that we're already wrestling with real-world aspects of future human development, ranging from stem-cell research to the implantation of biocompatible computer chips. The debates are likely to become increasingly divisive once all the scientific implications sink in. "These issues touch upon religion, upon politics, upon values," said Gregory Stock, director of the Program on Medicine, Technology and Society at the University of California at Los Angeles. "This is about our vision of the future, essentially, and we'll never completely agree about those things." The problem is, scientists can't predict with precision how our species will adapt to changes over the next millennium, let alone the next million years. That's why Dawkins believes it's imprudent to make a prediction in the first place. Others see it differently: In the book "Future Evolution," University of Washington paleontologist Peter Ward argues that we are making ourselves virtually extinction-proof by bending Earth's flora and fauna to our will. And assuming that the human species will be hanging around for at least another 500 million years, Ward and others believe there are a few most likely scenarios for the future, based on a reading of past evolutionary episodes and current trends. 
Where are humans headed? Here's an imprudent assessment of five possible paths, ranging from homogenized humans to alien-looking hybrids bred for interstellar travel. Unihumans: Will we all be assimilated? Biologists say that different populations of a species have to be isolated from each other in order for those populations to diverge into separate species. That's the process that gave rise to 13 different species of "Darwin's Finches" in the Galapagos Islands. But what if the human species is so widespread there's no longer any opening for divergence? Evolution is still at work. But instead of diverging, our gene pool has been converging for tens of thousands of years - and Stuart Pimm, an expert on biodiversity at Duke University, says that trend may well be accelerating. "The big thing that people overlook when speculating about human evolution is that the raw matter for evolution is variation," he said. "We are going to lose that variability very quickly, and the reason is not quite a genetic argument, but it's close. At the moment we humans speak something on the order of 6,500 languages. If we look at the number of languages we will likely pass on to our children, that number is 600." Cultural diversity, as measured by linguistic diversity, is fading as human society becomes more interconnected globally, Pimm argued. "I do think that we are going to become much more homogeneous," he said. Ken Miller, an evolutionary biologist at Brown University, agreed: "We have become a kind of animal monoculture." Is that such a bad thing? A global culture of Unihumans could seem heavenly if we figure out how to achieve long-term political and economic stability and curb population growth. That may require the development of a more "domesticated" society - one in which our rough genetic edges are smoothed out. But like other monocultures, our species could be more susceptible to quick-spreading diseases, as last year's bird flu epidemic illustrated. 
"The genetic variability that we have protects us against suffering from massive harm when some bug comes along," Pimm said. "This idea of breeding the super-race, like breeding the super-race of corn or rice or whatever - the long-term consequences of that could be quite scary." Environmental pressures wouldn't stop Even a Unihuman culture would have to cope with evolutionary pressures from the environment, the University of Washington's Peter Ward said. Some environmentalists say toxins that work like estrogens are already having an effect: Such agents, found in pesticides and industrial PCBs, have been linked to earlier puberty for women, increased incidence of breast cancer and lower sperm counts for men. "One of the great frontiers is going to be trying to keep humans alive in a much more toxic world," he observed from his Seattle office. "The whales of Puget Sound are the most toxic whales on Earth. Puget Sound is just a huge cesspool. Well, imagine if that goes global." Global epidemics or dramatic environmental changes represent just two of the scenarios that could cause a Unihuman society to crack, putting natural selection - or perhaps not-so-natural selection - back into the evolutionary game. Then what? Survivalistians: Coping with doomsday Surviving doomsday is a story as old as Noah's Ark, and as new as the post-bioapocalypse movie "28 Days Later." Catastrophes ranging from super-floods to plagues to nuclear war to asteroid strikes erase civilization as we know it, leaving remnants of humanity who go their own evolutionary ways. The classic Darwinian version of the story may well be H.G. Wells' "The Time Machine," in which humanity splits off into two species: the ruthless, underground Morlocks and the effete, surface-dwelling Eloi. At least for modern-day humans, the forces that lead to species spin-offs have been largely held in abeyance: Populations are increasingly in contact with each other, leading to greater gene-mixing. 
Humans are no longer threatened by predators their own size, and medicine cancels out inherited infirmities ranging from hemophilia to nearsightedness. "We are helping genes that would have dropped out of the gene pool," paleontologist Peter Ward observed. But in Wells' tale and other science-fiction stories, a civilization-shattering catastrophe serves to divide humanity into separate populations, vulnerable once again to selection pressures. For example, people who had more genetic resistance to viral disease would be more likely to pass on that advantage to their descendants. If different populations develop in isolation over many thousands of generations, it's conceivable that separate species would emerge. For example, that virus-resistant strain of post-humans might eventually thrive in the wake of a global bioterror crisis, while less hardy humans would find themselves quarantined in the world's safe havens. Patterns in the spread of the virus that causes AIDS may hint at earlier, less catastrophic episodes of natural selection, said Stuart Pimm, a conservation biologist at Duke University: "There are pockets of people who don't seem to become HIV-positive, even though they have a lot of exposure to the virus - and that may be because their ancestors survived the plague 500 years ago." Evolution, or devolution? If the catastrophe ever came, could humanity recover? In science fiction, that's an intriguingly open question. For example, Stephen Baxter's novel "Evolution" foresees an environmental-military meltdown so severe that, over the course of 30 million years, humans devolve into separate species of eyeless mole-men, neo-apes and elephant-people herded by their super-rodent masters. Even Ward gives himself a little speculative leeway in his book "Future Evolution," where a time-traveling human meets his doom 10 million years from now at the hands - or in this case, the talons - of a flock of intelligent killer crows. 
But Ward finds it hard to believe that even a global catastrophe would keep human populations isolated long enough for our species to split apart. "Unless we totally forget how to build a boat, we can quickly come back," Ward said. Even in the event of a post-human split-off, evolutionary theory dictates that one species would eventually subjugate, assimilate or eliminate their competitors for the top job in the global ecosystem. Just ask the Neanderthals. "If you have two species competing over the same ecological niche, it ends badly for one of them, historically," said Joel Garreau, the author of the forthcoming book "Radical Evolution." The only reason chimpanzees still exist today is that they "had the brains to stay up in the trees and not come down into the open grasslands," he noted. "You have this optimistic view that you're not going to see speciation (among humans), and I desperately hope that's right," Garreau said. "But that's not the only scenario." Numans: Rise of the superhumans We've already seen the future of enhanced humans, and his name is Barry Bonds. The controversy surrounding the San Francisco Giants slugger, and whether steroids played a role in the bulked-up look that he and other baseball players have taken on, is only a foretaste of what's coming as scientists find new genetic and pharmacological ways to improve performance. Developments in the field are coming so quickly that social commentator Joel Garreau argues that they represent a new form of evolution. This radical kind of evolution moves much more quickly than biological evolution, which can take millions of years, or even cultural evolution, which works on a scale of hundreds or thousands of years. How long before this new wave of evolution spawns a new kind of human? "Try 20 years," Garreau told MSNBC.com. 
In his latest book, "Radical Evolution," Garreau reels off a litany of high-tech enhancements, ranging from steroid Supermen, to camera-equipped flying drones, to pills that keep soldiers going without sleep or food for days. "If you look at the superheroes of the '30s and the '40s, just about all of the technologies they had exist today," he said. Three kinds of humans Such enhancements are appearing first on the athletic field and the battlefield, Garreau said, but eventually they'll make their way to the collegiate scene, the office scene and even the dating scene. "You're talking about three different kinds of humans: the enhanced, the naturals and the rest," Garreau said. "The enhanced are defined as those who have the money and enthusiasm to make themselves live longer, be smarter, look sexier. That's what you're competing against." In Garreau's view of the world, the naturals will be those who eschew enhancements for higher reasons, just as vegetarians forgo meat and fundamentalists forgo what they see as illicit pleasures. Then there's all the rest of us, who don't get enhanced only because they can't. "They loathe and despise the people who do, and they also envy them," Garreau said. Scientists acknowledge that some of the medical enhancements on the horizon could engender a "have vs. have not" attitude. "But I could be a smart ass and ask how that's different from what we have now," said Brown University's Ken Miller. Medical advances as equalizers Miller went on to point out that in the past, "advances in medical science have actually been great levelers of social equality." For example, age-old scourges such as smallpox and polio have been eradicated, thanks to public health efforts in poorer as well as richer countries. That trend is likely to continue as scientists learn more about the genetic roots of disease, he said. 
"In terms of making genetic modifications to ourselves, it's much more likely we'll start to tinker with genes for disease susceptibility. ... Maybe there would be a long-term health project to breed HIV-resistant people," he said. When it comes to discussing ways to enhance humans, rather than simply make up for disabilities, the traits targeted most often are longevity and memory. Scientists have already found ways to enhance those traits in mice. Imagine improvements that could keep you in peak working condition past the age of 100. Those are the sorts of enhancements you might want to pass on to your descendants - and that could set the stage for reproductive isolation and an eventual species split-off. "In that scenario, why would you want your kid to marry somebody who would not pass on the genes that allowed your grandchildren to have longevity, too?" the University of Washington's Peter Ward asked. But that would require crossing yet another technological and ethical frontier. Instant superhumans - or monsters? To date, genetic medicine has focused on therapies that work on only one person at a time. The effects of those therapies aren't carried on to future generations. For example, if you take muscle-enhancing drugs, or even undergo gene therapy for bigger muscles, that doesn't mean your children will have similarly big muscles. In order to make an enhancement inheritable, you'd have to have new code spliced into your germline stem cells - creating an ethical controversy of transcendent proportions. Tinkering with the germline could conceivably produce a superhuman species in a single generation - but could also conceivably create a race of monsters. "It is totally unpredictable," Ward said. "It's a lot easier to understand evolutionary happenstance." Even then, there are genetic traits that are far more difficult to produce than big muscles or even super-longevity - for instance, the very trait that defines us as humans. 
"It's very, very clear that intelligence is a pretty subtle thing, and it's clear that we don't have a single gene that turns it on or off," Miller said. When it comes to intelligence, some scientists say, the most likely route to our future enhancement - and perhaps our future competition as well - just might come from our own machines. Cyborgs: Merging with the machines Will intelligent machines be assimilated, or will humans be eliminated? Until a few years ago, that question was addressed only in science-fiction plot lines, but today the rapid pace of cybernetic change has led some experts to worry that artificial intelligence may outpace Homo sapiens' natural smarts. The pace of change is often stated in terms of Moore's Law, which says that the number of transistors packed into a square inch should double every 18 months. "Moore's Law is now on its 30th doubling. We have never seen that sort of exponential increase before in human history," said Joel Garreau, author of the book "Radical Evolution." In some fields, artificial intelligence has already bested humans - with Deep Blue's 1997 victory over world chess champion Garry Kasparov providing a vivid example. Three years later, computer scientist Bill Joy argued in an influential Wired magazine essay that we would soon face challenges from intelligent machines as well as from other technologies ranging from weapons of mass destruction to self-replicating nanoscale "gray goo." Joy speculated that a truly intelligent robot may arise by the year 2030. "And once an intelligent robot exists, it is only a small step to a robot species - to an intelligent robot that can make evolved copies of itself," he wrote. Assimilating the robots To others, it seems more likely that we could become part-robot ourselves: We're already making machines that can be assimilated - including prosthetic limbs, mechanical hearts, cochlear implants and artificial retinas. Why couldn't brain augmentation be added to the list? 
"The usual suggestions are that we'll design improvements to ourselves," said Seth Shostak, senior astronomer at the SETI Institute. "We'll put additional chips in our head, and we won't get lost, and we'll be able to do all those math problems that used to befuddle us." Shostak, who writes about the possibilities for cybernetic intelligence in his book "Sharing the Universe," thinks that's likely to be a transitional step at best. "My usual response is that, well, you can improve horses by putting four-cylinder engines in them. But eventually you can do without the horse part," he said. "These hybrids just don't strike me as having a tremendous advantage. It just means the machines aren't good enough." Back to biology University of Washington paleontologist Peter Ward also believes human-machine hybrids aren't a long-term option, but for different reasons. "When you talk to people in the know, they think cybernetics will become biology," he said. "So you're right back to biology, and the easiest way to make changes is by manipulating genomes." It's hard to imagine that robots would ever be given enough free rein to challenge human dominance, but even if they did break free, Shostak has no fear of a "Terminator"-style battle for the planet. "I've got a couple of goldfish, and I don't wake up in the morning and say, 'I'm gonna kill these guys.' ... I just leave 'em alone," Shostak said. "I suspect the machines would very quickly get to a level where we were kind of irrelevant, so I don't fear them. But it does mean that we're no longer No. 1 on the planet, and we've never had that happen before." Astrans: Turning into an alien race If humans survive long enough, there's one sure way to grow new branches on our evolutionary family tree: by spreading out to other planets. 
Habitable worlds beyond Earth could be a 23rd century analog to the Galapagos Islands, Charles Darwin's evolutionary laboratory: just barely close enough for travelers to get to, but far enough away that there'd be little gene-mixing with the parent species. "If we get off to the stars, then yes, we will have speciation," said University of Washington paleontologist Peter Ward. "But can we ever get off the Earth?" Currently, the closest star system thought to have a planet is Epsilon Eridani, 10.5 light-years away. Even if spaceships could travel at 1 percent the speed of light - an incredible 6.7 million mph - it would take more than a millennium to get there. Even Mars might be far enough: If humans established a permanent settlement there, the radically different living conditions would change the evolutionary equation. For example, those who are born and raised in one-third of Earth's gravity could never feel at home on the old "home planet." It wouldn't take long for the new Martians to become a breed apart. As for distant stars, the SETI Institute's Seth Shostak has already been thinking through the possibilities: # Build a big ark: Build a spaceship big enough to carry an entire civilization to the destination star system. The problem is, that environment might be just too unnatural for natural humans. "If you talk to the sociologists, they'll say that it will not work. ... You'll be lucky if anybody's still alive after the third generation," Shostak said. # Go to warp speed: Somehow we discover a wormhole or find a way to travel at relativistic speeds. "That sounds OK, except for the fact that nobody knows how to do it," Shostak said. # Enter the Astrans: Humans are genetically engineered to tolerate ultra long-term hibernation aboard robotic ships. Once the ship reaches its destination, these "Astrans" are awakened to start the work of settling a new world. "That's one possibility," Shostak said. 
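The travel-time figures quoted earlier in this article (Epsilon Eridani at 10.5 light-years, a ship at 1 percent of light speed, "an incredible 6.7 million mph," "more than a millennium") are easy to sanity-check. A minimal sketch in Python, with the approximate speed of light in mph as the only assumed constant:

```python
# Sanity-check the article's interstellar travel arithmetic.
c_mph = 670_616_629            # speed of light, approx. miles per hour
ship_speed_mph = 0.01 * c_mph  # 1 percent of light speed

# Ship speed in millions of mph (the article says 6.7 million mph)
print(round(ship_speed_mph / 1e6, 1))   # 6.7

# At 0.01 c, each light-year takes 100 years of travel
distance_ly = 10.5                      # Epsilon Eridani
travel_years = distance_ly / 0.01
print(travel_years)                     # 1050.0 -- "more than a millennium"
```

Both numbers come out as the article states, which is why even the nearest candidate star system puts natural human travelers out of the picture.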
The ultimate approach would be to send the instructions for making humans rather than the humans themselves, Shostak said. "We're not going to put anything in a rocket, we're just going to beam ourselves to the stars," he explained. "The only trouble is, if there's nobody on the other end to put you back together, there's no point." So are we back to square one? Not necessarily, Shostak said. Setting up the receivers on other stars is no job for a human, "but the machines could make it work." In fact, if any other society is significantly further along than ours, such a network might be up and running by now. "The machines really could develop large tracts of galactic real estate, whereas it's really hard for biology to travel," Shostak said. It all seems inconceivable, but if humans really are extinction-proof - if they manage to survive global catastrophes, genetic upheavals and cybernetic challenges - who's to say what will be inconceivable millions of years from now? Two intelligent species, human and machine, just might work together to spread life through the universe. "If you were sufficiently motivated," Shostak said, "you could in fact keep it going forever." From checker at panix.com Thu May 5 14:56:01 2005 From: checker at panix.com (Premise Checker) Date: Thu, 5 May 2005 10:56:01 -0400 (EDT) Subject: [Paleopsych] NYT: Chimeras on the Horizon, but Don't Expect Centaurs Message-ID: Chimeras on the Horizon, but Don't Expect Centaurs New York Times, 5.5.3 http://www.nytimes.com/2005/05/03/science/03chim.html By [1]NICHOLAS WADE Common ground for ethical research on human embryonic stem cells may have been laid by the National Academy of Sciences in the well-received guidelines it proposed last week. But if research on human embryonic stem cells ever gets going, people will be hearing a lot more about chimeras, creatures composed of more than one kind of cell. The world of chimeras holds weirdnesses that may require some getting used to. 
The original chimera, a tripartite medley of lion, goat and snake, was a mere monster, but mythology is populated with half-human chimeras - centaurs, sphinxes, werewolves, minotaurs and mermaids, and the gorgon Medusa. These creatures hold generally sinister powers, as if to advertise the pre-Darwinian notion that species are fixed and penalties are severe for transgressing the boundaries between them. Biologists have been generating chimeras for years, though until now of a generally bland variety. If you mix the embryonic cells of a black mouse and a white mouse, you get a patchwork mouse, in which the cells from the two donors contribute to the coat and to tissues throughout the body. Cells can also be added at a later stage to specific organs; people who carry pig heart valves are, at least technically, chimeric. The promise of embryonic stem cells is that since all the tissues of the body are derived from them, they are a kind of universal clay. If biologists succeed in learning how to shape the clay into specific organs, like pancreas glands, heart muscle or kidneys, physicians may be able to provide replacement parts on demand. Developing these new organs, and testing them to the standards required by the Food and Drug Administration, will require growing human organs in animals. Such creations - of pigs with human hearts, monkeys with human larynxes - are likely to be unsettling to many. "I think people would be horrified," said Dr. William Hansen, an expert in mythology at Indiana University. Chimeras grip the imagination because people are both fascinated and repulsed by the defiance of natural order. "They promote a sense of wonder and awe and for many of us that is an enjoyable feeling; they are a safe form of danger as in watching a scary movie," Dr. Hansen said. From the biologists' point of view, animals made to grow human tissues do not really raise novel issues because they can be categorized as animals with added human parts. 
Biologists are more concerned about animals in which human cells have become seeded throughout the system. "The mixing of species is something people do worry about and their fears need to be addressed," said Dr. Richard O. Hynes of the Massachusetts Institute of Technology, the co-chairman of the National Academy of Sciences committee that issued the research guidelines. Foreseeing the need for chimeras if stem cell research gets near to therapy, Dr. Hynes's committee delved into the ethics of chimera manufacture, defining the two cases in which human-animal chimeras could raise awkward issues. One involves incorporating human cells into the germ line; the other involves the human brain, creating a human or half-human mind imprisoned in an animal body. In the case of human cells' invading the germ line, the chimeric animals might then carry human eggs and sperm, and in mating could therefore generate a fertilized human egg. Hardly anyone would desire to be conceived by a pair of mice. To forestall such discomforting possibilities, the committee ruled that chimeric animals should not be allowed to mate. Still, there may in the future be good reason to generate mice that produce human oocytes, as the unfertilized egg is called. Tissues made from embryonic stem cells are likely to be perceived as foreign by the patient's immune system. One way around this problem is to create the embryonic stem cells from a patient's own tissues, by transferring a nucleus from the patient's skin cell into a human oocyte whose own nucleus has been removed. These nuclear transfers, which are also the way that cloned animals are made, are at present highly inefficient and require some 200 oocytes for each successful cloning. Acquiring oocytes from volunteers is not a trivial procedure, and the academy's recommendation that women who volunteer should not be paid is unlikely to increase supply. Chimeric mice that make human oocytes could be the answer.
There are also sound scientific reasons for creating mice with human brain cells, an experiment that has long been contemplated by Dr. Irving Weissman of Stanford. Many serious human diseases arise through the loss of certain types of brain cell. To test if these can be replaced with human neural stem cells, Dr. Weissman injected human brain cells into a mouse embryo and showed that they followed the rules for mouse neural stem cells, migrating to the olfactory bulb to create a regular stream of new odor-detecting neurons. The mice may have been perplexed by their deficient sense of smell but probably not greatly so because human cells constituted less than 1 percent of their brain. Dr. Weissman decided it would be useful to have a mouse with a much larger percentage of human brain cells, but he sought ethical guidance before trying the experiment. He plans to let such mice develop as fetuses and to curtail the experiment before birth, to see if their human brain cells have arranged themselves in the architecture of a mouse brain or human brain. Given the nine months it takes for a human brain to be constructed, it seems unlikely that the developmental program of the human neurons would have time to unfold very far in the 20-day gestation of a mouse. Contrary to the plot of every good horror movie, the biologists' chimera cookbook contains only recipes of medical interest. But if there were no limits, could they in fact turn chimeras of myth into reality? That depends on the creature. If embryonic cells from human and horse were mixed together, the cells of each species would try to contribute to each part of the body, as in the patchwork mouse, but in this case with goals so incompatible it is hard to see any viable creature being formed. Centaurs, in any case, have six limbs, and that would be fine for an insect but violates the standard mammalian body plan.
A much greater chance of creating a viable chimeric creature would come from injecting human embryonic stem cells into a monkey or ape. For this reason the academy committee has firmly ruled out such experiments as unethical. But to continue a little on the path of fantasy, humans are still very similar to chimpanzees, their closest surviving cousins, and an embryo constructed of cells from each may be viable enough to be born. This chimerical creature would probably not be as enjoyable as the chimeras of mythology but more of a problem human - a Caliban-like personage with bad manners and difficult habits. "If something were half human and half animal, what would our moral responsibilities be?" says Richard Doerflinger of the United States Conference of Catholic Bishops. "It might be immoral to kill such a creature. It's wrong to create creatures whose moral stature we are perplexed about." Evidently the first rule of chimeric chemistry is not to make creatures whose behavior straddles the perceived division between the human and animal worlds. References 1. http://query.nytimes.com/search/query?ppds=bylL&v1=NICHOLAS%20WADE&fdq=19960101&td=sysdate&sort=newest&ac=NICHOLAS%20WADE&inline=nyt-per From checker at panix.com Thu May 5 16:25:53 2005 From: checker at panix.com (Premise Checker) Date: Thu, 5 May 2005 12:25:53 -0400 (EDT) Subject: [Paleopsych] NYT: Ugly Children May Get Parental Short Shrift Message-ID: Ugly Children May Get Parental Short Shrift New York Times, 5.5.3 http://www.nytimes.com/2005/05/03/health/03ugly.html [This is the most e-mailed article at the NYT today. Is this really surprising news?] By NICHOLAS BAKALAR Parents would certainly deny it, but Canadian researchers have made a startling assertion: parents take better care of pretty children than they do ugly ones. Researchers at the University of Alberta carefully observed how parents treated their children during trips to the supermarket. 
They found that physical attractiveness made a big difference. The researchers noted whether the parents belted their youngsters into the grocery cart seat, how often the parents' attention lapsed and the number of times the children were allowed to engage in potentially dangerous activities like standing up in the shopping cart. They also rated each child's physical attractiveness on a 10-point scale. The findings, not yet published, were presented at the Warren E. Kalbach Population Conference in Edmonton, Alberta. When it came to buckling up, pretty and ugly children were treated in starkly different ways, with seat belt use increasing in direct proportion to attractiveness. When a woman was in charge, 4 percent of the homeliest children were strapped in compared with 13.3 percent of the most attractive children. The difference was even more acute when fathers led the shopping expedition - in those cases, none of the least attractive children were secured with seat belts, while 12.5 percent of the prettiest children were. Homely children were also more often out of sight of their parents, and they were more often allowed to wander more than 10 feet away. Age - of parent and child - also played a role. Younger adults were more likely to buckle their children into the seat, and younger children were more often buckled in. Older adults, in contrast, were inclined to let children wander out of sight and more likely to allow them to engage in physically dangerous activities. Although the researchers were unsure why, good-looking boys were usually kept in closer proximity to the adults taking care of them than were pretty girls. The researchers speculated that girls might be considered more competent and better able to act independently than boys of the same age. The researchers made more than 400 observations of child-parent interactions in 14 supermarkets. Dr. W.
Andrew Harrell, executive director of the Population Research Laboratory at the University of Alberta and the leader of the research team, sees an evolutionary reason for the findings: pretty children, he says, represent the best genetic legacy, and therefore they get more care. Not all experts agree. Dr. Frans de Waal, a professor of psychology at Emory University, said he was skeptical. "The question," he said, "is whether ugly people have fewer offspring than handsome people. I doubt it very much. If the number of offspring are the same for these two categories, there's absolutely no evolutionary reason for parents to invest less in ugly kids." Dr. Robert Sternberg, professor of psychology and education at Yale, said he saw problems in Dr. Harrell's method and conclusions, for example, not considering socioeconomic status. "Wealthier parents can feed, clothe and take care of their children better due to greater resources," Dr. Sternberg said, possibly making them more attractive. "The link to evolutionary theory is speculative." But Dr. Harrell said the importance of physical attractiveness "cuts across social class, income and education." "Like lots of animals, we tend to parcel out our resources on the basis of value," he said. "Maybe we can't always articulate that, but in fact we do it. There are a lot of things that make a person more valuable, and physical attractiveness may be one of them." From checker at panix.com Thu May 5 16:26:22 2005 From: checker at panix.com (Premise Checker) Date: Thu, 5 May 2005 12:26:22 -0400 (EDT) Subject: [Paleopsych] Wired News: Augmenting the Animal Kingdom Message-ID: Augmenting the Animal Kingdom http://wired.com/news/print/0,1294,67349,00.html By [21]Lakshmi Sandhana 02:00 AM May. 03, 2005 PT Natural evolution has produced the eye, butterfly wings and other wonders that would put any inventor to shame. But who's to say evolution couldn't be improved with the help of a little technology? 
So argues James Auger in his controversial and sometimes unsettling book, Augmented Animals. A designer and former research associate with MIT Media Lab Europe, [23]Auger envisions animals, birds, reptiles and even fish becoming appreciative techno-geeks, using specially engineered gadgets to help them overcome their evolutionary shortcomings, promote their chances of survival or just simply lead easier and more comfortable lives. On tap for the future: Rodents zooming around with night-vision survival goggles, squirrels hoarding nuts using GPS locators and fish armed with metal detectors to avoid the angler's hook. Auger's current ambitions are relatively modest. He's developing an LED light that aims to translate tail wagging into plain English. The device fits on a dog's tail, and flashes text messages when the tail waves through the air. He plans to have a working product on display at [26]Harrods in London by September. "I'm serious about the ideas behind the products," says Auger. "I think that the fact that some of them could be realized means that as concepts they tread the scary line between fact and fiction and therefore are taken a little more seriously. If one person in a hundred is inspired to think about the philosophical issues behind the ideas and the other 99 read it like Calvin and Hobbes, I'd consider that a success." Auger admits that his ideas are mostly conceptual in regard to animals living in the wild. But for tame and domesticated companions, some may not be so far-fetched. For example, a bird cage could be built using existing aerodynamic testing technology that might give captive birds the illusion of long-distance flight. And odor respirators could filter out undesirable smells for dogs and other animals with highly developed olfactory senses. Technology augmentations have already been tried in agribusiness, where an animal's happiness can lead directly to bigger profits.
A few years ago, farm researchers tried fitting hens with [29]red plastic contact lenses to reduce aggression caused by tight caging and overcrowding. The idea was quickly [30]dropped when it was found to cause more problems than it solved. Future technologies, though, could bear fruit. For example, some theorists have floated a Matrix-like scenario that would use direct stimulation of the brain to fool livestock about the reality of their living conditions. "To offset the cruelty of factory-farming, routine implants of smart microchips in the pleasure centers may be feasible," says [31]David Pearce, associate editor of the [32]Journal of Evolution and Technology. "Since there is no physiological tolerance to pure pleasure, factory-farmed animals could lead a lifetime of pure bliss instead of misery. Unnatural? Yes, but so is factory farming. Immoral? No, certainly not compared to the terrible suffering we inflict on factory-farmed animals today." Not everyone agrees that fitting animals with invasive and experimental gadgetry is desirable, or even ethical. Jeffrey R. Harrow, author of [33]The Harrow Technology Report, doesn't think the idea of augmenting animals is a good one. "Any time we mess with nature's evolutionary process we run the very real risk of changing things for the worse since we have very limited scope in determining the longer term results," Harrow says. "With the possible exception of endangered species and probably not even those because our modifications would by definition change the species, we must be exceedingly careful or we might change our biosphere in ways later generations might abhor." While the debate over animal augmentation is still in its infancy, it will likely only grow along with advances in technology.
Ultimately, some theorists argue, humans may have to decide whether they have a moral duty to help animals cross the divide that separates the species by giving them the ability to acquire higher mental functions -- a theme explored in apocalyptic films such as Planet of the Apes and The Day of the Dolphin. "With children, the insane and the demented we are obliged, when we can, to help these 'disabled citizens' to achieve or regain their full self-determination," says [34]Dr. James J. Hughes, executive director of the [35]Institute for Ethics and Emerging Technologies and author of Citizen Cyborg. "We have the same responsibility to enhance the intelligence and communication abilities of great apes, and possibly also of dolphins and elephants, when we have the means to do so. Once they are sufficiently enhanced, they can make decisions for themselves, including removing their augmentation." References 21. http://wired.com/news/feedback/mail/1,2330,0-603-67349,00.html 23. http://www.auger-loizeau.com/ 26. http://www.harrods.com/msib21 29. http://www.upc-online.org/RedLens.html 30. http://www.upc-online.org/s96redlens.html 31. http://www.hedweb.com/ 32. http://jetpress.org/ 33. http://www.TheHarrowGroup.com/ 34. http://www.changesurfer.com/Hughes.html 35. http://ieet.org/ From checker at panix.com Thu May 5 16:26:36 2005 From: checker at panix.com (Premise Checker) Date: Thu, 5 May 2005 12:26:36 -0400 (EDT) Subject: [Paleopsych] NY Press: Matt Taibbi: Flathead: The peculiar genius of Thomas L. Friedman.
Message-ID: New York's Premier Alternative Newspaper. Arts, Music, Food, Movies and Opinion http://www.nypress.com/18/16/news&columns/taibbi.cfm Vol 18 - Issue 18 - May 4-10, 2005 I think it was about five months ago that Press editor Alex Zaitchik whispered to me in the office hallway that Thomas Friedman had a new book coming out. All he knew about it was the title, but that was enough; he approached me with the chilled demeanor of a British spy who has just discovered that Hitler was secretly buying up the world's manganese supply. Who knew what it meant -- but one had to assume the worst. "It's going to be called The Flattening," he whispered. Then he stood there, eyebrows raised, staring at me, waiting to see the effect of the news when it landed. I said nothing. It turned out Alex had bad information; the book that ultimately came out would be called The World Is Flat. It didn't matter. Either version suggested the same horrifying possibility. Thomas Friedman in possession of 500 pages of ruminations on the metaphorical theme of flatness would be a very dangerous thing indeed. It would be like letting a chimpanzee loose in the NORAD control room; even the best-case scenario is an image that could keep you awake well into your 50s. So I tried not to think about it. But when I heard the book was actually coming out, I started to worry. Among other things, I knew I would be asked to write the review. The usual ratio of Friedman criticism is 2:1, i.e., two human words to make sense of each single word of Friedmanese. Friedman is such a genius of literary incompetence that even his most innocent passages invite feature-length essays. I'll give you an example, drawn at random from The World Is Flat. On page 174, Friedman is describing a flight he took on Southwest Airlines from Baltimore to Hartford, Connecticut.
(Friedman never forgets to name the company or the brand name; if he had written The Metamorphosis, Gregor Samsa would have awoken from uneasy dreams in a Sealy Posturepedic.) Here's what he says:

I stomped off, went through security, bought a Cinnabon, and glumly sat at the back of the B line, waiting to be herded on board so that I could hunt for space in the overhead bins.

Forget the Cinnabon. Name me a herd animal that hunts. Name me one. This would be a small thing were it not for the overall pattern. Thomas Friedman does not get these things right even by accident. It's not that he occasionally screws up and fails to make his metaphors and images agree. It's that he always screws it up. He has an anti-ear, and it's absolutely infallible; he is a Joyce or a Flaubert in reverse, incapable of rendering even the smallest details without genius. The difference between Friedman and an ordinary bad writer is that an ordinary bad writer will, say, call some businessman a shark and have him say some tired, uninspired piece of dialogue: Friedman will have him spout it. And that's guaranteed, every single time. He never misses. On an ideological level, Friedman's new book is the worst, most boring kind of middlebrow horseshit. If its literary peculiarities could somehow be removed from the equation, The World Is Flat would appear as no more than an unusually long pamphlet replete with the kind of plug-filled, free-trader leg-humping that passes for thought in this country. It is a tale of a man who walks 10 feet in front of his house armed with a late-model Blackberry and comes back home five minutes later to gush to his wife that hospitals now use the internet to outsource the reading of CAT scans. Man flies on planes, observes the wonders of capitalism, says we're not in Kansas anymore. (He actually says we're not in Kansas anymore.) That's the whole plot right there. If the underlying message is all that interests you, read no further, because that's all there is.
It's impossible to divorce The World Is Flat from its rhetorical approach. It's not for nothing that Thomas Friedman is called "the most important columnist in America today." That it's Friedman's own colleague at the New York Times (Walter Russell Mead) calling him this, on the back of Friedman's own book, is immaterial. Friedman is an important American. He is the perfect symbol of our culture of emboldened stupidity. Like George Bush, he's in the reality-making business. In the new flat world, argument is no longer a two-way street for people like the president and the country's most important columnist. You no longer have to worry about actually convincing anyone; the process ends when you make the case. Things are true because you say they are. The only thing that matters is how sure you sound when you say it. In politics, this allows America to invade a castrated Iraq in self-defense. In the intellectual world, Friedman is now probing the outer limits of this trick's potential, and it's absolutely perfect, a stroke of genius, that he's choosing to argue that the world is flat. The only thing that would have been better would be if he had chosen to argue that the moon was made of cheese. And that's basically what he's doing here. The internet is speeding up business communications, and global labor markets are more fluid than ever. Therefore, the moon is made of cheese. That is the rhetorical gist of The World Is Flat. It's brilliant. Only an America-hater could fail to appreciate it. Start with the title. The book's genesis is a conversation Friedman has with Nandan Nilekani, the CEO of Infosys. Nilekani casually mutters to Friedman: "Tom, the playing field is being leveled." To you and me, an innocent throwaway phrase -- the level playing field being, after all, one of the most oft-repeated stock ideas in the history of human interaction. Not to Friedman.
Ten minutes after his talk with Nilekani, he is pitching a tent in his company van on the road back from the Infosys campus in Bangalore:

As I left the Infosys campus that evening along the road back to Bangalore, I kept chewing on that phrase: "The playing field is being leveled." What Nandan is saying, I thought, is that the playing field is being flattened... Flattened? Flattened? My God, he's telling me the world is flat!

This is like three pages into the book, and already the premise is totally fucked. Nilekani said level, not flat. The two concepts are completely different. Level is a qualitative idea that implies equality and competitive balance; flat is a physical, geographic concept that Friedman, remember, is openly contrasting -- ironically, as it were -- with Columbus's discovery that the world is round. Except for one thing. The significance of Columbus's discovery was that on a round earth, humanity is more interconnected than on a flat one. On a round earth, the two most distant points are closer together than they are on a flat earth. But Friedman is going to spend the next 470 pages turning the "flat world" into a metaphor for global interconnectedness. Furthermore, he is specifically going to use the word round to describe the old, geographically isolated, unconnected world. "Let me... share with you some of the encounters that led me to conclude that the world is no longer round," he says. He will literally travel backward in time, against the current of human knowledge. To recap: Friedman, imagining himself Columbus, journeys toward India. Columbus, he notes, traveled in three ships; Friedman "had Lufthansa business class." When he reaches India -- Bangalore to be specific -- he immediately plays golf. His caddy, he notes with interest, wears a cap with the 3M logo. Surrounding the golf course are billboards for Texas Instruments and Pizza Hut. The Pizza Hut billboard reads: "Gigabites of Taste."
Because he sees a Pizza Hut ad on the way to a golf course, something that could never happen in America, Friedman concludes: "No, this definitely wasn't Kansas." After golf, he meets Nilekani, who casually mentions that the playing field is level. A nothing phrase, but Friedman has traveled all the way around the world to hear it. Man travels to India, plays golf, sees Pizza Hut billboard, listens to Indian CEO mutter small talk, writes 470-page book reversing the course of 2000 years of human thought. That he misattributes his thesis to Nilekani is perfect: Friedman is a person who not only speaks in malapropisms, he also hears malapropisms. Told level; heard flat. This is the intellectual version of Far Out Space Nuts, when NASA repairman Bob Denver sets a whole sitcom in motion by pressing "launch" instead of "lunch" in a space capsule. And once he hits that button, the rocket takes off. And boy, does it take off. Predictably, Friedman spends the rest of his huge book piling one insane image on top of the other, so that by the end -- and I'm not joking here -- we are meant to understand that the flat world is a giant ice-cream sundae that is more beef than sizzle, in which everyone can fit his hose into his fire hydrant, and in which most but not all of us are covered with a mostly good special sauce. Moreover, Friedman's book is the first I have encountered, anywhere, in which the reader needs a calculator to figure the value of the author's metaphors. God strike me dead if I'm joking about this. Judge for yourself. After the initial passages of the book, after Nilekani has forgotten Friedman and gone back to interacting with the sane, Friedman begins constructing a monstrous mathematical model of flatness.
The baseline argument begins with a lengthy description of the "ten great flatteners," which is basically a highlight reel of globalization tomahawk dunks from the past two decades: the collapse of the Berlin Wall, the Netscape IPO, the pre-Y2K outsourcing craze, and so on. Everything that would give an IBM human resources director a boner, that's a flattener. The catch here is that Flattener #10 is new communications technology: "Digital, Mobile, Personal, and Virtual." These technologies Friedman calls "steroids," because they are "amplifying and turbocharging all the other flatteners." According to the mathematics of the book, if you add an IPac to your offshoring, you go from running to sprinting with gazelles and from eating with lions to devouring with them. Although these 10 flatteners existed already by the time Friedman wrote The Lexus and the Olive Tree -- a period of time referred to in the book as Globalization 2.0, with Globalization 1.0 beginning with Columbus -- they did not come together to bring about Globalization 3.0, the flat world, until the 10 flatteners had, with the help of the steroids, gone through their "Triple Convergence." The first convergence is the merging of software and hardware to the degree that makes, say, the Konica Minolta Bizhub (the product featured in Friedman's favorite television commercial) possible. The second convergence came when new technologies combined with new ways of doing business. The third convergence came when the people of certain low-wage industrial countries -- India, Russia, China, among others -- walked onto the playing field. Thanks to steroids, incidentally, they occasionally are "not just walking" but "jogging and even sprinting" onto the playing field. Now let's say that the steroids speed things up by a factor of two. It could be any number, but let's be conservative and say two.
The whole point of the book is to describe the journey from Globalization 2.0 (Friedman's first bestselling book) to Globalization 3.0 (his current bestselling book). To get from 2.0 to 3.0, you take 10 flatteners, and you have them converge -- let's say this means squaring them, because that seems to be the idea -- three times. By now, the flattening factor is about a thousand. Add a few steroids in there, and we're dealing with a flattening factor somewhere in the several thousands at any given page of the book. We're talking about a metaphor that mathematically adds up to a four-digit number. If you're like me, you're already lost by the time Friedman starts adding to this numerical jumble his very special qualitative descriptive imagery. For instance:

And now the icing on the cake, the ubersteroid that makes it all mobile: wireless. Wireless is what allows you to take everything that has been digitized, made virtual and personal, and do it from anywhere.

Ladies and gentlemen, I bring you a Thomas Friedman metaphor, a set of upside-down antlers with four thousand points: the icing on your uber-steroid-flattener-cake! Let's speak Friedmanese for a moment and examine just a few of the notches on these antlers (Friedman, incidentally, measures the flattening of the world in notches, i.e. "The flattening process had to go another notch"; I'm not sure where the notches go in the flat plane, but there they are.) Flattener #1 is actually two flatteners, the collapse of the Berlin Wall and the spread of the Windows operating system. In a Friedman book, the reader naturally seizes up in dread the instant a suggestive word like "Windows" is introduced; you wince, knowing what's coming, the same way you do when Leslie Nielsen orders a Black Russian. And Friedman doesn't disappoint.
His description of the early 90s: The walls had fallen down and the Windows had opened, making the world much flatter than it had ever been -- but the age of seamless global communication had not yet dawned. How the fuck do you open a window in a fallen wall? More to the point, why would you open a window in a fallen wall? Or did the walls somehow fall in such a way that they left the windows floating in place to be opened? Four hundred and seventy-three pages of this, folks. Is there no God? From checker at panix.com Thu May 5 16:26:48 2005 From: checker at panix.com (Premise Checker) Date: Thu, 5 May 2005 12:26:48 -0400 (EDT) Subject: [Paleopsych] Slate: Robert Wright: Tom Friedman: The Incredible Shrinking Planet Message-ID: Robert Wright: Tom Friedman: The Incredible Shrinking Planet: What liberals can learn from Thomas Friedman's new book http://slate.msn.com/id/2116899/ Posted Monday, April 18, 2005, at 12:30 PM PT [23]Tom Friedman Is Right Again, Dammit! Tom Friedman What do you call it when multinational corporations scan the world for cheap labor, find poor people in developing nations, and pay them a fraction of America's minimum wage? A common answer on the left is "exploitation." For Thomas Friedman the answer is "collaboration"--or "empowering individuals in the developing world as never before." Friedman has written another destined-to-be-a-best-seller, destined-to-annoy-many-leftists-even-though-he's-a-liberal book, The World Is Flat. Readers of Friedman's 1998 The Lexus and the Olive Tree may ask: Why another best-selling, left-annoying Friedman book on globalization? Friedman argues that in the last few years, while we were distracted by Osama Bin Laden's transformation of the political landscape, a whole new phase of globalization was taking shape. 
Fueled by Internet-friendly software and cheap fiber optics, it features the fine-grained and far-flung division of data-related labor, often with little need for hierarchical, centralized control; and it subjects yesterday's powerhouses to competition from upstarts. "Globalization 3.0 is shrinking the world from a size small to a size tiny and flattening the playing field at the same time," bringing a "newfound power for individuals to collaborate and compete globally." This theme will get the book read in business class, but the reason leftists back in coach should read it has more to do with Osama Bin Laden's transformation of the political landscape. Islamist terrorism has been a godsend to the American right, especially in foreign policy. President Bush has sold a Manichaean master narrative that fuses neoconservatism with paleoconservative hawkism, the unifying upshot being the importance of invading countries and of disregarding, if not subverting, multilateral institutions. If the left is to develop a rival narrative, it will have to honestly address the realities of both globalization and terrorism. Friedman's book portrays both acutely--but that's not the only reason it's essential reading for the people it will most aggravate. It also contains the ingredients of a powerful liberal narrative, one that harnesses the logic of globalization to counter Bush's rhetoric in foreign and, for that matter, domestic policy. Part of this narrative Friedman develops, and part of it he leaves undeveloped and might even reject as too far left. But so what? In a flat world, Pulitzer Prize-winning New York Times columnists don't hand down stone tablets from mountaintops. They just start conversations that ripple through webzines and into the decentralized, newly influential blogosphere. 
It's kind of like open-source software, one of Friedman's examples of how easily divisions of digital labor can arise: Friedman writes some Friedman code, and left-of-Friedman liberals write some left-of-Friedman code, and eventually an open-source liberal narrative may coalesce. Feel empowered? Let's get cracking! These days hardly anyone accepts the label "anti-globalization." Most leftists now grant that you can't stop the globalization juggernaut; the best you can do is guide it. Friedman's less grim view suggests that, if you look at things from the standpoint of humanity as a whole--a standpoint many leftists purport to hold--globalization may actually be a good thing. He shows us some of globalization's beneficiaries--such as Indians who take "accent neutralization" classes and who, so far as I can tell, are as decent and worthy as the American airline reservation clerks and tech-support workers whose jobs they're taking (and who seem to prefer "exploitation" to nonexploitation). What's more, even as some Americans are losing, other Americans are winning, via cheaper airline tickets, more tech support, whatever. So, with net gains outweighing net losses, it's a non-zero-sum game, with a positive-sum outcome--a good thing on balance, at least from a global moral standpoint. (I've [27]argued that this is the basic story of history: Technological evolution allows the playing of more complex, more far-flung non-zero-sum games, and political structures adapt to this impetus.) Even globalization's downsides--such as displaced American workers--can have an upside for liberals in political terms. A churning workforce strengthens the case for the kind of safety net that Democrats champion and Republicans resist. (Globalization-induced jitters may help explain why President Bush's plan to make Social Security less secure hasn't captured the nation's imagination.) 
Friedman outlines an agenda of "compassionate flatism" that includes portable, subsidized health care, wage insurance, and subsidies for college and vocational school. You can argue about the details, and you can push them to the left. (He notes that corporations like to put offices and factories in countries with universal health care.) But this is clearly a Democratic agenda, and, as more and more white-collar jobs move abroad, its appeal to traditionally Republican voters should grow. Globalization's domestic disruptions can also be softened by global institutions. As the sociologist Douglas Massey argues in his just-published liberal manifesto Return of the L Word, the World Trade Organization, though reviled on the far left as a capitalist tool, could, with American leadership, use its clout to enforce labor standards abroad that are already embraced by the U.N.'s toothless International Labor Organization. For example: the right of workers everywhere to bargain collectively. (Workers of the world unite.) Friedman doesn't emphasize this sort of leftish global governance. Apparently he thinks Globalization 3.0 will enervate international institutions as much as national ones. The WTO will "become less important" because globalization will "be increasingly driven by the individuals who understand the flat world." Time will tell. My own view is that a flat world can help American liberals network with like-minded people in other countries to shape nascent international bodies. (Massey shows that the WTO, in response to left-wing feedback, has grown more receptive to environmentalist constraints on trade.) But the main leftward amendment to Friedman's source code I'd make is in a different realm of foreign policy. As Microsoft said of Sun's Java, I'd like to "embrace and extend" his belief that globalization is conducive to peace and freedom. 
Friedman persuasively updates his Lexus-and-the-Olive-Tree argument that economic interdependence makes war costlier for nations and hence less likely. He's heard the counterargument--"That's what they said before World War I!"--and he concedes that a big war could happen. But he shows that the pre-World War I era didn't have this kind of interdependence--the fine-grained and far-flung division of labor orchestrated by Toyota, Wal-Mart, et al. This is "supply chaining"--"collaborating horizontally--among suppliers, retailers, and customers--to create value." For example: The hardware in a Dell Inspiron 600m laptop comes from factories in the Philippines, Costa Rica, Malaysia, China, South Korea, Taiwan, Germany, Japan, Mexico, Thailand, Singapore, Indonesia, India, and Israel; the software is designed in America and elsewhere. The corporations that own or operate these factories are based in the United States, China, Taiwan, Germany, South Korea, Japan, Ireland, Thailand, Israel, and Great Britain. And Michael Dell personally knows their CEOs--a kind of relationship that, multiplied across the global web of supply chains, couldn't hurt when tensions rise between, say, China and the United States. Friedman argues plausibly that global capitalism dampened the India-Pakistan crisis of 2002, when a nuclear exchange was so thinkable that the United States urged Americans to leave India. Among the corporate feedback the Indian government got in midcrisis was a message from United Technologies saying that it had started looking for more stable countries in which to house mission-critical operations. The government toned down its rhetoric. Also plausibly, Friedman argues that Globalization 3.0 rewards inter-ethnic tolerance and punishes tribalism. "If you want to have a modern complex division of labor, you have to be able to put more trust in strangers." 
Certainly nations famous for fundamentalist intolerance--e.g., Saudi Arabia--tend not to be organically integrated into the global economy. Peace and universal brotherhood--it almost makes globalization sound like a leftist's dream come true. But enough embracing--it's time to extend! Time to use the logic of globalization to attack Bush's foreign policy. Like Friedman, I accept Bush's premise that spreading political freedom is both morally good and good for America's long-term national security. But is Bush's instinctive means to that end--invading countries that aren't yet free--really the best approach? Friedman's book fortified my belief that the answer is no. Friedman, unlike many liberals, has long appreciated that, more than ever, economic liberty encourages political liberty. As statist economies have liberalized, this linkage has worked faster in some cases (South Korea, Taiwan) than in others (China), but it works at some speed just about everywhere. And consider the counterexamples, the increasingly few nations that have escaped fine-grained penetration by market forces. They not only tend to be authoritarian; they often flout international norms, partly because their lack of economic engagement makes their relationship to the world relatively zero-sum, leaving them little incentive to play nicely. Friedman writes, "Since Iraq, Syria, south Lebanon, North Korea, Pakistan, Afghanistan, and Iran are not part of any major global supply chains, all of them remain hot spots that could explode at any time." That list includes the last country Bush invaded and the two countries atop his prospective invasions list. It makes you wonder: With all due respect for carnage, mightn't it be easier to draw these nations into the globalized world and let capitalism work its magic (while supplementing that magic by using nonmilitary policy levers to encourage democratic reform)? 
This is one paradox of "neoconservative" foreign policy: It lacks the conservative's faith in the politically redeeming power of markets. Indeed, Bush, far from trying to lure authoritarians into the insidiously antiauthoritarian logic of capitalism, has tried to exclude them from it. Economically, he's all stick and no carrot. (Of Iran he said, "We've sanctioned ourselves out of influence," oblivious to the fact that removing sanctions can be an incentive.) Of course, if you took this approach--used trade, aid, and other forms of what Joseph Nye calls "soft power" to globalize authoritarian nations and push them toward freedom--hyper-tyrannies like Saddam Hussein's Iraq would be the last dominoes to fall. More promising dominoes would include Egypt, even Saudi Arabia. But according to neocon reverse-domino theory, it only takes one domino. And it's true that in a "flattened" world, dominoes can fall fast once they get started. Internet and satellite TV let people anywhere see what people everywhere are doing without relying on their government's version of events. ("Peer-to-peer," you might call it.) Much of the inspiration for Lebanon's "cedar revolution" came from watching Georgia's Rose Revolution and then Ukraine's Orange Revolution (on Al Jazeera). And Palestinian aspirations to democracy were nourished by Israel's televised parliament--one reason the ground for democracy was fertile when Yasser Arafat died. So, was the Iraq invasion really an essential domino-feller, given the increasing contagion of liberty and the various nonmilitary levers with which we can encourage it? It would be one thing if Bush had tried those levers and failed--systematically deployed trade and aid and other tools against authoritarianism. But for him soft power was a convenient afterthought. 
He didn't renounce America's longstanding attraction to authoritarian stability and start nudging Egypt et al. toward democracy (as many liberals had long favored) until he needed a cosmic vision of global democracy to justify an unexpectedly messy war. Friedman, of course, supported the war. And that's one reason some leftists will resist using this book as food for thought. But he supported the war reluctantly, and he supported it for the best reason, the reason Bush settled on retrospectively after most of his other reasons had collapsed: to create a market democracy in the Arab world. Friedman has long seen, and highlights in this book, that the same microelectronic forces that empower Indian software writers and lubricate global supply chains also empower terrorists and strengthen their networks; and therefore that, 10 or 20 years down the road, we can't afford to have whole nations full of potential terrorists--young people with no legitimate outlet for their economic and political energies. Many liberals who opposed the Iraq war don't appreciate this fact. In the long run that's probably a deeper misjudgment than the one liberal Iraq hawks are accused of having made. (And I say that as one of their accusers.) Anyway, liberals who supported the Iraq war look less crazy today than they did three months ago. The key question now is which ones appreciate how technology is rendering such adventures less necessary (and more counterproductive--but don't get me started on that sermon). Friedman, during his recent Charlie Rose whistle-stop, noted the importance of Ukraine's example for Lebanon, a welcome corrective to the common Iraq-hawk line that good things in the Middle East flow exclusively from Iraq's elections. For this and other reasons I'm tentatively counting him in, hoping he'll sign onto this new source code: In a flat world, soft power is more powerful than ever. 
In any event, selling this lefty, peacenik message to Friedman isn't as improbable as selling it to some lefty peaceniks, because buying the message means coming fully to terms with globalization--not just granting its inevitability but appreciating its [28]potential. The Naderite left reviled The Lexus and the Olive Tree for what they took to be its Panglossian depiction of globalization as a force of nature. (In fact, the book spends lots of time on globalization's dark side, as does The World Is Flat.) But, seven years later, Friedman's early depiction of globalization's power--good and bad--looks prescient. And with this book he's shown how and why globalization has now shifted into warp drive. Meanwhile, the main achievement of Naderite nationalists has been to put George Bush in the White House. If forced to choose between the two--and, in a sense, liberals are--where would you look for inspiration? Related in Slate _________________________________________________________________ In 2002, David Plotz [29]assessed Friedman, the columnist and presumptive diplomat-by-newsprint. Jacob Weisberg [30]reviewed Friedman's The Lexus and the Olive Tree when it arrived in 1999. In 2001, Robert Wright gave Slate readers [31]dispatches from Davos. In January of this year, Samuel Loewenberg delivered [32]dispatches from "the Anti-Davos." Robert Wright, a visiting fellow at Princeton University's Center for Human Values and a senior fellow at the New America Foundation, runs the Web site [33]meaningoflife.tv and is the author of [34]The Moral Animal and [35]Nonzero: The Logic of Human Destiny.

References
23. http://slate.msn.com/id/2116914/
27. http://www.nonzero.org/index.htm
28. http://slate.msn.com/id/2116899/sidebar/2116900/
29. http://slate.msn.com/id/2062905/
30. http://slate.msn.com/id/25365/
31.
http://slate.msn.com/id/97787/entry/97788/
32. http://slate.msn.com/id/2112679/entry/2112681/
33. http://www.meaningoflife.tv/
34. http://bn.bfast.com/booklink/click?sourceid=412995&ISBN=0679763996
35. http://www.nonzero.org/

From checker at panix.com Thu May 5 16:27:24 2005 From: checker at panix.com (Premise Checker) Date: Thu, 5 May 2005 12:27:24 -0400 (EDT) Subject: [Paleopsych] Wired: (Flynn Effect): Dome Improvement Message-ID: Dome Improvement http://www.wired.com/wired/archive/13.05/flynn_pr.html First, some remarks from Hal Finney (Date: Tue, 3 May 2005 11:03:41 -0700 (PDT), To: extropy-chat at lists.extropy.org), except that the article is now available: Wired magazine's new issue has an article on the Flynn Effect, which we have discussed here occasionally. This is probably my favorite Effect, so completely extropian and contradictory to the conventional wisdom. Curmudgeons throughout the ages have complained about the decay of society and how the younger generation is inferior in morals and intelligence to their elders. Likewise modern communications technology is derided: TV is a vast wasteland, video games and movies promote sex and violence. Yet Flynn discovered the astonishing and still little-known fact that intelligence scores have steadily increased for at least the past 100 years. And it's a substantial gain; people who would have been considered geniuses 100 years ago would be merely average today. Perhaps even more surprisingly, the gains cannot be directly attributed to improved education, as the greatest improvements are found in the parts of the test that directly measure abstract reasoning via visual puzzles, not concrete knowledge based on language or mathematical skills. The Wired article (which should be online in a few days) does not have much that is new, but one fact which popped out is that the Effect has not only continued in the last couple of generations, but is increasing. 
Average IQ gains were 0.31 per year in the 1950s and 60s, but by the 1990s had grown to 0.36 per year. Explanations for the Effect seem to be as numerous as people who have studied it. Flynn himself does not seem to believe that it is real, in the sense that it actually points to increased intelligence. I was amused by economist David Friedman's suggestion that it is due to the increased use of Caesarian deliveries allowing for larger head sizes! The Wired article focuses on increased visual stimulation as the catalyst, which seems plausible as part of the story. The article then predicts that the next generation, exposed since babyhood to video games with demanding puzzle solving, mapping and coordination skills, will see an even greater improvement in IQ scores. Sometimes I wonder if the social changes we saw during the 20th century may have been caused or at least promoted by greater human intelligence. It's a difficult thesis to make because you first have to overcome the conventional wisdom that says that the 1900s were a century of human depravity and violence. But if you look deeper and recognize the tremendous growth of morality and ethical sensitivity in this period (which is what makes us judge ourselves so harshly), you have to ask, maybe it is because people woke up, began to think for themselves, and weren't willing to let themselves be manipulated and influenced as in the past? If so, then this bodes well for the future. --------------now the article: Pop quiz: Why are IQ test scores rising around the globe? (Hint: Stop reading the great authors and start playing Grand Theft Auto.) By Steven Johnson Twenty-three years ago, an American philosophy professor named James Flynn discovered a remarkable trend: Average IQ scores in every industrialized country on the planet had been increasing steadily for decades. 
Despite concerns about the dumbing-down of society - the failing schools, the garbage on TV, the decline of reading - the overall population was getting smarter. And the climb has continued, with more recent studies showing that the rate of IQ increase is accelerating. Next to global warming and Moore's law, the so-called Flynn effect may be the most revealing line on the increasingly crowded chart of modern life - and it's an especially hopeful one. We still have plenty of problems to solve, but at least there's one consolation: Our brains are getting better at problem-solving. Unless you happen to think the very notion of IQ is bunk. Anyone who has read Stephen Jay Gould's The Mismeasure of Man or Howard Gardner's work on multiple intelligences or any critique of The Bell Curve is liable to dismiss IQ as merely phrenology updated, a pseudoscience fronting for a host of racist and elitist ideologies that dare not speak their names. These critics attack IQ itself - or, more precisely, what intelligence scholar Arthur Jensen called g, a measure of underlying "general" intelligence. Psychometricians measure g by performing a factor analysis of multiple intelligence tests and extracting a pattern of correlation between the measurements. (IQ is just one yardstick.) Someone with greater general intelligence than average should perform better on a range of different tests. Unlike some skeptics, James Flynn didn't just dismiss g as statistical tap dancing. He accepted that something real was being measured, but he came to believe that it should be viewed along another axis: time. You can't just take a snapshot of g at one moment and make sense of it, Flynn says. You have to track its evolution. He did just that. Suddenly, g became much more than a measure of mental ability. It revealed the rising trend line in intelligence test scores. And that, in turn, suggested that something in the environment - some social or cultural force - was driving the trend. 
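The factor-analysis idea described above can be sketched in a few lines. This is a toy simulation, not any psychometrician's actual pipeline: the four test "loadings" are invented numbers, and the shared factor is recovered simply as the leading eigenvector of the correlation matrix.

```python
import numpy as np

# Simulate a latent "g" driving scores on four different tests, then
# recover the shared factor from the correlation between test scores.
rng = np.random.default_rng(0)
n = 5000
g = rng.normal(size=n)                      # latent general ability
loadings = np.array([0.8, 0.7, 0.6, 0.5])   # hypothetical test loadings
noise = rng.normal(size=(n, 4)) * np.sqrt(1 - loadings**2)
scores = g[:, None] * loadings + noise      # observed scores, unit variance

corr = np.corrcoef(scores, rowvar=False)    # pattern of correlation
eigvals, eigvecs = np.linalg.eigh(corr)     # eigenvalues in ascending order
first = eigvecs[:, -1]                      # eigenvector of largest eigenvalue
first *= np.sign(first.sum())               # fix the arbitrary sign
print(first)                                # all entries positive
```

Every entry of the leading eigenvector comes out positive -- the "positive manifold": someone with greater general intelligence than average performs better on the whole range of tests, which is exactly the correlation pattern the factor analysis compresses into g.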
Significant intellectual breakthroughs - to paraphrase the John Lennon song - are what happen when you're busy making other plans. So it was with Flynn and his effect. He left the US in the early 1960s to teach moral philosophy at the University of Otago in New Zealand. In the late '70s, he began exploring the intellectual underpinnings of racist ideologies. "And I thought: Oh, I can do a bit about the IQ controversies," he says. "And then I saw that Arthur Jensen, a scholar of high repute, actually thought that blacks on average were genetically inferior - which was quite a shock. I should say that Jensen was beyond reproach - he's certainly not a racist. And so I thought I'd better look into this." This inquiry led to a 1980 book, Race, IQ, and Jensen, that posited an environmental - not genetic - explanation for the black-white IQ gap. After finishing the book, Flynn decided that he would look for evidence that blacks were gaining on whites as their access to education increased, and so he began studying US military records, since every incoming member of the armed forces takes an IQ test. Sure enough, he found that blacks were making modest gains on whites in intelligence tests, confirming his environmental explanation. But something else in the data caught his eye. Every decade or so, the testing companies would generate new tests and re-normalize them so that the average score was 100. To make sure that the new exams were in sync with previous ones, they'd have a batch of students take both tests. They were simply trying to confirm that someone who tested above average on the new version would perform above average on the old, and in fact the results confirmed that correlation. But the data also brought to light another pattern, one that the testing companies ignored. "Every time kids took the new and the old tests, they did better on the old ones," Flynn says. "I thought: That's weird." 
The testing companies had published the comparative data almost as an afterthought. "It didn't seem to strike them as interesting that the kids were always doing better on the earlier test," he says. "But I was new to the area." He sent his data to the Harvard Educational Review, which dismissed the paper for its small sample size. And so Flynn dug up every study that had ever been done in the US where the same subjects took a new and an old version of an IQ test. "And lo and behold, when you examined that huge collection of data, it revealed a 14-point gain between 1932 and 1978." According to Flynn's numbers, if someone testing in the top 18 percent the year FDR was elected were to time-travel to the middle of the Carter administration, he would score at the 50th percentile. When Flynn finally published his work in 1984, Jensen objected that Flynn's numbers were drawing on tests that reflected educational background. He predicted that the Flynn effect would disappear if one were to look at tests - like the Raven Progressive Matrices - that give a closer approximation of g, by measuring abstract reasoning and pattern recognition and eliminating language altogether. And so Flynn dutifully collected IQ data from all over the world. All of it showed dramatic increases. "The biggest of all were on Ravens," Flynn reports with a hint of glee still in his voice. The trend Flynn discovered in the mid-'80s has been investigated extensively, and there's little doubt he's right. In fact, the Flynn effect is accelerating. US test takers gained 17 IQ points between 1947 and 2001. The annual gain from 1947 through 1972 was 0.31 IQ point, but by the '90s it had crept up to 0.36. Though the Flynn effect is now widely accepted, its existence has in turn raised new questions. The most fundamental: Why are measures of intelligence going up? The phenomenon would seem to make no sense in light of the evidence that g is largely an inherited trait. 
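The percentile arithmetic behind Flynn's time-travel claim checks out on a normal curve. A sketch, assuming only the conventional IQ scale (mean 100, standard deviation 15) and the 14-point gain quoted above:

```python
from statistics import NormalDist

iq = NormalDist(mu=100, sigma=15)    # conventional IQ scale

gain = 14                            # Flynn's 1932-1978 gain, in IQ points
cutoff_1932 = iq.inv_cdf(1 - 0.18)   # score at the top-18-percent line, 1932 norms
on_1978_norms = cutoff_1932 - gain   # the same raw performance, re-normed
percentile = iq.cdf(on_1978_norms)

print(round(cutoff_1932, 1))         # about 113.7
print(f"{percentile:.0%}")           # just under 50% -- the middle of the pack
```

A top-18-percent score in 1932 sits a bit under one standard deviation above the mean; subtract the 14-point population gain and the same performance lands almost exactly at the 50th percentile of the 1978 norms.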
We're certainly not evolving that quickly. The classic heritability research paradigm is the twin adoption study: Look at IQ scores for thousands of individuals with various forms of shared genes and environments, and hunt for correlations. This is the sort of chart you get, with 100 being a perfect match and 0 pure randomness:

The same person tested twice: 87
Identical twins raised together: 86
Identical twins raised apart: 76
Fraternal twins raised together: 55
Biological siblings: 47
Parents and children living together: 40
Parents and children living apart: 31
Adopted children living together: 0
Unrelated people living apart: 0

After analyzing these shifting ratios of shared genes and the environment for several decades, the consensus grew, in the '90s, that heritability for IQ was around 0.6 - or about 60 percent. The two most powerful indications of this are at the top and bottom of the chart: Identical twins raised in different environments have IQs almost as similar to each other as the same person tested twice, while adopted children living together - shared environment, but no shared genes - show no correlation. When you look at a chart like that, the evidence for significant heritability looks undeniable. Four years ago, Flynn and William Dickens, a Brookings Institution economist, proposed another explanation, one made apparent to them by the Flynn effect. Imagine "somebody who starts out with a tiny little physiological advantage: He's just a bit taller than his friends," Dickens says. "That person is going to be just a bit better at basketball." Thanks to this minor height advantage, he tends to enjoy pickup basketball games. He goes on to play in high school, where he gets excellent coaching and accumulates more experience and skill. "And that sets up a cycle that could, say, take him all the way to the NBA," Dickens says. Now imagine this person has an identical twin raised separately. 
He, too, will share the height advantage, and so be more likely to find his way into the same cycle. And when some imagined basketball geneticist surveys the data at the end of that cycle, he'll report that two identical twins raised apart share an off-the-charts ability at basketball. "If you did a genetic analysis, you'd say: Well, this guy had a gene that made him a better basketball player," Dickens says. "But the fact is, that gene is making him 1 percent better, and the other 99 percent is that because he's slightly taller, he got all this environmental support." And what goes for basketball goes for intelligence: Small genetic differences get picked up and magnified in the environment, resulting in dramatically enhanced skills. "The heritability studies weren't wrong," Flynn says. "We just misinterpreted them." Dickens and Flynn showed that the environment could affect heritable traits like IQ, but one mystery remained: What part of our allegedly dumbed-down environment is making us smarter? It's not schools, since the tests that measure education-driven skills haven't shown the same steady gains. It's not nutrition - general improvement in diet leveled off in most industrialized countries shortly after World War II, just as the Flynn effect was accelerating. Most cognitive scholars remain genuinely perplexed. "I find it a puzzle and don't have a compelling explanation," wrote Harvard's Steven Pinker in an email exchange. "I suspect that it's either practice at taking tests or perhaps a large number of disparate factors that add up to the linear trend." Flynn has his theories, though they're still speculative. "For a long time it bothered me that g was going up without an across-the-board increase in other tests," he says. If g measured general intelligence, then a long-term increase should trickle over into other subtests. "And then I realized that society has priorities. Let's say we're too cheap to hire good high school math teachers. 
So while we may want to improve arithmetical reasoning skills, we just don't. On the other hand, with smaller families, more leisure, and more energy to use leisure for cognitively demanding pursuits, we may improve - without realizing it - on-the-spot problem-solving, like you see with Ravens." When you take the Ravens test, you're confronted with a series of visual grids, each containing a mix of shapes that seem vaguely related to one another. Each grid contains a missing shape; to answer the implicit question posed by the test, you need to pick the correct missing shape from a selection of eight possibilities. To "solve" these puzzles, in other words, you have to scrutinize a changing set of icons, looking for unusual patterns and correlations among them. This is not the kind of thinking that happens when you read a book or have a conversation with someone or take a history exam. But it is precisely the kind of mental work you do when you, say, struggle to program a VCR or master the interface on your new cell phone. Over the last 50 years, we've had to cope with an explosion of media, technologies, and interfaces, from the TV clicker to the World Wide Web. And every new form of visual media - interactive visual media in particular - poses an implicit challenge to our brains: We have to work through the logic of the new interface, follow clues, sense relationships. Perhaps unsurprisingly, these are the very skills that the Ravens tests measure - you survey a field of visual icons and look for unusual patterns. The best example of brain-boosting media may be videogames. Mastering visual puzzles is the whole point of the exercise - whether it's the spatial geometry of Tetris, the engineering riddles of Myst, or the urban mapping of Grand Theft Auto. The ultimate test of the "cognitively demanding leisure" hypothesis may come in the next few years, as the generation raised on hypertext and massively complex game worlds starts taking adult IQ tests. 
This is a generation of kids who, in many cases, learned to puzzle through the visual patterns of graphic interfaces before they learned to read. Their fundamental intellectual powers weren't shaped only by coping with words on a page. They acquired an intuitive understanding of shapes and environments, all of them laced with patterns that can be detected if you think hard enough. Their parents may have enhanced their fluid intelligence by playing Tetris or learning the visual grammar of TV advertising. But that's child's play compared with Pokémon.

Contributing editor Steven Johnson (stevenberlinjohnson at earthlink.net) is the author of Everything Bad Is Good for You: How Today's Popular Culture Is Actually Making Us Smarter.

From checker at panix.com Thu May 5 16:27:39 2005
From: checker at panix.com (Premise Checker)
Date: Thu, 5 May 2005 12:27:39 -0400 (EDT)
Subject: [Paleopsych] NYT: Programs That Start When XP Does
Message-ID: 

Programs That Start When XP Does
New York Times, 5.5.5
http://www.nytimes.com/2005/05/05/technology/circuits/05askk.html

[This is very valuable to anyone who has a computer that has slowed to a crawl. I'll be working on eliminating programs that hog RAM and do nothing for me.]

By J.D. BIERSDORFER

Q. My Windows XP machine takes forever to start up. How can I tell what programs are loading during the start-up process?

A. One way to sneak a peek at what programs start up when you turn on your computer is to use the System Configuration Utility that comes with Windows XP. To get to it, go to the Start menu and select Run. In the Run box, type "msconfig" (without the quotation marks) to start the utility. Click the Startup tab in the System Configuration box to see the list of programs that open when Windows starts up, along with check boxes to turn each program off or on during the next start-up.
If you aren't sure what some of the listed programs actually do for your PC, you can look them up at the Windows Startup Online Search page ([1]www.windowsstartup.com/wso/search.php) before deciding if you want a program to start up automatically. [2]Microsoft has a page of information about using the System Configuration Utility to troubleshoot start-up problems with your PC at [3]support.microsoft.com/kb/310560/EN-US. Other Web pages offering help for using the utility to sort out your start-up woes include [4]www.netsquirrel.com/msconfig and [5]vlaurie.com/computers2/Articles/startup.htm. A File by Any Name May Not Copy Q. My wife gave me a small U.S.B. flash drive a while back to transport files between my Mac at home and my office computer. The U.S.B. drive seems to be pretty finicky, though, and won't let me use file names with certain characters like slashes. Why is this? A. To make them compatible with both Windows computers and Macintosh systems, just about every U.S.B. flash drive sold these days is formatted with the FAT32 file system. FAT32 is one of the file systems Windows uses to keep track of data stored on a drive, but Macs can also understand it and will display the stored folders and documents when you plug in the drive. [6]Apple's [7]iPod Shuffle music player, which can also function as a U.S.B. drive for toting large files, also comes formatted right out of the box in the FAT32 system. Using certain typographical characters in file names, however, is one thing you can do on a Mac but not on Windows. You can't name files with slashes, brackets, colons, semicolons, asterisks, periods, commas and a few other characters on the FAT32 system. You may get error messages if you try to copy files with such characters in the names from your Mac to the U.S.B. drive - or if you do successfully copy them, the file names may be changed on the U.S.B. drive. If you're using the U.S.B. 
drive to transfer files between Macs, you can get around finicky FAT32 by rounding up the files you want to transfer from the Mac and creating an archive file with them. In Mac OS X 10.3, click on each file to select it, then go to the File menu and select Create Archive. Give the archived file a simple name like Files and let the Mac create it. Then copy the Files.zip archive to the U.S.B. drive. You can also use utility programs like Stuffit to create archive files on older Mac systems. Useful Add-Ons For Firefox Q. Where can I find extension programs for the Firefox Web browser that do things like display the current weather in the browser window? A. Firefox, a free Web browser that comes in versions for Windows, Macintosh and Linux systems, can be easily customized with small extension programs that do things like add dictionary search tools to the browser window or provide controls for your computer's digital-audio software so you can control your music while you surf. The Forecastfox extension, which displays the current weather in the corner of the Firefox window, can be found at [8]forecastfox.mozdev.org. The latest version of Firefox itself can be downloaded at [9]www.mozilla.org/products/firefox, where there's also a link to more Firefox extensions. Circuits invites questions about computer-based technology, by e-mail to QandA at nytimes.com. This column will answer questions of general interest, but letters cannot be answered individually. References 1. http://www.windowsstartup.com/wso/search.php 2. http://www.nytimes.com/redirect/marketwatch/redirect.ctx?MW=http://custom.marketwatch.com/custom/nyt-com/html-companyprofile.asp&symb=MSFT 3. http://support.microsoft.com/kb/310560/EN-US 4. http://www.netsquirrel.com/msconfig 5. http://vlaurie.com/computers2/Articles/startup.htm 6. http://www.nytimes.com/redirect/marketwatch/redirect.ctx?MW=http://custom.marketwatch.com/custom/nyt-com/html-companyprofile.asp&symb=AAPL 7. 
http://tech2.nytimes.com/gst/technology/techsearch.html?st=p&cat=&query=ipod&inline=nyt-classifier
8. http://forecastfox.mozdev.org/
9. http://www.mozilla.org/products/firefox

From checker at panix.com Thu May 5 16:28:00 2005
From: checker at panix.com (Premise Checker)
Date: Thu, 5 May 2005 12:28:00 -0400 (EDT)
Subject: [Paleopsych] Global Die-Off: The profound misanthropy of the deep ecologists
Message-ID: 

---------- Forwarded message ----------
Date: Thu, 5 May 2005 08:43:02 -0400
From: "Hughes, James J."
Reply-To: World Transhumanist Association Discussion List

This piece makes very clear the horrific worldview of the deep ecologists, which accepts the inevitability and desirability of a "global die-off". This is the direct parallel to the Christian Right's view that accepts that everything has to go to shite and then be destroyed in a fiery conflagration before we get back to the Kingdom. Even if "global die-off" were a serious risk, these folks aren't really interested in mobilizing prophylactic answers, because that would just be putting off our inescapable punishment for hubris/sin. - J.

-------------------------------------

>From - http://www.dissidentvoice.org/Apr05/Bageant0429.htm
Background reading - http://dieoff.org/page125.htm

Back to the Ancient Future: Chewing Raw Grubs with the "Nutcracker Man"
by Joe Bageant
www.dissidentvoice.org
April 29, 2005

I spent the middle weekend in April with a group of artists and thinkers called the April Fools Group. Put together by Brad Blanton, psychotherapist and creator of "radical honesty" politics and therapy, the three-day meeting was set on a farm down the Shenandoah Valley amid the battlefields and rolling countryside of New Market, Virginia. Brad, a world-famous redneck headshrinker, had put together old hippies, theoreticians, musicians, young anarchists, beautiful brilliant women and aging writers to yap, drink and plot against the Bush administration.
So when I pulled into Brad's driveway to find him and a fellow named Hank parked in lawn chairs up on the roof with a bottle of bourbon I knew this thing was off to a good start. The gathering was an organizational meeting for Brad Blanton's independent run for the Virginia Seventh District U.S. House of Representatives. Blanton's working slogan is "America needs a good psychiatrist." And we got a lot accomplished in that direction, despite my intellectual flatulence and Brad's orneriness. Any psychotherapist who actually gets people to pay for advice such as "Fuck'em if they can't take a joke" must be called ornery at the very least. And any politician who thinks he can get elected on the basis of extreme honesty, well... Anyway, I came away from the meeting deeply struck by one thing. Every person there seemed to understand and acknowledge the coming global human "die-off." The one that has already begun in places like Africa and will grow into a global event sometime within our lifetimes and/or those of our children. The one that will kill millions of white people. That's right, clean pink little Western World white people like you and me. Nobody in the U.S. seems to be able to deal with or even think about this near certainty, and the few who do are written off as nutcases by the media and the public. Mostly though, it goes unacknowledged. All of which drives me nuts because the now nearly visible end of civilization strikes me as worthy of at least modest discussion. You'd think so. But the mention of it causes my wife to go into, "Oh Joe, can't we talk about something more pleasant?" And talk about causing weird stares and dropped jaws at the office water cooler. Here's the short course: Global die-off of mankind will occur when we run out of energy to support the complex technological grid sustaining modern industrial human civilization. 
In other words, when the electricity goes out, we are back in the Dark Age, with the Stone Age grunting at us from just around the corner. This will likely happen in 100 years or less, assuming the ecosystem does not collapse first. And you are thinking, "Well ho ho ho! Any other good news Bageant? And how the fock do you know this anyway?"

For those willing to contemplate the subject, there is a scientifically supported model of the timeline of our return to Stone Age tribal units. A roadmap to the day when we will be cutting up dog meat with a sharpened CD-ROM in some toxic future canyon. It is called the Olduvai Theory.

The Olduvai theory* was first introduced in a scientific paper by petroleum geologist/engineer/anthropologist Richard C. Duncan titled The Peak Of World Oil Production And The Road To The Olduvai Gorge. Duncan ("Dunk") chose the name Olduvai because, among other reasons, "...it is a good metaphor for the Stone Age way of life." It also sounded cool, he confesses. The Olduvai Gorge is a deep cleft in the Serengeti steppe of Tanzania, where Louis and Mary Leakey found the remains of prehistoric hominids, some up to two million years old, along with the first stone tools and other things such as the skulls of sheep big as draft horses and pigs the size of hippos. Also the skull of "Nutcracker Man" (Australopithecus boisei), so named because of a set of powerful choppers, teeth so strong they could bust the lug nuts off a truck tire, were he around today to work at the Goodyear Tire Center.

As to Nutcracker's "lifestyle" (and we are using the term most generously for a style that had more than adequate pork resources but had not developed a decent pinot grigio to serve with it, or even barbecue sauce for that matter) Dunk says "the Olduvai way of life was and still is a sustainable one -- local, tribal, and solar -- and, for better or worse, our ancestors practiced it for millions of years."
Dunk's Olduvai theory provides a modern database support structure for the Malthusian argument. The Olduvai theory uses only a single metric, as defined by "White's Law," and deals with electricity as the most vital expression of other forms of energy such as crude oil or coal. The theory is an inductive one based on world energy and population data, so elegantly simple that any 12th grader can do it, assuming he or she can do multiplication (a risky assumption now that no child has been left behind by our great ownership society). In the Olduvai schema permanent blackouts will occur worldwide around 2030. Industrial Civilization ends when energy availability falls to its 1930 level. Measured as energy use -- energy expended or consumed -- our industrial civilization can be described as a one-time phenomenon, a single pulse waveform of limited duration which flashed out from the caves to outer space, then back to the caves over approximately 100 years. So when was the highpoint of the flash? On the average, world per capita energy use crested around 1977. That was the same year John Travolta made "Saturday Night Fever," which few of us consider much of a highpoint. To make a long story short, there are three intervals of decline in the Olduvai schema: slope, slide and cliff -- each steeper than the previous. Right now we are in the slide headed for the cliff (see http://dieoff.org/page125.htm). After more than a decade no scientist has been able to refute it and even given the flexibility and bias inherent in what passes for common sense in this country, it's still pretty damned hard to argue with. When we do go off the cliff, the Big Die-off will play no favorites, and will happen everywhere more or less simultaneously. But there are some particularly lousy places to be when permanent worldwide electrical blackouts happen. In or near a big city is the worst. 
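The slope, slide and cliff really are 12th-grader arithmetic. A minimal sketch, using the decline rates Duncan gives in the footnote at the end of this piece (0.33, 0.70 and 5.44 percent per year); indexing per-capita energy to 100 at the 1979 peak and treating each rate as a constant compounding decline are my own simplifications:

```python
# Back-of-the-envelope check of Duncan's "slope, slide, cliff" schedule,
# using the per-year decline rates from his footnote. World per-capita
# energy is indexed to 100.0 at the 1979 peak; constant compounding
# within each phase is an assumption made here for illustration.

PHASES = [
    # (name, first year, last year, fractional decline per year)
    ("slope", 1979, 1999, 0.0033),
    ("slide", 2000, 2011, 0.0070),
    ("cliff", 2012, 2030, 0.0544),
]

def energy_index(year, peak=100.0):
    """Indexed world per-capita energy for any year from 1979 to 2030."""
    e = peak
    for _name, start, end, rate in PHASES:
        if year < start:
            break
        e *= (1.0 - rate) ** (min(year, end) - start)
    return e

for y in (1999, 2011, 2030):
    print(y, round(energy_index(y), 1))
```

Under these assumptions the index lands at roughly a third of the 1979 peak by 2030, which is at least the right ballpark for Duncan's claimed fall back to the 1930 level.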
You can imagine the, uh, "discomfort" of billions when the electrical grids die and power goes out across the densely packed high-rise buildings surrounded by a million acres of asphalt. People with no work, no heat, no air conditioning, no food, no water. Put on yer Adidas, it is migration time. Wherein bankers, skinheads, little old ladies and taxi drivers swarm like insects toward whatever passes for the countryside by then. Looks like all those survivalists up in North Idaho and Oregon may be right. Personally, I wouldn't want to be in New York or Bombay, or even Toledo when the deal goes down, and in fact want to be as distant from a city as one can get without having to be too far into the woods (of which there will be damned few) to eat my daily requirement of tree bark.

Americans busily expanding their lard content to fit the contours of their air-conditioned SUVs are among the chief accelerators of the Big Die-off. However, people worldwide assume that the average American is a blind dickhead who wouldn't acknowledge the ecological price of his/her lifestyle if it were branded on their forehead. That assumption is correct. Americans for the most part don't give a twit what kind of world their own children inherit, much less about dolphins, Hottentots, Frenchmen and the approaching desertification of distant places like Kansas.

Still, it is reasonable to believe that many powerful people and organizations with all the research capability in the world at their fingertips must understand the future before us. In fact, I am sure some in industry do because even 10 years ago when I used to deal with chemical executives at Monsanto, Zeneca, Dow and other corporations, it was discussed and acknowledged a couple of times over cocktails, and even discussed how to profit from it through genetically engineered non-reproducing seeds that eliminate all the native crops around them. One might also guess that the U.S.
president and his cabinet know, and that their solution is to fight for more oil and higher profits, given its increasing scarcity. Even the superficial whoring media is broaching the topic of "peak oil," though mostly for shock entertainment value. I heard an "expert" say the other day that science will solve the peak oil problem, probably through nuclear energy, as if that did not have its own awful implications. Sure buddy. Just like the "Green Revolution" solved the world's agricultural foods problem by poisoning the earth with pesticides and burning two gallons of oil to produce a pint of milk.

There is the myth left hanging out there from the old scientific paradigm that science and technology are somehow going to snatch us from the edge of species die-off just in time. Yes, we will be saved by the very science and technology that evolved from, and is completely dependent upon, an energy source that will no longer exist. I think the pundit probably understands that, but like all media and political people, safely assumes the public has the critical thinking capability of a jar of fruit flies. Shut up and watch Survivor!

Well hell then. What does keep the American people from looking around them and seeing the obvious? That the earth is a finite thing being used up at exponential rates? Answer: The Spectacle. American capitalism's "media hologram." We no longer have a country, but the artificial spectacle of one. We have a global corporation masquerading electronically, digitally, financially, and legally and every other way as a nation called the "United States of America." The corporation now animates most of us from within through management of the need hierarchy of goods and information. We no longer have citizens. We have consumers, "purchase decision makers" whose most influential act in life consists of choosing a mortgage banker and an NFL team. And a car.
The majority of modernized technical humans, Digitus Cathodus Americanus, cannot perceive the hologram because their self-identities were generated by it. It's "reality" to them -- the only one they will know until the hologram collapses with their electrical industrial civilization. By design or not, the hologram's primary effect has been to induce the illusion of a national "value system" through hypnotic repetition of images. Thus profit-seeking enterprises are legitimized as the animating spirit of our identities as individuals and as a nation. The end result of course is the mass replication of millions of uniform "market segmented consumer identities." Individuality is circumscribed by brand identification. The overall aggregate of brand identification groups is interpreted to be an inherently superior race or nation (worth fighting for to expand the resource base and markets.) We no longer have lives, just lifestyles that are defined and expressed through ever expanding (and more profitable) consumption. Net result: The legions of humanity toil to generate the trucks and tofu, munitions, missiles, newspapers, petrochemicals and pizza and millions of tons of ground up cattle sold to fire the furnace of an economic engine that has taken on a life of its own. One that must grow exponentially, devouring everything just to survive. Just to keep from collapsing. And people are taught that it is called "human progress." This mass hallucination generated by this totalist capitalist system, the state as engine of profit, is one thing. Life on a real planet made of dirt and water and flesh both warm and cold-blooded is quite another. Viewed from outside the web of Western illusions, say, an Iraqi citizen or a Filipino Moro, one finds the economic engine to be driven by unseen death and war and the pillaging of the weak by the powerful. 
All this is set against the backdrop of explosive human disease, growing starvation, the impending failure of the environment and petroleum-based civilization, resulting in the greatest mass extinction event in the history of this planet. The Big Die-off. And in your very lifetime too. Admission is not only free, it is compulsory.

One of the hologram's great illusions is that Industrial Civilization is evolutionary -- that it advances forever. Industrial civilization does not evolve. In the overall history of man it is extremely short and completely unsustainable. It is a one-time biological drama that rapidly consumes the necessary physical prerequisites for its own existence, the ecology and resources of the planetary gravity well in which it is trapped.

Any good news, for Chrissake? Sort of. We may not become completely extinct. It looks like the earth's immune system is beginning to shake off its infection by the human virus through what appears -- to us viruses at least -- as environmental collapse. But for the sake of discussion, let's assume that extinction through nuclear war and ecological collapse is somehow avoided (nowadays, we're allowed to assume anything we want, regardless of the evidence around us. Just ask any U.S. capitalist free market economist). If what is left after the Big Die-off can still be called a human society, it will be bottomed out at the subsistence level of energy-use.

Now that is one ugly booger of a notion to contemplate. What is subsistence level energy-use? In all likelihood it has to do with shitting in the winter darkness at a sustainable 45 indoor degrees. Meanwhile, a cockroach watches, thinking to himself, "What a shame, because at the height of their culture these guys made a damned good peanut butter sandwich."

Your attention please. This is your pilot. We have crested in our evolutionary journey and are beginning our descent. Please lock your folding trays, put your seats in an upright position and enjoy the landing.
(Captain! Why are there no lights down there at the airport?) It was a helluva crest, that spurt of technological jism by the industrial state toward outer space and impregnation of the moon. What with Neil Armstrong bouncing around in the lunar dust in his high tech Pillsbury doughboy outfit and all, it added up to about one week of attention by the masses and a lot of dough for government contractors. But as one who within my lifetime witnessed the entire evolution of the space program, and its accompanying nationalistic hoopla about beating the Russians at being the first to fart in the vacuum of space, I am somehow unconvinced it was worth it. I dunno. Maybe my wife is right. Maybe I'm just a goddam crab. Maybe I'm a little resentful because, thanks to the big American suckdown of the planet, I will never have grandchildren. My kids are among that portion of their generation who understand what their lifetimes hold and are not remotely interested in adding to the problem. We weren't always like that. Right after World War II and the advent of the atomic bomb a majority of Americans (67% of those surveyed by Gallup) wanted a cooperative one-world government with all nuclear weapons put under the control of the United Nations. Now you cannot get an American to turn off a light switch to save human civilization. As a friend from Cape Verde once remarked, "Just watching Americans consume things gives me a headache." As for the weak ploy I used to slip into this screed -- Brad Blanton's April Fools Group meeting -- that too left me with a headache. After nearly a fifth of Maker's Mark bourbon on the third night I was hugging everybody in sight and had entered into an agreement with a jazz piano player and an inventor from Wisconsin to start a free love commune together right there in that beautiful valley. (Honest to god, I am not joking. I wish I were.) When I woke up next morning and looked into the mirror at eyes like two bloody pissholes in a snowbank ... 
and wondering who let that dog crap in my mouth ... well ... let's just say I wasn't experiencing the same sense of brotherly love as the night before. Rather than go into the wretchedness of the next day's grisly recovery, or contemplate what we might possibly find to drink while living in shipping containers during the next Olduvai period, let me share my favorite hangover remedy as a way out of this little box I've written myself into. OK? Bye!

Uncle Joe's Big Die-off Hangover Cure:

* Empty two cans of sardines (skinless, packed in water) into a bowl.
* Add two medium-size habanero peppers.
* One squirt of mustard.
* One dash of Tabasco.
* Blend coarsely in a blender. (Cover the blender with six bath towels to keep the noise from cracking your brain and teeth)
* Spread on toast or crackers and eat.

Your lungs may or may not collapse briefly and there may be temporary blindness. Not to worry. After your eyes quit watering enough to see, either endorphins associated with hot peppers will kick in, or subsequent fiery bites of the cure will be enough to distract you from the headache until they do.

Joe Bageant is a writer and magazine editor living in Winchester, Virginia. He may be contacted at bageantjb at netscape.net. Free downloadable pdf files of his works are archived at www.coldtype.net.

FOOTNOTE

* The Olduvai theory postulates that electricity is the essence of Industrial Civilization. World energy production per capita increased strongly from 1945 to its all-time peak in 1979. Then from 1979 to 1999 -- for the first time in history -- it decreased, at a rate of 0.33 %/year (called the Olduvai "slope"). Next from 2000 to 2011, according to the Olduvai schema, world energy production per capita will decrease by about 0.70 %/year (the Olduvai "slide"). Then around year 2012 there will be a rash of permanent electrical blackouts -- worldwide.
These blackouts, along with other factors, will cause energy production per capita by 2030 to fall to 3.32 b/year, the same value it had in 1930. The rate of decline from 2012 to 2030 is 5.44 %/year (the Olduvai "cliff"). Thus, by definition, the duration of Industrial Civilization is less than or equal to 100 years. (Richard Duncan at: http://dieoff.org/page125.htm)

From checker at panix.com Thu May 5 16:28:14 2005
From: checker at panix.com (Premise Checker)
Date: Thu, 5 May 2005 12:28:14 -0400 (EDT)
Subject: [Paleopsych] LRC: George Crispin: Are We Running Out of Oil?
Message-ID: 

George Crispin: Are We Running Out of Oil?
http://www.lewrockwell.com/crispin/crispin12.html
5.5.3

It is a complicated subject, but now I am familiar with the two theories of the origin of petroleum: the conventional one, which assumes that oil is biogenic, originating as plant and animal matter, and the other, that it is abiogenic, it or its raw material having been formed with the earth when it was formed 4.5 billion years ago.

My learning curve began several years ago with a short article that described the oil field beneath the Eugene 330 oil platform in the Gulf of Mexico. I lost the clipping but did not forget the story of a dried-up oil well that was refilling itself. This spring when I found a new discussion of Eugene 330, essentially describing the same conditions, my original conclusions of a mantle filled with or manufacturing petroleum were reinforced, leading to the conclusion that the world's supply of oil was essentially limitless.

But by now the Peak Oil people were out in force and desperate to prove that we are due to run out of oil soon, and must prepare ourselves for war and/or starvation. This led me to:

* C. Maurice and C. Smithson, Doomsday Mythology: "Every ten or fifteen years since the late 1800s (when we began using petroleum) experts have predicted that oil reserves would last only ten more years.
These experts have predicted nine of the last zero oil-reserve exhaustions."

* Sheik Yamani, one-time oil minister of Saudi Arabia, who stated in a speech to Europeans, "The stone age ended, but not because of any lack of stones. Undoubtedly the oil age will end the same way."

* Jean Whelan, a geochemist and senior researcher with the Woods Hole Oceanographic Institute assigned to study the Eugene field. Becoming familiar with the phenomenon, she said "... I believe there is a huge system of oil just migrating deep underground."

* Dr. Thomas Gold's book [9]The Deep Hot Biosphere, in which he theorizes that our oil, or the methane from which it could evolve, was formed 4.5 billion years ago when the earth began, and that it is not a fossil fuel but picks up traces of fossils as it works its way upwards. This theory leaves the earth with a huge supply of oil, unlike the fossil theory, which assumes oil to be the result of a one-time dying-off of animals and plants.

* The Russian-Ukrainian Deep Abiotic Theory (except for Dr. Gold, virtually unknown in the West) has long gone beyond theory, the Russians having brought in several fields producing abiotic oil using super-deep drilling technology. By 1946 their production had dropped off. Now they, along with Saudi Arabia and ourselves, are one of the three largest producers in the world.

This, plus our oil shale, and Canada's tar sands, plus as-yet-unimagined technologies, plus the fact that oil is only a part of the total energy picture, make it seem highly unlikely the world will ever run out of oil. One clincher in the debate is that Peak Oil writings are terribly muddled. The other is the suspicion that its adherents seem to fall into Professor Kuhn's description of people who cannot accept anything outside their conventional world, people who cannot be scientifically critical, whose every belief must fit their current paradigm.
George Crispin [[10]send him mail] is a retired businessman who heads a Catholic homeschooling cooperative in Auburn, Alabama. [11]George Crispin Archives

References
9. http://www.amazon.com/exec/obidos/ASIN/0387985468/lewrockwell/
10. mailto:crispin73 at charter.net
11. http://www.lewrockwell.com/crispin/crispin-arch.html

From ross.buck at uconn.edu Thu May 5 16:59:48 2005
From: ross.buck at uconn.edu (Buck, Ross)
Date: Thu, 5 May 2005 12:59:48 -0400
Subject: [Paleopsych] Wired: (Flynn Effect): Dome Improvement
Message-ID: 

Again the notion of heritability is being presented as a meaningful measure of genetic-versus-environmental influence. Most monozygotic twins are monochorionic, sharing the same chorion and therefore the same blood supply in the womb. A minority are dichorionic, with identical genes but a different intrauterine blood supply. Davis, Phelps and Bracha (Schizophrenia Bulletin, 1995, 21, 357-366) investigated concordance of schizophrenia in monochorionic and dichorionic monozygotic twins, and found that while the concordance rate for MC MZ twins was 60% (i.e., if one twin is schizophrenic there is a 60% chance the other will be as well), the concordance rate of the DC MZ twins (with identical genes) was 10.7%. Environmental influences are overwhelming, and they begin at conception: the genes do nothing without environmental influences turning them on and off. The Flynn effect suggests that the vast media wasteland may actually function as a vast brain playground.

Cheers,
Ross

Ross Buck, Ph.D.
Professor of Communication Sciences and Psychology
Communication Sciences U-1085
University of Connecticut
Storrs, CT 06269-1085
860-486-4494 fax 860-486-5422
Ross.buck at uconn.edu
http://www.coms.uconn.edu/docs/people/faculty/rbuck/index.htm

-----Original Message-----
From: paleopsych-bounces at paleopsych.org [mailto:paleopsych-bounces at paleopsych.org] On Behalf Of Premise Checker
Sent: Thursday, May 05, 2005 12:27 PM
To: paleopsych at paleopsych.org
Subject: [Paleopsych] Wired: (Flynn Effect): Dome Improvement

Dome Improvement
http://www.wired.com/wired/archive/13.05/flynn_pr.html

First some remarks from Hal Finney (except that the article is now available):

From: Hal Finney
Date: Tue, 3 May 2005 11:03:41 -0700 (PDT)
To: extropy-chat at lists.extropy.org

Wired magazine's new issue has an article on the Flynn Effect, which we have discussed here occasionally. This is probably my favorite Effect, so completely extropian and contradictory to the conventional wisdom. Curmudgeons throughout the ages have complained about the decay of society and how the younger generation is inferior in morals and intelligence to their elders. Likewise modern communications technology is derided: TV is a vast wasteland, video games and movies promote sex and violence. Yet Flynn discovered the astonishing and still little-known fact that intelligence scores have steadily increased for at least the past 100 years. And it's a substantial gain; people who would have been considered geniuses 100 years ago would be merely average today. Perhaps even more surprisingly, the gains cannot be directly attributed to improved education, as the greatest improvements are found in the parts of the test that directly measure abstract reasoning via visual puzzles, not concrete knowledge based on language or mathematical skills.
The Wired article (which should be online in a few days) does not have much that is new, but one fact which popped out is that the Effect has not only continued in the last couple of generations, but is increasing. Average IQ gains were 0.31 per year in the 1950s and 60s, but by the 1990s had grown to 0.36 per year. Explanations for the Effect seem to be as numerous as people who have studied it. Flynn himself does not seem to believe that it is real, in the sense that it actually points to increased intelligence. I was amused by economist David Friedman's suggestion that it is due to the increased use of Caesarian deliveries allowing for larger head sizes! The Wired article focuses on increased visual stimulation as the catalyst, which seems plausible as part of the story. The article then predicts that the next generation, exposed since babyhood to video games with demanding puzzle solving, mapping and coordination skills, will see an even greater improvement in IQ scores. Sometimes I wonder if the social changes we saw during the 20th century may have been caused or at least promoted by greater human intelligence. It's a difficult thesis to make because you first have to overcome the conventional wisdom that says that the 1900s were a century of human depravity and violence. But if you look deeper and recognize the tremendous growth of morality and ethical sensitivity in this period (which is what makes us judge ourselves so harshly), you have to ask, maybe it is because people woke up, began to think for themselves, and weren't willing to let themselves be manipulated and influenced as in the past? If so, then this bodes well for the future. --------------now the article: Pop quiz: Why are IQ test scores rising around the globe? (Hint: Stop reading the great authors and start playing Grand Theft Auto.) 
By Steven Johnson Twenty-three years ago, an American philosophy professor named James Flynn discovered a remarkable trend: Average IQ scores in every industrialized country on the planet had been increasing steadily for decades. Despite concerns about the dumbing-down of society - the failing schools, the garbage on TV, the decline of reading - the overall population was getting smarter. And the climb has continued, with more recent studies showing that the rate of IQ increase is accelerating. Next to global warming and Moore's law, the so-called Flynn effect may be the most revealing line on the increasingly crowded chart of modern life - and it's an especially hopeful one. We still have plenty of problems to solve, but at least there's one consolation: Our brains are getting better at problem-solving. Unless you happen to think the very notion of IQ is bunk. Anyone who has read Stephen Jay Gould's The Mismeasure of Man or Howard Gardner's work on multiple intelligences or any critique of The Bell Curve is liable to dismiss IQ as merely phrenology updated, a pseudoscience fronting for a host of racist and elitist ideologies that dare not speak their names. These critics attack IQ itself - or, more precisely, what intelligence scholar Arthur Jensen called g, a measure of underlying "general" intelligence. Psychometricians measure g by performing a factor analysis of multiple intelligence tests and extracting a pattern of correlation between the measurements. (IQ is just one yardstick.) Someone with greater general intelligence than average should perform better on a range of different tests. Unlike some skeptics, James Flynn didn't just dismiss g as statistical tap dancing. He accepted that something real was being measured, but he came to believe that it should be viewed along another axis: time. You can't just take a snapshot of g at one moment and make sense of it, Flynn says. You have to track its evolution. He did just that. 
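The factor-analysis step described above can be illustrated with a toy simulation: generate subtest scores that all draw on a single latent ability, then recover that common factor from the pattern of inter-test correlations. The loadings, sample size, and first-principal-component shortcut here are all invented for illustration; real psychometric work uses proper factor-extraction methods.

```python
import numpy as np

rng = np.random.default_rng(0)

# Latent general ability for 1,000 simulated test takers.
g = rng.normal(0.0, 1.0, size=1000)

# Five subtests, each loading on g to a different degree, plus
# test-specific noise (loadings are made up for illustration).
loadings = np.array([0.8, 0.7, 0.6, 0.5, 0.4])
noise = rng.normal(0.0, 1.0, size=(1000, 5))
scores = g[:, None] * loadings + noise * np.sqrt(1 - loadings**2)

# The factor-analysis step, approximated here by the first principal
# component of the inter-test correlation matrix.
corr = np.corrcoef(scores, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(corr)   # eigenvalues in ascending order
first = eigvecs[:, -1]                    # eigenvector of the largest eigenvalue
first *= np.sign(first.sum())             # fix the arbitrary sign

# Every subtest loads positively on the recovered factor, and a
# composite built from it tracks the true latent g.
g_hat = scores @ first
print(round(eigvals[-1] / eigvals.sum(), 2), "share of variance on the first factor")
```

As the article notes, someone higher on this common factor performs better across the whole battery, which is exactly what the positive loadings encode.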
Suddenly, g became much more than a measure of mental ability. It revealed the rising trend line in intelligence test scores. And that, in turn, suggested that something in the environment - some social or cultural force - was driving the trend. Significant intellectual breakthroughs - to paraphrase the John Lennon song - are what happen when you're busy making other plans. So it was with Flynn and his effect. He left the US in the early 1960s to teach moral philosophy at the University of Otago in New Zealand. In the late '70s, he began exploring the intellectual underpinnings of racist ideologies. "And I thought: Oh, I can do a bit about the IQ controversies," he says. "And then I saw that Arthur Jensen, a scholar of high repute, actually thought that blacks on average were genetically inferior - which was quite a shock. I should say that Jensen was beyond reproach - he's certainly not a racist. And so I thought I'd better look into this." This inquiry led to a 1980 book, Race, IQ, and Jensen, that posited an environmental - not genetic - explanation for the black-white IQ gap. After finishing the book, Flynn decided that he would look for evidence that blacks were gaining on whites as their access to education increased, and so he began studying US military records, since every incoming member of the armed forces takes an IQ test. Sure enough, he found that blacks were making modest gains on whites in intelligence tests, confirming his environmental explanation. But something else in the data caught his eye. Every decade or so, the testing companies would generate new tests and re-normalize them so that the average score was 100. To make sure that the new exams were in sync with previous ones, they'd have a batch of students take both tests. They were simply trying to confirm that someone who tested above average on the new version would perform above average on the old, and in fact the results confirmed that correlation. 
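The re-norming procedure Flynn exploited amounts to a linear rescaling: each new test's standardization sample is mapped to mean 100 and standard deviation 15. A minimal sketch with invented raw scores shows how a later cohort can average 100 on its own freshly normed test yet score well above 100 when graded on an older test's norms - the drift the testing companies published almost as an afterthought.

```python
import statistics

def norm_table(raw_scores):
    """Build an IQ scale from a standardization sample: the sample
    mean maps to 100 and each raw-score standard deviation to 15
    IQ points."""
    mu = statistics.mean(raw_scores)
    sigma = statistics.stdev(raw_scores)
    return lambda raw: 100 + 15 * (raw - mu) / sigma

# Hypothetical raw scores: a 1932 standardization sample and a later
# cohort that answers a few more items correctly on average.
sample_1932 = [38, 42, 45, 47, 50, 52, 55, 58, 61, 64]
cohort_1978 = [s + 7 for s in sample_1932]   # uniform raw-score gain

iq_1932_norms = norm_table(sample_1932)

# On its own re-normed test the later cohort averages 100 by
# construction, but on the older test's norms it scores above 100:
mean_on_old_norms = statistics.mean(iq_1932_norms(s) for s in cohort_1978)
print(round(mean_on_old_norms, 1), "on the 1932 norms")
```

The same logic run in reverse - grading the older sample on the newer norms - is why "kids always did better on the old tests."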
But the data also brought to light another pattern, one that the testing companies ignored. "Every time kids took the new and the old tests, they did better on the old ones," Flynn says. "I thought: That's weird." The testing companies had published the comparative data almost as an afterthought. "It didn't seem to strike them as interesting that the kids were always doing better on the earlier test," he says. "But I was new to the area." He sent his data to the Harvard Educational Review, which dismissed the paper for its small sample size. And so Flynn dug up every study that had ever been done in the US where the same subjects took a new and an old version of an IQ test. "And lo and behold, when you examined that huge collection of data, it revealed a 14-point gain between 1932 and 1978." According to Flynn's numbers, if someone testing in the top 18 percent the year FDR was elected were to time-travel to the middle of the Carter administration, he would score at the 50th percentile. When Flynn finally published his work in 1984, Jensen objected that Flynn's numbers were drawing on tests that reflected educational background. He predicted that the Flynn effect would disappear if one were to look at tests - like the Raven Progressive Matrices - that give a closer approximation of g, by measuring abstract reasoning and pattern recognition and eliminating language altogether. And so Flynn dutifully collected IQ data from all over the world. All of it showed dramatic increases. "The biggest of all were on Ravens," Flynn reports with a hint of glee still in his voice. The trend Flynn discovered in the mid-'80s has been investigated extensively, and there's little doubt he's right. In fact, the Flynn effect is accelerating. US test takers gained 17 IQ points between 1947 and 2001. The annual gain from 1947 through 1972 was 0.31 IQ point, but by the '90s it had crept up to 0.36. 
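Flynn's time-travel comparison is ordinary normal-curve arithmetic on an IQ scale with mean 100 and standard deviation 15. A quick standard-library check, consistent with the 14-point gain quoted above: a 1978 test taker at the 50th percentile scores 100 on 1978 norms, which corresponds to 114 on the 1932 norms, and 114 sits at roughly the top 18 percent of a normal distribution.

```python
from math import erf, sqrt

def normal_cdf(z):
    """Standard normal CDF expressed via the error function."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

MEAN, SD, FLYNN_GAIN = 100.0, 15.0, 14.0   # 14-point gain, 1932-1978

# The 1978 median (IQ 100 on 1978 norms) maps to 100 + 14 = 114
# on the older 1932 norms.
score_on_1932_norms = MEAN + FLYNN_GAIN

# Fraction of the 1932 population scoring above that:
top_fraction = 1.0 - normal_cdf((score_on_1932_norms - MEAN) / SD)
print(f"top {top_fraction:.0%} of the 1932 distribution")
```

The same arithmetic works in either direction: a 1932 scorer at the top 18 percent, shifted down 14 points, lands at the 1978 median.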
Though the Flynn effect is now widely accepted, its existence has in turn raised new questions. The most fundamental: Why are measures of intelligence going up? The phenomenon would seem to make no sense in light of the evidence that g is largely an inherited trait. We're certainly not evolving that quickly. The classic heritability research paradigm is the twin adoption study: Look at IQ scores for thousands of individuals with various forms of shared genes and environments, and hunt for correlations. This is the sort of chart you get, with 100 being a perfect match and 0 pure randomness:

The same person tested twice: 87
Identical twins raised together: 86
Identical twins raised apart: 76
Fraternal twins raised together: 55
Biological siblings: 47
Parents and children living together: 40
Parents and children living apart: 31
Adopted children living together: 0
Unrelated people living apart: 0

After analyzing these shifting ratios of shared genes and the environment for several decades, the consensus grew, in the '90s, that heritability for IQ was around 0.6 - or about 60 percent. The two most powerful indications of this are at the top and bottom of the chart: Identical twins raised in different environments have IQs almost as similar to each other as the same person tested twice, while adopted children living together - shared environment, but no shared genes - show no correlation. When you look at a chart like that, the evidence for significant heritability looks undeniable. Four years ago, Flynn and William Dickens, a Brookings Institution economist, proposed another explanation, one made apparent to them by the Flynn effect. Imagine "somebody who starts out with a tiny little physiological advantage: He's just a bit taller than his friends," Dickens says. "That person is going to be just a bit better at basketball." Thanks to this minor height advantage, he tends to enjoy pickup basketball games. 
He goes on to play in high school, where he gets excellent coaching and accumulates more experience and skill. "And that sets up a cycle that could, say, take him all the way to the NBA," Dickens says. Now imagine this person has an identical twin raised separately. He, too, will share the height advantage, and so be more likely to find his way into the same cycle. And when some imagined basketball geneticist surveys the data at the end of that cycle, he'll report that two identical twins raised apart share an off-the-charts ability at basketball. "If you did a genetic analysis, you'd say: Well, this guy had a gene that made him a better basketball player," Dickens says. "But the fact is, that gene is making him 1 percent better, and the other 99 percent is that because he's slightly taller, he got all this environmental support." And what goes for basketball goes for intelligence: Small genetic differences get picked up and magnified in the environment, resulting in dramatically enhanced skills. "The heritability studies weren't wrong," Flynn says. "We just misinterpreted them." Dickens and Flynn showed that the environment could affect heritable traits like IQ, but one mystery remained: What part of our allegedly dumbed-down environment is making us smarter? It's not schools, since the tests that measure education-driven skills haven't shown the same steady gains. It's not nutrition - general improvement in diet leveled off in most industrialized countries shortly after World War II, just as the Flynn effect was accelerating. Most cognitive scholars remain genuinely perplexed. "I find it a puzzle and don't have a compelling explanation," wrote Harvard's Steven Pinker in an email exchange. "I suspect that it's either practice at taking tests or perhaps a large number of disparate factors that add up to the linear trend." Flynn has his theories, though they're still speculative. 
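The Dickens-Flynn multiplier can be sketched as a small feedback simulation: a tiny starting edge slightly biases which environments a person lands in, and each favorable environment adds a little skill, which biases the next round further. Every number below (the 0.01 edge, the environment probabilities, the per-round boost) is invented purely to illustrate the mechanism, not taken from Dickens and Flynn's actual model.

```python
import random

random.seed(1)

def final_skill(genetic_edge, rounds=60):
    """Toy gene-environment multiplier loop: higher current skill
    makes a skill-boosting environment (better coaching, tougher
    pickup games) more likely, and each such environment adds a
    little more skill. All magnitudes are illustrative."""
    skill = genetic_edge
    for _ in range(rounds):
        p_good_environment = min(1.0, 0.1 + skill)   # slight selection bias
        if random.random() < p_good_environment:
            skill += 0.05                            # environmental boost
    return skill

# Average over many simulated careers: a 0.01 starting edge snowballs
# into a final-skill gap several times larger than the edge itself.
n = 2000
with_edge = sum(final_skill(0.01) for _ in range(n)) / n
no_edge = sum(final_skill(0.00) for _ in range(n)) / n
print(round(with_edge - no_edge, 3), "final gap from a 0.01 starting edge")
```

This is Dickens's point in miniature: a naive analysis attributes the whole final gap to the gene, when most of it was accumulated environmental support recruited by the initial 1 percent.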
"For a long time it bothered me that g was going up without an across-the-board increase in other tests," he says. If g measured general intelligence, then a long-term increase should trickle over into other subtests. "And then I realized that society has priorities. Let's say we're too cheap to hire good high school math teachers. So while we may want to improve arithmetical reasoning skills, we just don't. On the other hand, with smaller families, more leisure, and more energy to use leisure for cognitively demanding pursuits, we may improve - without realizing it - on-the-spot problem-solving, like you see with Ravens." When you take the Ravens test, you're confronted with a series of visual grids, each containing a mix of shapes that seem vaguely related to one another. Each grid contains a missing shape; to answer the implicit question posed by the test, you need to pick the correct missing shape from a selection of eight possibilities. To "solve" these puzzles, in other words, you have to scrutinize a changing set of icons, looking for unusual patterns and correlations among them. This is not the kind of thinking that happens when you read a book or have a conversation with someone or take a history exam. But it is precisely the kind of mental work you do when you, say, struggle to program a VCR or master the interface on your new cell phone. Over the last 50 years, we've had to cope with an explosion of media, technologies, and interfaces, from the TV clicker to the World Wide Web. And every new form of visual media - interactive visual media in particular - poses an implicit challenge to our brains: We have to work through the logic of the new interface, follow clues, sense relationships. Perhaps unsurprisingly, these are the very skills that the Ravens tests measure - you survey a field of visual icons and look for unusual patterns. The best example of brain-boosting media may be videogames. 
Mastering visual puzzles is the whole point of the exercise - whether it's the spatial geometry of Tetris, the engineering riddles of Myst, or the urban mapping of Grand Theft Auto. The ultimate test of the "cognitively demanding leisure" hypothesis may come in the next few years, as the generation raised on hypertext and massively complex game worlds starts taking adult IQ tests. This is a generation of kids who, in many cases, learned to puzzle through the visual patterns of graphic interfaces before they learned to read. Their fundamental intellectual powers weren't shaped only by coping with words on a page. They acquired an intuitive understanding of shapes and environments, all of them laced with patterns that can be detected if you think hard enough. Their parents may have enhanced their fluid intelligence by playing Tetris or learning the visual grammar of TV advertising. But that's child's play compared with Pokémon. Contributing editor Steven Johnson (stevenberlinjohnson at earthlink.net) is the author of Everything Bad Is Good for You: How Today's Popular Culture Is Actually Making Us Smarter. From shovland at mindspring.com Fri May 6 01:12:47 2005 From: shovland at mindspring.com (Steve Hovland) Date: Thu, 5 May 2005 18:12:47 -0700 Subject: [Paleopsych] Wired: (Flynn Effect): Dome Improvement Message-ID: <01C5519E.0B1C8BF0.shovland@mindspring.com> What kind of environmental signals optimize the expression of human DNA? Steve Hovland www.stevehovland.net -----Original Message----- From: Buck, Ross [SMTP:ross.buck at uconn.edu] Sent: Thursday, May 05, 2005 10:00 AM To: The new improved paleopsych list Subject: RE: [Paleopsych] Wired: (Flynn Effect): Dome Improvement 
_______________________________________________
paleopsych mailing list
paleopsych at paleopsych.org
http://lists.paleopsych.org/mailman/listinfo/paleopsych

From he at psychology.su.se Fri May 6 16:07:38 2005
From: he at psychology.su.se (Hannes Eisler)
Date: Fri, 6 May 2005 18:07:38 +0200
Subject: [Paleopsych] Wired: (Flynn Effect): Dome Improvement
In-Reply-To:
References:
Message-ID:

Flynn's explanation sounds plausible; it reminds me of "idiots savants." I think you can only interpret the g factor as an often misleading indication of an individual's attainable ceiling of intelligent performance, which may be hereditary. Since only a few environments empower the individual to reach the maximal possible performance, a better measure of the hereditary part might be the time, or the number of repetitions or rehearsals, necessary to achieve a certain goal, e.g., to solve a problem. This would make all tests speed tests, of course.

>Again the notion of heritability is being presented as a meaningful measure of genetic-versus-environmental influence. Most monozygotic twins are monochorionic, sharing the same choroid plexus and therefore the same blood supply in the womb. A minority are dichorionic, with identical genes but a different intrauterine blood supply. Davis, Phelps and Bracha (Schizophrenia Bulletin, 1995, 21, 357-366) investigated concordance of schizophrenia in monochorionic and dichorionic monozygotic twins, and found that while the concordance rate for MC MZ twins was 60% (i.e., if one twin is schizophrenic there is a 60% chance the other will be as well), the concordance rate of the DC MZ twins (with identical genes) was 10.7%. Environmental influences are overwhelming, and they begin at conception: the genes do nothing without environmental influences turning them on and off.
>
>The Flynn effect suggests that the vast media wasteland may actually function as a vast brain playground.
>
>Cheers, Ross
>
>Ross Buck, Ph. D.
>Professor of Communication Sciences and Psychology
>Communication Sciences U-1085
>University of Connecticut
>Storrs, CT 06269-1085
>860-486-4494
>fax 860-486-5422
>Ross.buck at uconn.edu
>http://www.coms.uconn.edu/docs/people/faculty/rbuck/index.htm
>
>-----Original Message-----
>From: paleopsych-bounces at paleopsych.org [mailto:paleopsych-bounces at paleopsych.org] On Behalf Of Premise Checker
>Sent: Thursday, May 05, 2005 12:27 PM
>To: paleopsych at paleopsych.org
>Subject: [Paleopsych] Wired: (Flynn Effect): Dome Improvement
>
>Dome Improvement
>http://www.wired.com/wired/archive/13.05/flynn_pr.html

-- 
-------------------------------------
Prof.
Hannes Eisler
Department of Psychology
Stockholm University
S-106 91 Stockholm
Sweden
e-mail: he at psychology.su.se
fax: +46-8-15 93 42
phone: +46-8-163967 (university)
       +46-8-6409982 (home)
internet: http://www.psychology.su.se/staff/he

From ross.buck at uconn.edu Fri May 6 16:13:36 2005
From: ross.buck at uconn.edu (Buck, Ross)
Date: Fri, 6 May 2005 12:13:36 -0400
Subject: [Paleopsych] Wired: (Flynn Effect): Dome Improvement
Message-ID:

All sorts of signals, including signals experienced in the womb. In infants, simple sensory stimulation is critical to turn on genetic potential. Later, signals from other human beings become critical. We bioregulate one another via emotional communication--indeed this is true of all creatures--working particularly via peptide neurohormones. Human social organization emerges from interactions involving emotional communication, literally priming the DNA to respond appropriately (or not). Environmental signals can screw up the DNA as well, of course. Deprivation of physical/social stimuli due to abuse/neglect--particularly during sensitive periods--can undermine genetic potential for a lifetime, as can signals associated with poverty, discrimination, bad education, lack of opportunity, etc.

From the Flynn effect, it appears possible that modern media optimize the expression of whatever human DNA is associated with performance on IQ-type tests. What they do for SOCIAL competence is another question. Television has been likened to Harlow's cloth-covered surrogate mothers: warm and fuzzy but basically unresponsive to the user. Newer technology is responsive to the user as well, of course, and it is noteworthy how kids eat it up (Piaget's "aliments" at work).
Cheers, Ross -----Original Message----- From: paleopsych-bounces at paleopsych.org [mailto:paleopsych-bounces at paleopsych.org] On Behalf Of Steve Hovland Sent: Thursday, May 05, 2005 9:13 PM To: 'The new improved paleopsych list' Subject: RE: [Paleopsych] Wired: (Flynn Effect): Dome Improvement What kind of environmental signals optimize the expression of human DNA? Steve Hovland www.stevehovland.net -----Original Message----- From: Buck, Ross [SMTP:ross.buck at uconn.edu] Sent: Thursday, May 05, 2005 10:00 AM To: The new improved paleopsych list Subject: RE: [Paleopsych] Wired: (Flynn Effect): Dome Improvement Again the notion of heritability is being presented as a meaningful measure of genetic-versus-environmental influence. Most monozygotic twins are monochorionic, sharing the same choroid plexus and therefore the same blood supply in the womb. A minority are dichorionic, with identical genes but a different intrauterine blood supply. Davis, Phelps and Bracha (Schizophrenia Bulletin, 1995, 21, 357-366) investigated concordance of schizophrenia in monochorionic and dichorionic monozygotic twins, and found that while the concordance rate for MC MZ twins was 60% (i.e., if one twin is schizophrenic there is a 60% chance the other will be as well), the concordance rate of the DC MZ twins (with identical genes) was 10.7%. Environmental influences are overwhelming, and they begin at conception: the genes do nothing without environmental influences turning them on and off. The Flynn effect suggests that the vast media wasteland may actually function as a vast brain playground. Cheers, Ross Ross Buck, Ph. D. 
Professor of Communication Sciences and Psychology Communication Sciences U-1085 University of Connecticut Storrs, CT 06269-1085 860-486-4494 fax 860-486-5422 Ross.buck at uconn.edu http://www.coms.uconn.edu/docs/people/faculty/rbuck/index.htm -----Original Message----- From: paleopsych-bounces at paleopsych.org [mailto:paleopsych-bounces at paleopsych.org] On Behalf Of Premise Checker Sent: Thursday, May 05, 2005 12:27 PM To: paleopsych at paleopsych.org Subject: [Paleopsych] Wired: (Flynn Effect): Dome Improvement Dome Improvement http://www.wired.com/wired/archive/13.05/flynn_pr.html First some remarks from From: Hal Finney Date: Tue, 3 May 2005 11:03:41 -0700 (PDT) To: extropy-chat at lists.extropy.org except that the article is now available: Wired magazine's new issue has an article on the Flynn Effect, which we have discussed here occasionally. This is probably my favorite Effect, so completely extropian and contradictory to the conventional wisdom. Curmudgeons throughout the ages have complained about the decay of society and how the younger generation is inferior in morals and intelligence to their elders. Likewise modern communications technology is derided: TV is a vast wasteland, video games and movies promote sex and violence. Yet Flynn discovered the astonishing and still little-known fact that intelligence scores have steadily increased for at least the past 100 years. And it's a substantial gain; people who would have been considered geniuses 100 years ago would be merely average today. Perhaps even more surprisingly, the gains cannot be directly attributed to improved education, as the greatest improvements are found in the parts of the test that directly measure abstract reasoning via visual puzzles, not concrete knowledge based on language or mathematical skills. 
The Wired article (which should be online in a few days) does not have much that is new, but one fact which popped out is that the Effect has not only continued in the last couple of generations, but is increasing. Average IQ gains were 0.31 per year in the 1950s and 60s, but by the 1990s had grown to 0.36 per year. Explanations for the Effect seem to be as numerous as people who have studied it. Flynn himself does not seem to believe that it is real, in the sense that it actually points to increased intelligence. I was amused by economist David Friedman's suggestion that it is due to the increased use of Caesarian deliveries allowing for larger head sizes! The Wired article focuses on increased visual stimulation as the catalyst, which seems plausible as part of the story. The article then predicts that the next generation, exposed since babyhood to video games with demanding puzzle solving, mapping and coordination skills, will see an even greater improvement in IQ scores. Sometimes I wonder if the social changes we saw during the 20th century may have been caused or at least promoted by greater human intelligence. It's a difficult thesis to make because you first have to overcome the conventional wisdom that says that the 1900s were a century of human depravity and violence. But if you look deeper and recognize the tremendous growth of morality and ethical sensitivity in this period (which is what makes us judge ourselves so harshly), you have to ask, maybe it is because people woke up, began to think for themselves, and weren't willing to let themselves be manipulated and influenced as in the past? If so, then this bodes well for the future. --------------now the article: Pop quiz: Why are IQ test scores rising around the globe? (Hint: Stop re ading the great authors and start playing Grand Theft Auto.) 
By Steven Johnson Twenty-three years ago, an American philosophy professor named James Flynn discovered a remarkable trend: Average IQ scores in every industrialized country on the planet had been increasing steadily for decades. Despite concerns about the dumbing-down of society - the failing schools, the garbage on TV, the decline of reading - the overall population was getting smarter. And the climb has continued, with more recent studies showing that the rate of IQ increase is accelerating. Next to global warming and Moore's law, the so-called Flynn effect may be the most revealing line on the increasingly crowded chart of modern life - and it's an especially hopeful one. We still have plenty of problems to solve, but at least there's one consolation: Our brains are getting better at problem-solving. Unless you happen to think the very notion of IQ is bunk. Anyone who has read Stephen Jay Gould's The Mismeasure of Man or Howard Gardner's work on multiple intelligences or any critique of The Bell Curve is liable to dismiss IQ as merely phrenology updated, a pseudoscience fronting for a host of racist and elitist ideologies that dare not speak their names. These critics attack IQ itself - or, more precisely, what intelligence scholar Arthur Jensen called g, a measure of underlying "general" intelligence. Psychometricians measure g by performing a factor analysis of multiple intelligence tests and extracting a pattern of correlation between the measurements. (IQ is just one yardstick.) Someone with greater general intelligence than average should perform better on a range of different tests. Unlike some skeptics, James Flynn didn't just dismiss g as statistical tap dancing. He accepted that something real was being measured, but he came to believe that it should be viewed along another axis: time. You can't just take a snapshot of g at one moment and make sense of it, Flynn says. You have to track its evolution. He did just that. 
Suddenly, g became much more than a measure of mental ability. It revealed the rising trend line in intelligence test scores. And that, in turn, suggested that something in the environment - some social or cultural force - was driving the trend. Significant intellectual breakthroughs - to paraphrase the John Lennon song - are what happen when you're busy making other plans. So it was with Flynn and his effect. He left the US in the early 1960s to teach moral philosophy at the University of Otaga in New Zealand. In the late '70s, he began exploring the intellectual underpinnings of racist ideologies. "And I thought: Oh, I can do a bit about the IQ controversies," he says. "And then I saw that Arthur Jensen, a scholar of high repute, actually thought that blacks on average were genetically inferior - which was quite a shock. I should say that Jensen was beyond reproach - he's certainly not a racist. And so I thought I'd better look into this." This inquiry led to a 1980 book, Race, IQ, and Jensen, that posited an environmental - not genetic - explanation for the black-white IQ gap. After finishing the book, Flynn decided that he would look for evidence that blacks were gaining on whites as their access to education increased, and so he began studying US military records, since every incoming member of the armed forces takes an IQ test. Sure enough, he found that blacks were making modest gains on whites in intelligence tests, confirming his environmental explanation. But something else in the data caught his eye. Every decade or so, the testing companies would generate new tests and re-normalize them so that the average score was 100. To make sure that the new exams were in sync with previous ones, they'd have a batch of students take both tests. They were simply trying to confirm that someone who tested above average on the new version would perform above average on the old, and in fact the results confirmed that correlation. 
But the data also brought to light another pattern, one that the testing companies ignored. "Every time kids took the new and the old tests, they did better on the old ones," Flynn says. "I thought: That's weird." The testing companies had published the comparative data almost as an afterthought. "It didn't seem to strike them as interesting that the kids were always doing better on the earlier test," he says. "But I was new to the area." He sent his data to the Harvard Educational Review, which dismissed the paper for its small sample size. And so Flynn dug up every study that had ever been done in the US where the same subjects took a new and an old version of an IQ test. "And lo and behold, when you examined that huge collection of data, it revealed a 14-point gain between 1932 and 1978." According to Flynn's numbers, if someone testing in the top 18 percent the year FDR was elected were to time-travel to the middle of the Carter administration, he would score at the 50th percentile. When Flynn finally published his work in 1984, Jensen objected that Flynn's numbers were drawing on tests that reflected educational background. He predicted that the Flynn effect would disappear if one were to look at tests - like the Raven Progressive Matrices - that give a closer approximation of g, by measuring abstract reasoning and pattern recognition and eliminating language altogether. And so Flynn dutifully collected IQ data from all over the world. All of it showed dramatic increases. "The biggest of all were on Ravens," Flynn reports with a hint of glee still in his voice. The trend Flynn discovered in the mid-'80s has been investigated extensively, and there's little doubt he's right. In fact, the Flynn effect is accelerating. US test takers gained 17 IQ points between 1947 and 2001. The annual gain from 1947 through 1972 was 0.31 IQ point, but by the '90s it had crept up to 0.36. 
Though the Flynn effect is now widely accepted, its existence has in turn raised new questions. The most fundamental: Why are measures of intelligence going up? The phenomenon would seem to make no sense in light of the evidence that g is largely an inherited trait. We're certainly not evolving that quickly. The classic heritability research paradigm is the twin adoption study: Look at IQ scores for thousands of individuals with various forms of shared genes and environments, and hunt for correlations. This is the sort of chart you get, with 100 being a perfect match and 0 pure randomness: The same person tested twice: 87 Identical twins raised together: 86 Identical twins raised apart: 76 Fraternal twins raised together: 55 Biological siblings: 47 Parents and children living together: 40 Parents and children living apart: 31 Adopted children living together: 0 Unrelated people living apart: 0 After analyzing these shifting ratios of shared genes and the environment for several decades, the consensus grew, in the '90s, that heritability for IQ was around 0.6 - or about 60 percent. The two most powerful indications of this are at the top and bottom of the chart: Identical twins raised in different environments have IQs almost as similar to each other as the same person tested twice, while adopted children living together - shared environment, but no shared genes - show no correlation. When you look at a chart like that, the evidence for significant heritability looks undeniable. Four years ago, Flynn and William Dickens, a Brookings Institution economist, proposed another explanation, one made apparent to them by the Flynn effect. Imagine "somebody who starts out with a tiny little physiological advantage: He's just a bit taller than his friends," Dickens says. "That person is going to be just a bit better at basketball." Thanks to this minor height advantage, he tends to enjoy pickup basketball games. 
He goes on to play in high school, where he gets excellent coaching and accumulates more experience and skill. "And that sets up a cycle that could, say, take him all the way to the NBA," Dickens says. Now imagine this person has an identical twin raised separately. He, too, will share the height advantage, and so be more likely to find his way into the same cycle. And when some imagined basketball geneticist surveys the data at the end of that cycle, he'll report that two identical twins raised apart share an off-the-charts ability at basketball. "If you did a genetic analysis, you'd say: Well, this guy had a gene that made him a better basketball player," Dickens says. "But the fact is, that gene is making him 1 percent better, and the other 99 percent is that because he's slightly taller, he got all this environmental support." And what goes for basketball goes for intelligence: Small genetic differences get picked up and magnified in the environment, resulting in dramatically enhanced skills. "The heritability studies weren't wrong," Flynn says. "We just misinterpreted them." Dickens and Flynn showed that the environment could affect heritable traits like IQ, but one mystery remained: What part of our allegedly dumbed-down environment is making us smarter? It's not schools, since the tests that measure education-driven skills haven't shown the same steady gains. It's not nutrition - general improvement in diet leveled off in most industrialized countries shortly after World War II, just as the Flynn effect was accelerating. Most cognitive scholars remain genuinely perplexed. "I find it a puzzle and don't have a compelling explanation," wrote Harvard's Steven Pinker in an email exchange. "I suspect that it's either practice at taking tests or perhaps a large number of disparate factors that add up to the linear trend." Flynn has his theories, though they're still speculative. 
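The multiplier logic in Dickens's basketball story can be made concrete with a toy model (my sketch, not the Dickens-Flynn published equations): suppose skill at any moment equals a fixed genetic edge plus an environmental return (coaching, practice, court time) proportional to current skill. The steady state solves skill = edge + feedback * skill.

```python
def equilibrium_skill(genetic_edge, feedback):
    """Fixed point of: skill = genetic_edge + feedback * skill,
    for an environmental feedback strength 0 <= feedback < 1."""
    return genetic_edge / (1.0 - feedback)

edge = 0.01                                    # a 1% physiological advantage
total = equilibrium_skill(edge, feedback=0.99) # strong environmental feedback
env_share = (total - edge) / total

print(round(total, 2))      # 1.0  -- the tiny edge is multiplied 100-fold
print(round(env_share, 2))  # 0.99 -- 99% of the final gap is environmental
```

With a feedback strength of 0.99, a 1 percent genetic edge ends up as a hundredfold advantage of which 99 percent is environmental support, matching the "1 percent better... other 99 percent" split Dickens describes.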
"For a long time it bothered me that g was going up without an across-the-board increase in other tests," he says. If g measured general intelligence, then a long-term increase should trickle over into other subtests. "And then I realized that society has priorities. Let's say we're too cheap to hire good high school math teachers. So while we may want to improve arithmetical reasoning skills, we just don't. On the other hand, with smaller families, more leisure, and more energy to use leisure for cognitively demanding pursuits, we may improve - without realizing it - on-the-spot problem-solving, like you see with Ravens." When you take the Ravens test, you're confronted with a series of visual grids, each containing a mix of shapes that seem vaguely related to one another. Each grid contains a missing shape; to answer the implicit question posed by the test, you need to pick the correct missing shape from a selection of eight possibilities. To "solve" these puzzles, in other words, you have to scrutinize a changing set of icons, looking for unusual patterns and correlations among them. This is not the kind of thinking that happens when you read a book or have a conversation with someone or take a history exam. But it is precisely the kind of mental work you do when you, say, struggle to program a VCR or master the interface on your new cell phone. Over the last 50 years, we've had to cope with an explosion of media, technologies, and interfaces, from the TV clicker to the World Wide Web. And every new form of visual media - interactive visual media in particular - poses an implicit challenge to our brains: We have to work through the logic of the new interface, follow clues, sense relationships. Perhaps unsurprisingly, these are the very skills that the Ravens tests measure - you survey a field of visual icons and look for unusual patterns. The best example of brain-boosting media may be videogames. 
Mastering visual puzzles is the whole point of the exercise - whether it's the spatial geometry of Tetris, the engineering riddles of Myst, or the urban mapping of Grand Theft Auto. The ultimate test of the "cognitively demanding leisure" hypothesis may come in the next few years, as the generation raised on hypertext and massively complex game worlds starts taking adult IQ tests. This is a generation of kids who, in many cases, learned to puzzle through the visual patterns of graphic interfaces before they learned to read. Their fundamental intellectual powers weren't shaped only by coping with words on a page. They acquired an intuitive understanding of shapes and environments, all of them laced with patterns that can be detected if you think hard enough. Their parents may have enhanced their fluid intelligence by playing Tetris or learning the visual grammar of TV advertising. But that's child's play compared with Pokemon. Contributing editor Steven Johnson (stevenberlinjohnson at earthlink.net) is the author of Everything Bad Is Good for You: How Today's Popular Culture Is Actually Making Us Smarter. _______________________________________________ paleopsych mailing list paleopsych at paleopsych.org http://lists.paleopsych.org/mailman/listinfo/paleopsych From checker at panix.com Fri May 6 21:39:36 2005 From: checker at panix.com (Premise Checker) Date: Fri, 6 May 2005 17:39:36 -0400 (EDT) Subject: [Paleopsych] SF Chronicle: Cloned pet ban rejected: Law would have been nation's first Message-ID: Cloned pet ban rejected: Law would have been nation's first http://sfgate.com/cgi-bin/article.cgi?file=/c/a/2005/05/04/CLONING.TMP - John M.
Hubbell, Chronicle Sacramento Bureau Wednesday, May 4, 2005 Sacramento -- State lawmakers Tuesday turned away a bill that could have brought a first-in-the-nation ban on pet cloning, moved less by a host of scientific and ethical arguments than by photos of wide-eyed, copy-cat kittens. The Assembly Business and Professions Committee's 4-2 vote, with four abstentions, against AB1428 by Assemblyman Lloyd Levine, D-Van Nuys, came after a brief discussion that touched on everything from free enterprise to mad science -- all triggered largely by a pioneering Bay Area firm's willingness to replicate pet owners' favorite cat or dog. That firm, Genetic Savings & Clone, has created replicas of six cats, representatives said Tuesday, and hopes to start work on dogs by December. Pictures of two dark-haired, cloned felines were shown during testimony by Lou Hawthorne, the firm's chief executive, prompting committee Chairwoman Gloria Negrete McLeod, D-Chino, to inquire of him: "So you even do tabbies?" "We do everything except calicoes," Hawthorne said, citing their genetic complexity. It was not the type of inquiry hoped for by Levine, who framed pet cloning as a needless scientific incursion in a world where millions of needy animals are euthanized each year. With the practice lacking federal or state regulation, he said, cloning could not only lead to deformities in the laboratory, but to unintended consequences in society. "What happens when people decide they want to cross their boa constrictor with their rattlesnake to get a really big poisonous snake?" he asked. "Life is more than a commodity," Levine said, "and this is where we draw the line. Just because we can doesn't mean we should." Crystal Miller-Spiegel, policy analyst with the American Anti-Vivisection Society, said pet owners should realize that "animals can't be replaced like a printer." She called Levine's legislation "not anti-science, not an animal-rights bill, and not based on emotion.
It's simply common sense." Assemblyman Paul Koretz, D-West Hollywood, queried Hawthorne on claims in a recent Genetic Savings & Clone mailer touting that it can clone an owner's "perfect" pet. "I'm wondering whether consumers are being pulled into this," Koretz said. But Hawthorne said he was "perfectly comfortable" with the advertisement. "Contractually, we guarantee only physical resemblance," he said. Hawthorne, who said his firm charged about $23,000 per cat, also touted the promise of animal cloning one day addressing the repopulation of endangered species. Christine Dillon, lobbyist for the California Veterinary Medical Association, said generations of selective breeding meant that, in all practicality, "vets have been working on genetically modified animals for years." Democratic Assemblyman Joe Nation, whose district includes Sausalito, where Hawthorne's firm is based, noted that a California ban on pet cloning would fail to prevent the practice in neighboring states. Jokingly, he pondered the scenario of a familiar state inspector intercepting cars inbound from Nevada to ask, "Do you have any fresh fruits, vegetables or cloned kittens with you?" Levine agreed cloning issues should be decided at the federal level, but likened continued inaction in California to "trying to close the barn door after the horses are already out." But the fears seemed unwarranted to Ken Press of Sacramento, who has stored the DNA of his recently deceased cat, a 12-year-old Siamese mix named Kitamus he called "an exceptional pet," with Genetic Savings & Clone. "I've considered his genetic lineage worthy of continuing," Press told the committee, adding that neutering the pet proved a mistake. "Sometimes you make a decision and later regret it."
From checker at panix.com Fri May 6 21:41:52 2005 From: checker at panix.com (Premise Checker) Date: Fri, 6 May 2005 17:41:52 -0400 (EDT) Subject: [Paleopsych] NYT: Perils of Pain Relief Often Hide in Tiny Type Message-ID: Personal Health: Perils of Pain Relief Often Hide in Tiny Type New York Times, 5.5.3 http://www.nytimes.com/2005/05/03/health/03brod.html By JANE E. BRODY If ever there was a classic case of "no free lunch," popular pain control medications are it. There's not one without a potentially serious risk. Yet, far too many people use them carelessly, without adequate attention to dosage and warnings about possible risks. For over a century, aspirin was the pain drug of choice, until data emerged on the rather large number of bleeding-related deaths this time-honored medicine caused each year. In fact, many pharmaceutical experts say that if aspirin had to go through the Food and Drug Administration's approval process today, it would never make it to market. Along came some dandy substitutes, now also sold over the counter under brand names and as generics: ibuprofen (Advil, Motrin IB) and naproxen (Aleve). Ibuprofen and naproxen, known as nonsteroidal anti-inflammatory drugs, or Nsaids, can equal or outdo aspirin's action against painful inflammation but at less risk of bleeding. But they, too, can have serious side effects: they can irritate the gastrointestinal tract and possibly cause ulcers. People who use Nsaids chronically are often told to take an antacid drug to protect their stomachs. This problem opened up a market for a new kind of drug called a cox-2 inhibitor, sold as Celebrex, Vioxx, Bextra and Mobic. These drugs are as good or better than ibuprofen for pain, although as patented prescription medications they greatly multiplied the cost of pain relief. The cox-2 inhibitors were considered safer because they reduced the risks of bleeding and gastrointestinal damage.
And as major moneymakers, they were heavily promoted, especially to the millions who need relief for chronic problems. Alas, these, too, have come under serious fire as their use mushroomed and evidence emerged linking them to heart attacks and strokes among users already at risk for these problems. With many multimillion-dollar lawsuits looming, Vioxx was the first to be withdrawn from the market, recently followed by Bextra. Both drugs may come back, accompanied by more stringent warnings. Or their cox-2 cousins, Celebrex and Mobic, may join the ranks as drugs gone by. Problems also accompany other prescription painkillers, like the opioids, to be discussed in greater detail in a future column. This brings us to an entirely different drug, acetaminophen, long used to counter fever and occasional aches and pains like tension headaches. But now acetaminophen is being hailed as an excellent first choice for the relief of chronic pain. Can Tylenol Take Over? Acetaminophen, often referred to by its most popular brand name, Tylenol, has no anti-inflammatory action. Nor does it cause bleeding or gastrointestinal distress. Many pain specialists say it should be considered first for relief for the persistent pain of osteoarthritis and prolonged pain of muscle or joint injuries. All in all, acetaminophen is a safe drug for children and adults. Despite the many millions of doses taken by Americans each year, few reports of serious side effects emerge when acetaminophen is used in the dosages recommended by manufacturers. For example, in a study published a decade ago evaluating the experience of 28,130 children who had taken acetaminophen, there was no increased risk of gastrointestinal bleeding, kidney failure, life-threatening allergic reactions or Reye's syndrome, a potentially fatal side effect of aspirin when given to children with viral infections.
Acetaminophen is also considered safe for women who are pregnant or breast-feeding, although they are wisely advised to check first with their doctors. And acetaminophen is the pain reliever of choice for those with serious allergies who may be at risk of severe allergic reactions from aspirin and Nsaids. Perhaps as a testament to its safety, acetaminophen is found, not only on its own in a variety of dosages, but also in combination with other medications, over the counter and prescription. If consumers are unaware of its presence in different medications, or if they fail to adhere to cautionary statements about dosages, it is possible to take too much acetaminophen inadvertently. As with any other medicine, with acetaminophen it is critically important to keep in mind this irrefutable adage: The dose makes the poison. For example, no one questions the safety of following recommended doses. If you can read the fine print on the label, it will tell you that for adults and for children 12 and older, two 500-milligram tablets or capsules can be taken every 4 to 6 hours, as long as no more than 8 tablets (a total of 4,000 milligrams) are taken in a 24-hour period - unless a physician says otherwise. Taking more than 4,000 milligrams a day of acetaminophen on a chronic basis can damage the liver of an adult. The danger dose would be far smaller for young children. It is easier than you may think to take more than 4,000 milligrams a day. With the higher-dose tablets (650 milligrams each) now sold to treat arthritis, you can easily exceed the safety limit if you do not follow the instructions to take 2 tablets every 8 hours, for a maximum daily dose of 6 tablets in 24 hours, adding up to 3,900 milligrams a day. Even if you follow these directions, you can exceed the recommended daily dose if you also take another medication - say, an over-the-counter cold or flu remedy - that contains acetaminophen. 
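The label arithmetic in the last few paragraphs is easy to check mechanically. A minimal sketch: the 4,000-milligram adult ceiling and both dosing regimens come from the column, while the function and names are mine.

```python
MAX_DAILY_MG_ADULT = 4000   # chronic-use ceiling for a healthy adult, per the column

def daily_total_mg(mg_per_tablet, tablets_per_dose, doses_per_day):
    """Total acetaminophen per 24 hours from one product."""
    return mg_per_tablet * tablets_per_dose * doses_per_day

regular = daily_total_mg(500, 2, 4)     # two 500 mg tablets, up to 4 doses/day
arthritis = daily_total_mg(650, 2, 3)   # two 650 mg tablets every 8 hours
print(regular)    # 4000 -- exactly at the ceiling
print(arthritis)  # 3900 -- just under it

# The trap the column describes: one 325 mg dose hidden in a cold remedy
# quietly pushes the arthritis regimen over the ceiling.
print(arthritis + 325 > MAX_DAILY_MG_ADULT)   # True
```

The point of the exercise: both regimens sit so close to the 4,000-milligram limit that any second acetaminophen-containing product exceeds it.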
The label on my Tylenol Arthritis Pain has a clearly stated warning: "Do not use with any other product containing acetaminophen." But until writing this column, I admit I never read that warning, and I'd guess that more than 90 percent of other users haven't read it either. Without a magnifying glass, many elderly people who are the most likely users of an arthritis drug would have trouble reading the labels on this and many other medicines like it. A second warning on acetaminophen says: "If you drink three or more alcoholic drinks every day, ask your doctor whether you should take acetaminophen or other pain relievers/fever reducers. Acetaminophen may cause liver damage." A Liver Under Siege So, if your liver is already under attack from alcohol, acetaminophen can be that last straw, resulting in liver failure. This year, the journal Emergency Medicine warned physicians about the hazards of overdoses of acetaminophen. Dr. Shirley Kung and Dr. Kennon Heard wrote that acetaminophen poisoning could often be much worse than it seemed at first. Nausea and vomiting can progress to complete liver failure in as little as 24 hours unless the problem is promptly recognized and the proper antidote given within 24 hours of a toxic dose. To fully prevent liver injury, the antidote should be given within eight hours. Each year, more than 100,000 calls related to acetaminophen are made to poison control centers in the United States, and about 150 acetaminophen-related deaths are reported. Some cases result from deliberate overdoses by people trying to commit suicide. But many others are accidental, like the one described in the journal: an 18-month-old child with a fever and cough for three days who had been given acetaminophen every two to four hours. Other cases result when people whose livers are damaged by other disease take acetaminophen for respiratory infections or pain. 
From checker at panix.com Fri May 6 21:42:02 2005 From: checker at panix.com (Premise Checker) Date: Fri, 6 May 2005 17:42:02 -0400 (EDT) Subject: [Paleopsych] NYT: One Family's Story: Apples to Applejack Message-ID: One Family's Story: Apples to Applejack New York Times, 5.5.4 http://www.nytimes.com/2005/05/04/dining/04lair.html [My 7-great grandfather, Samuel Forman (1662 or 1663-1740) was High Sheriff of Monmouth County when the first Lairds came over, but it was only later that the Lairds got into the liquor business. Samuel was the 5-great grandfather of both my grandparents on the Forman side, thus making them fifth cousins. I'm now a little unclear on this, but a George Forman was an accountant and went into the liquor business with a Mr. Brown, forming the Brown-Forman Company, which is one of the last independent distillers today, in the late 19th century. Previously, I had thought that Forman fled to Kentucky during the Whiskey Rebellion of 1794, which took place in what is now Pittsburgh. He then met a Mr. Brown who said, "Forman, you make mighty good moonshine; let's go into business and make it legal." Though the date is wrong by a century, there was indeed a Kentucky Forman, namely George, and it is true that my father's mother's family did hail from Kentucky before they moved to Kansas. So my father's father may have garbled the report from his wife, or his wife did the garbling herself, or maybe it was me who added to the garbling. Anyhow, the article below is a good one.] -------------- By FRANK J. PRIAL SCOBEYVILLE, N.J. -- Laird Emilie Dunn is only 7 years old, but one day history will catch up with her. Since 1698, some 12 generations of the Laird family have lived in or around this tiny Monmouth County village, making history and, yes, applejack. The family's business, Laird & Company, is the oldest commercial distillery in the United States and one of the country's oldest family businesses.
The first Laird to come to these shores, William, was a Scotsman who, his family likes to think, made Scotch whiskey back in County Fyfe and switched to apple brandy when he reached Monmouth County. Almost a century later, in 1780, a grandson of William Laird, Robert Laird, started Laird & Company; the Lairds still have his account book from that year to prove it. Nine generations of Lairds have run the company since then. Laird Emilie, the daughter of Lisa Laird Dunn, of the ninth generation since the company was started, is the only one of the 10th generation bearing the Laird name who might conceivably go into the business. The story of applejack and the history of the Lairds are intertwined. George Washington, who owned large apple orchards, wrote to the Lairds around 1760 asking for their applejack recipe. In his diary he noted on Aug. 3, 1763, that he "began selling cider." During the Revolutionary War, Washington dined with Moses Laird, an uncle of Robert, on the eve of the Battle of Monmouth. Abraham Lincoln ran a tavern in Springfield, Ill., for a time; the Lairds have a copy of his bill of fare from 1833 offering applejack at 12 cents a half pint. That's not cheap: dinner was 25 cents. Presumably Lincoln's applejack was the straight stuff. Today, the names for apple spirits are more specific. By law, applejack can refer only to a blend. "The trend has been to lighter drinks," Lisa Laird Dunn said. "Until the 1970's, our applejack was pure apple juice, fermented then distilled. Today, at 80 proof, it's a blend of about 35 percent apple brandy and 65 percent neutral grain spirits." Federal regulations also require that applejack be aged four years in used bourbon barrels. The unblended style has not been abandoned. There is Laird's 100 proof Straight Apple Brandy; Laird's 80-proof Old Apple Brandy, aged a minimum of seven and a half years, and the family's pride, Laird's 88-proof 12-Year-Old Apple Brandy, aged in charred bourbon barrels. 
Like a 20-year-old Calvados from the Pays d'Auge in Normandy, Laird's 12-year-old can take its place alongside most fine Cognacs. Seventeenth-century settlers in the Northeast turned to apples for their strong spirits because the weather and the soil were not hospitable to rye, barley and corn. Until whiskey began to flow through the Cumberland Gap in the 18th century, and rum, or molasses to make rum, arrived from the Caribbean as part of the slave trade, applejack was America's favorite spirit. By the 1670's, according to the Laird archives, almost every prosperous farm had an apple orchard whose yield went almost entirely into the making of cider. Hard cider - simple fermented apple juice - was the most abundant drink in the colonies. Much of it was made by leaving apple cider outside in winter until its water content froze and was discarded. About 20 years later, farmers began to distill the hard cider into 120-proof "cyder spirits," which soon became known as applejack. The first Laird distillery was a small affair behind the Colt's Neck Inn, a stagecoach stop between Freehold and Perth Amboy. While the inn is still there and still open, the distillery was moved to its current site, five miles away, after a fire in 1849. Originally the small plant was surrounded by apple orchards. Now most of the area is given over to horse farms and a slowly encroaching line of megamansions. "We haven't purchased an apple around here for years," Lisa Laird Dunn said. "All our apples come from the Shenandoah Valley, and they are processed in our distillery in North Garden, Va." Scobeyville is the site of the company's headquarters and its warehouses. The best apples for making applejack are small, late-ripening Winesaps, Larrie Laird said, "because they yield more alcohol." Sixteen pounds of apples produce about 25 ounces of applejack. 
Laird & Company is the nation's top producer of apple brandies and its only producer of applejack, but the company's production is relatively small, about 40,000 cases a year in all. To increase its sales, Laird imports wines and spirits from France, Italy and elsewhere and acts as a contract bottler for a variety of spirits producers. It buys spirits in bulk - bourbon, Scotch, tequila, Canadian whiskey, gin, vodka and others - and bottles them. Applejack and the apple brandies make up only about 5 percent of Laird's catalog. While Laird is the only producer of applejack, there are several other apple brandy makers, one of the most prominent being the Clear Creek distillery in Portland, Ore. Clear Creek calls its version Eau de Vie de Pomme, makes it from Golden Delicious apples and ages it eight years in French oak barrels. Here in Scobeyville, a representative of the eighth generation, Larrie, 65, currently president and chief executive, will eventually give way to a representative of the ninth, his daughter, Lisa Laird Dunn, 43, vice president of sales and marketing, and her cousin, John E. Laird III, 57, executive vice president and chief financial officer. After that, it all depends on Laird Emilie. Of course, she has a few years to think about it. From checker at panix.com Fri May 6 21:39:43 2005 From: checker at panix.com (Premise Checker) Date: Fri, 6 May 2005 17:39:43 -0400 (EDT) Subject: [Paleopsych] CHE: A glance at the spring issue of The Wilson Quarterly: The future of big cities Message-ID: A glance at the spring issue of The Wilson Quarterly: The future of big cities The Chronicle of Higher Education: Magazine & journal reader http://chronicle.com/prm/daily/2005/05/2005050301j.htm 5.5.3 Just at the time large cities seem to be most dominant, huge urban centers may be losing ground, says Joel Kotkin, a fellow at the Steven L. Newman Real Estate Institute at Bernard M.
Baruch College of the City University of New York, and a visiting lecturer in history, theory, and humanities at the Southern California Institute of Architecture. The 21st century is the first in which a majority of people live in cities, he says, but recent technological and demographic changes threaten to weaken cities. Advances in telecommunications now allow individuals and corporations to conduct business from places outside major cities. Small towns and suburban areas around cities are drawing more and more professionals and businesses out of the city centers, he says. Immigrants and young people have traditionally bolstered city populations, but many immigrants are choosing to live in outlying areas, he says, and young people who start their careers in cities now tend to move out when they start families or businesses of their own. And, in many developed countries, the younger population is dwindling because of low birth rates. Many cities have focused on tourism and entertainment to compensate for their losses, but "a busy city must be more than a construct of diversions for essentially nomadic populations," he writes. "It requires an engaged and committed citizenry with a long-term financial and familial stake in the metropolis." The needs of cities have not changed much over the millennia they have existed, he says, and "to be successful today, urban areas must resonate with the ancient fundamentals -- they must be sacred, safe, and busy." While cities no longer need to be built around temples or identified with particular gods, their citizens should share a sense of common purpose and identity, he says. Citizens also need to perceive their cities as safe, a special challenge in the face of terrorism. "In the sprawling cities of the developing world, the lack of a healthy economy and the absence of a stable political order loom as the most pressing problems," he says. 
But cities in developed countries "seem to lack a shared sense of sacred place, civic identity, or moral order," he writes. "And the study of urban history suggests that affluent cities without moral cohesion or a sense of civic identity are doomed to decadence and decline." The article, "Will Great Cities Survive?," is adapted from Mr. Kotkin's recent book, The City: A Global History (Modern Library, 2005) and is not online. Information about the journal is available at http://wwics.si.edu/index.cfm?fuseaction=wq.welcome --Kellie Bartlett From checker at panix.com Sat May 7 00:00:46 2005 From: checker at panix.com (Premise Checker) Date: Fri, 6 May 2005 20:00:46 -0400 (EDT) Subject: [Paleopsych] Meme 040: Everything I Learned in Graduate Economics Was Wrong Message-ID: Meme 040: Everything I Learned in Graduate Economics Was Wrong sent 5.5.6 I mean that the assumption that investors were both knowledgeable and rational, that the less knowledgeable and rational investors got driven out, is wrong. I'd have thought, along with Gary North (below), that investors would peer behind accounting rules and see the real figures. And I'd have thought that they would have seen that health care has been rising faster than GDP for quite a long time now and would have factored this in when estimating the true wealth of General Motors. In other words, like conspiracy theorists, I want to believe that somewhere, somehow there are competent people. There are, of course, but their competence is more sharply limited than I had hitherto thought. And now I have an explanation why businesses clamor for cheap labor through immigration, while they surely ought to know that cheap labor for one business means cheap labor for all competing businesses, with the result that there is no increase in profits in any economy with a reasonably healthy amount of competition, which is the case for the United States.
The explanation is that they are bad economists, as North shows in the case of investors. And so are politicians and liberals who clamor for an increase in the minimum wage. ---------------------- Gary North's REALITY CHECK Issue 444, 5.5.6 GENERAL MOTORS RUNS OVER THE EXPERTS DETROIT (AP) -- Standard & Poor's Ratings Services cut its corporate credit ratings to junk status for both General Motors Corp. and Ford Motor Co., a significant blow that will increase borrowing costs and limit fund-raising options for the nation's two biggest automakers. Shares of both companies fell 5 percent or more after Thursday's downgrades, and the news sent the overall market lower. "New York Times" (May 5, 2005) All of a sudden, without warning, the investment world is talking about the looming crisis at General Motors. Its pension fund obligations and health care obligations now appear to threaten the future of the company. The decline of its stock price from $55 in January, 2004, to today's $30 range has revealed a loss of confidence in the company by investors. To see this decline in action, click here: http://shurl.org/gm05 I have no objection to the experts' pessimism regarding the future of General Motors. I happen to share it, and have for years, precisely because of the pension issue. What astounds me is that investors and financial columnists have only just begun to regard the company's pension obligations as a significant factor in the future profitability of the firm. Why now? Why not in 2003 or ten years ago? The United Auto Workers' officers and GM's senior managers decided decades ago to agree to high pension and health benefits in exchange for reduced increases in wages. Health care benefits are tax-free income for workers. Even retired workers are covered. It seemed like a low-risk deal for GM. Nobody thought about the price effects on health care of Medicare. The health care market, like all markets, is a giant auction. 
If bidders get their hands on more money, they will bid up prices. All over America, workers are bidding up health care prices. So are retirees. A DISASTER CALLED OPEB Alan Sloan, a financial columnist for "Newsweek," has painted a stark picture. He begins with a description of how GM got into this pickle. Lower salaries meant that GM reported higher profits, which translated into higher stock prices -- and higher bonuses for executives. Commitments for pensions and "other post-employment benefits" -- known as OPEB in the accounting biz -- had little initial impact on GM's profit statement and didn't count as obligations on its balance sheet. So why not keep employees happy with generous benefits? It was a free lunch. Besides, GM's only major competitors at the time, Ford and Chrysler, were making similar deals. This is the free lunch mentality: something for nothing. As with all free lunches, people eat more than they normally would. The price is right! Now, as we all can see, pension and health care obligations are eating GM alive. The bill for the "free" lunch has come in -- and GM is having trouble paying the tab. In the past two years, GM has put almost $30 billion into its pension funds and a trust to cover its OPEB obligations. Yet these accounts are still a combined $54 billion underwater. Note the phrase, "as we all can see." But nobody saw it until about February, 2004. Sloan says the problem by then had been building for over half a century. GM began its slide down the slippery slope in 1950, when it began picking up costs for medical insurance, pensions and retiree benefits. There was huge risk to GM in taking on these obligations -- but that didn't show up as a cost or balance-sheet liability. By 1973, the UAW says, GM was paying the entire health insurance bill for its employees, survivors and retirees, and had agreed to "30 and out" early retirement that granted workers full pensions after 30 years on the job, regardless of age.
These problems began to surface about 15 years ago because regulators changed the accounting rules. In 1992, GM says, it took a $20 billion non-cash charge to recognize pension obligations. Evolving rules then put OPEB on the balance sheet. Now, these obligations -- call it a combined $170 billion for U.S. operations -- are fully visible. And out-of-pocket costs for health care are eating GM alive.

I report this because of the delay factor. This was all built in, Sloan says. He is correct. It is why I counselled small businessmen in the late 1970s not to set up health plans and pension plans for their employees. The legal liability was too great, I warned them. But I was almost alone in this view. Not now.

"DON'T ARGUE WITH THE MARKET!"

We are told that the stock market discounts the future rationally. This means that the best and the brightest investors use their best estimates to buy and sell. Today's prices therefore include all of the relevant information, as judged by experts who bought or sold. Any unexpected price changes must come from new information or new perceptions that had not operated before.

With respect to GM, it's "new information, no; new perception, yes." The information was there for many years. All of a sudden, investors' perception changed. Down went GM shares. Yet the basics had not changed.

By tying stock pricing theory to information, and by relegating changed perceptions to the footnotes, economic commentators can then tell us that good times are coming, that bad news will be more than offset by good news. After all, isn't the stock market rising? Anyway, it's not falling. "Don't argue against the stock market!"

Here is the reality of stock market pricing: seriously bad news is not discounted until it threatens the survival of the company. Optimism usually prevails among investors. Only toward the end of a bear market does investor perception change. With respect to pensions and health care, optimism is government policy.
The government has assured us, year after year, that "pay as you go" works just fine for Social Security and Medicare. Smart people believed the spiel. They carried the same attitude with them when they looked at GM's pension/health obligations. They refused to factor in the estimated numbers.

At the end of last year, GM says, its U.S. pension funds showed a $3 billion surplus. GM's pension accounting, which assumes that the funds will earn an average of 9 percent a year on their assets, is highly optimistic. But things are under control -- as long as GM stays solvent. By contrast, OPEB is out of control. At year-end, OPEB was $57 billion in the hole, even though GM threw $9 billion into an OPEB trust in 2004.

http://shurl.org/gmsloan

Consider these numbers in relation to GM's market capitalization of about $17 billion. The company is deeply in debt: around $300 billion. (http://shurl.org/gmdebt) It had to sell $17.6 billion in bonds in 2003 to meet its pension obligations. Yet in January, 2004, its share value peaked. Optimism still reigned supreme. The best and the brightest missed what should have been obvious.

It could happen again. Next time, it could happen to a lot more companies. The worse the news out of Medicare, the less optimistic the outlook of investors.

A MINI-WELFARE STATE?

Political columnist George Will has described the plight of GM as the common plight of the welfare state, in an article, "The Latest welfare state? It's General Motors."

Who knew? Speculation about which welfare state will be the first to buckle under the strain of the pension and medical costs of aging populations usually focuses on European nations with declining birth rates and aging populations. Who knew the first to buckle would be General Motors, with Ford not far behind? GM is a car and truck company -- for the 74th consecutive year, the world's largest -- and has revenues greater than Arizona's gross state product.
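Why the 9 percent assumption matters can be sketched with a bit of compound-interest arithmetic. The figures below (a $90 billion asset pool, a 15-year horizon, a 6 percent alternative rate) are hypothetical illustrations of my own, not GM's actual numbers; only the 9 percent assumption comes from the column.

```python
# Illustrative sketch (not GM's actual figures): how much an assumed
# pension-fund return changes projected assets over time.
def project_assets(principal_bn: float, annual_return: float, years: int) -> float:
    """Compound principal_bn (in $billions) at annual_return for years."""
    return principal_bn * (1 + annual_return) ** years

# Hypothetical $90 billion asset pool over 15 years:
optimistic = project_assets(90, 0.09, 15)   # the assumed 9 percent
cautious = project_assets(90, 0.06, 15)     # a more conservative 6 percent
shortfall = cautious - optimistic           # negative: over $100 billion missing
```

A three-point difference in the assumed return opens a gap of more than $100 billion over 15 years on these assumptions, which is why the accounting assumption, not the contributions, drives the reported surplus.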
But GM's stock price is down 45 percent from a year ago; its market capitalization is smaller than Harley Davidson's. This is partly because GM is a welfare state.

Will's angle is a nice touch. A journalist looks for a hook to snag readers, and the current discussions about the demographic train crash of the Western world's retirement and medical programs serve as a convenient hook. Statistically, it's the same problem: the bills are coming due, and there is no money set aside to pay them.

But GM is not a state. It is run by profit-seeking managers on behalf of profit-seeking investors by means of serving consumers who have a choice to buy or not to buy. Why should GM's managers and investors make the same mistake as politicians?

For politicians, it never was a mistake. It was a way to get each era's voters to hand more money over to the politicians, whose careers would end long before the demographic day of reckoning arrived. It involved hoodwinking the voters by promising them future goodies. The voters who saw through the sham could not sell their shares. There are no shares to sell. The system is compulsory.

GM's shareholders can sell, and have. The problem is, the managers at GM seem to have acted in the same short-sighted, self-interested way. So did a generation of investors in GM stock. Yet we free market advocates like to believe that things are different in free markets than in political affairs. Are we wrong?

No. But we have to understand how the system works. The problem has been building for a long time. The tax code has treated the funding of future benefits as deductible expenses to a company, but not taxable events for the employees. Labor unions saw the advantage. They could claim victories in their negotiations with management. This is true across the board, in company after company. What has been in it for senior management? Stock option profits.
It is legal for managers of American companies to reward themselves by investing workers' retirement money in corporate shares. This raises the value of managers' stock options. This is what Enron's senior managers did. It is a widespread practice.

Profit-seeking people respond to incentives. The tax code has created incentives for pension fund payments. The tax code has also provided incentives for stock options: long-term capital gains, taxed at a lower rate than salaries. Government-authorized accounting practices have added to the illusion of future wealth: assumptions regarding estimated future investment returns based on the post-1982 stock market boom era. GM expects to earn 9% per annum in its pension fund. How?

The federal government has created business in its own image with respect to pension funds. The bills are now coming due.

COST PER CAR

The cost of health care plans for GM workers is now over $5 billion a year. This is now affecting GM's ability to compete. Writes Will:

GM says health expenditures -- $1,525 per car produced; there is more health care than steel in a GM vehicle's price tag -- are one of the main reasons it lost $1.1 billion in the first quarter of 2005. But it's not just GM. Ford's profits fell 38 percent, and although Ford had forecast 2005 profits of $1.4 billion to $1.7 billion, it now probably will have a year's loss of $100 million to $200 million. All this while Toyota's sales are up 23 percent this year, and Americans are buying cars and light trucks at a rate that would produce 2005 sales almost equal to the record of 17.4 million in 2000.

Foreign auto companies are steadily eating into GM's profits. GM's market share keeps dropping. So is the market share of the other members of the Big Three. In 1962 half the cars sold in America were made by GM. Now its market share is roughly 25 percent. In 1999 the Big Three -- GM, Ford, Chrysler -- had 71 percent market share. Their share is now 58 percent and falling.
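Will's two figures can be cross-checked with back-of-the-envelope arithmetic: dividing the annual health care bill by the per-car cost implies the production base over which the per-car figure was computed. The calculation below is my own consistency check, not part of Will's column.

```python
# Back-of-the-envelope check of Will's figures (illustrative only):
# annual health care cost divided by cost per car implies a production volume.
annual_health_cost = 5_000_000_000   # "over $5 billion a year"
cost_per_car = 1_525                 # "$1,525 per car produced"

implied_vehicles = annual_health_cost / cost_per_car
print(f"Implied annual production: {implied_vehicles:,.0f} vehicles")
```

The two numbers imply a base of roughly 3.3 million vehicles, which suggests the per-car figure is spread over GM's domestic production rather than its global sales; that inference is mine, not Will's.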
Twenty-three percent of those working for auto companies in North America now work for companies other than the Big Three, up from 14.6 percent just five years ago. The number of workers employed by the Big Three has fallen by 134,000 since 2000.

Then there is the issue of who should pay for these benefits. The free market's answer is clear: consumers. Their money determines what should be produced. If consumers say, "No; your price is too high," this leaves GM's management with bills to pay and no income to pay them. When the bills come due, those receiving them start looking for other people to share the burden. The bills are coming due for GM.

GM says its health care burdens, negotiated with the United Auto Workers, put it at a $5 billion disadvantage against Toyota in the United States because Japan's government, not Japanese employers, provides almost all health care in Japan. This reasoning could produce a push by much of corporate America for the federal government to assume more health care costs. This would be done in the name of "leveling the playing field" to produce competitive "fairness."

In short, because taxpayers in Japan are required to pay for health costs of Japanese auto workers, American firms want you and me to dig a little deeper into our wallets and our futures, in the interest of fairness. It doesn't sound fair to me. I didn't sign those long-term contracts with GM's workers. I didn't lower my costs of production by making promises instead of paying higher wages.

Then there are GM's retirees: "Health care for retirees and their families -- there are 2.6 of them for every active worker -- is 69 percent of GM's health costs."

http://shurl.org/gmwill

Up, up, up go medical costs. Down, down, down go GM's profits. We think of GM as an auto company. But its auto division is small potatoes. About 80% of GM's profits come from GMAC, its in-house loan company: consumer credit and mortgages. It profited greatly during the mortgage boom.
But this source of profits has begun to taper off. Now what?

CONCLUSION

This report is about GM, insofar as GM is representative of a mindset. Managers have treated GM as a career investment vehicle. Workers have treated it as a rich uncle who will always be there with money. Investors have treated GM as if the company were not subject to the reality of long-term increases in medical care costs.

In retrospect, the experts say all of this was visible years ago. But the share price of GM indicates that nobody paid any attention until it was too late. This is why I am not impressed by economists who assure the public that Social Security/Medicare are not out of control, that there is time to maneuver. Nobody in charge ever seems to maneuver until the investment vehicle goes into a skid on an icy road in the mountains. Bad news is dismissed as irrelevant. Statistical reality is deferred by investors until they finally start unloading shares. Then there is not much that the people in charge can do to solve the problem.

If highly sophisticated investors are this naive about where their money is being invested, why should we expect politicians to tell us the truth about the looming insolvency of Social Security/Medicare?

[I am sending forth these memes, not because I agree wholeheartedly with all of them, but to impregnate females of both sexes. Ponder them and spread them.]

From checker at panix.com Sat May 7 00:01:04 2005
From: checker at panix.com (Premise Checker)
Date: Fri, 6 May 2005 20:01:04 -0400 (EDT)
Subject: [Paleopsych] Human Genetics News 5.5.6
Message-ID:

Human Genetics News 5.5.6

This is a news clipping service from Human Genetics Alert. (www.hgalert.org) The articles selected do not represent HGA's policies but are provided for information purposes. For subscription details, please refer to the end of this mail.

***********************************************************
Contents

1. Pioneering stem-cell surgery restores sight
2. Genetic Screening for Iron Disease Feasible
3. Giving its DNA code away
4. Vampire fears over DNA data
5. N.C. House members filed eugenics compensation bill
6. D.C. scientist breaks new lead on gay gene
7. Lawmakers kill proposed pet cloning ban
8. Newborn Screening Education Materials Lacking: Study
9. Spanish government accepts the assisted reproduction

***********************************************************

1. Pioneering stem-cell surgery restores sight

29 April 2005
www.timesonline.co.uk/article/0,,2-1589642,00.html
The Times
By Sam Lister, Health Correspondent

A PIONEERING form of surgery has been developed that can restore the sight of patients by using stem cells to encourage damaged eyes to repair themselves. A team of British specialists has successfully treated more than a dozen patients with impaired corneas by transplanting human stem cells grown in a laboratory on to their eyes. Recent operations on ten patients showed that the technique restored sight in seven cases of people who had been blinded after getting acid, alkali and boiling metal in their eyes, or because of congenital disorders.

Many of the patients treated at the Centre for Sight, Queen Victoria Hospital, in East Grinstead, West Sussex, had been told that they had no hope of getting their sight back, or had already undergone failed corneal transplants. The process involves taking stem cells, which occur naturally in the eye, and developing them into sheets of cells in the laboratory. These are transplanted on to the surface of the eye where they are held in place by an amniotic membrane, which dissolves away as the sheet fuses to the eye.

Sheraz Daya, an ophthalmic surgeon leading the Sussex team, which has spent five years perfecting the technique, said that doctors had been astonished at how the cells appeared to trigger the eye's natural regeneration of its damaged surface.
Tests on the patients after a year revealed no trace of the DNA of the stem-cell donor, meaning that the repair was carried out by the eye's own cells - a permanent healing process that does not require long-term use of powerful drugs to suppress the patient's immune system. Mr Daya said: "The technique not only works, but there was no donor tissue there. That is what really blew our minds. The cells appeared to have been shed from the eye and replaced by the patient's own, much more hardy, cells." The team, including scientists at the hospital's McIndoe Surgical Centre, now hopes to identify the processes at work, which might then be used to trigger the repair of other damaged tissue around the body. Details of the trial were revealed this month at an international conference of eye specialists in America. All the patients in the trial had corneas that had become damaged because they no longer had limbal stem cells, which are normally under the eyelid and help to keep the surface of the cornea clear, protecting it. Edward Bailey, who lost his sight after caustic acid landed in his left eye while he was cleaning pipes at a yoghurt factory, said that the operation had transformed his life. "It was the most emotional moment," Mr Bailey, 65, said. "I couldn't believe it. For ten years all I had seen was shades of black and grey, then after I had the operation the nurse came by and I saw a flash of blue from her uniform. I went home and when I took the patch off my eye, I had my vision back. It is only when you lose something like sight that you realise how precious it is." Nadey Hakim, a consultant surgeon at St Mary's Hospital, London, said that it was likely that such action could be mimicked in other organs, thus reducing the need for organ transplants. Professor Hakim said: "The hope is that stem cells will one day be used to generate large quantities of cells and tissues and possibly entire organs damaged by disease and injury. It is a dream." 
***********************************************************

2. Genetic Screening for Iron Disease Feasible

26 April 2005
http://www.newscientist.com/article.ns?id=dn7306
The Lancet

NEW YORK (Reuters Health) - Although genetic screening for hemochromatosis, a type of iron disease, is considered controversial, new research indicates that such screening can be successfully applied in a workplace setting with high satisfaction rates.

Hemochromatosis, which is often associated with a mutation in a gene called HFE, occurs when the body absorbs more iron than is needed from the diet. Since the body lacks a method to rid itself of iron, it accumulates in various organs, resulting in a range of symptoms as well as potentially serious complications. Patients with the disease are often required to give blood every few months to keep their iron levels down.

The controversy regarding screening stems from the fact that not everyone with an HFE mutation will go on to develop hemochromatosis. This can lead to anxiety among those who test positive and may lead to discrimination by insurers and employers. However, identifying the condition early is important to reduce iron build-up before permanent damage occurs.

As reported in the medical journal The Lancet, Dr. Katie J. Allen, from the Murdoch Children's Research Institute in Melbourne, Australia, and colleagues looked for a key HFE mutation in cheek swabs that were obtained from 11,197 adults at their workplaces. A total of 1325 subjects were heterozygous for the mutation, meaning that one of their two HFE genes was normal, while the other had the mutation. Fifty-one subjects were homozygous for the mutation, having both HFE genes with the mutation. The remaining subjects had two normal genes. Subjects homozygous for the mutation are immediately diagnosed as having hemochromatosis, whereas those who have just one abnormal gene may or may not develop the disease.
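The genotype counts reported above can be sanity-checked against Hardy-Weinberg expectations. The sketch below estimates the mutant-allele frequency from the published counts (1325 heterozygotes and 51 homozygotes among 11,197 adults); the function name is mine, and this is an illustrative calculation, not part of the study's own analysis.

```python
# Estimate the HFE mutant-allele frequency from the reported genotype counts,
# then compare observed homozygotes with the Hardy-Weinberg expectation.
def allele_frequency(het: int, hom: int, n: int) -> float:
    """Each heterozygote carries one mutant allele, each homozygote two."""
    return (het + 2 * hom) / (2 * n)

n, het, hom = 11_197, 1_325, 51
q = allele_frequency(het, hom, n)   # roughly 0.064
expected_hom = q * q * n            # Hardy-Weinberg expectation, about 45
```

The 51 observed homozygotes are close to the ~45 expected under random mating, so the sample looks roughly consistent with Hardy-Weinberg proportions for this allele.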
One month after receiving the test results, subjects homozygous for the mutation did not report increased anxiety compared with other subjects. Most importantly, nearly all of the homozygous subjects took measures to prevent or treat iron build-up. Because the authors were able to reach an agreement with the Australian insurance industry, all of the subjects who were homozygous for the mutation had their policies underwritten at standard rates. At present, an economic analysis is underway to determine if this screening approach is cost-effective, the investigators note. In a related editorial, Dr. Paul C. Adams, from the London Health Sciences Center in Ontario, Canada, comments that the current study is a "strong endorsement for the feasibility and acceptability of genetic testing for hemochromatosis in the workplace." However, he adds that "it is likely that optimum screening strategies, including no screening, will vary in different countries depending on various medical, ethical, legal, and social issues." *********************************************************** 3. Giving its DNA code away 27 April 2005 http://www.baltimoresun.com/news/health/bal-bz.celera27apr27,1,4537039.s tory?page=2&cset=true&ctrack=1&coll=bal-health-headlines Baltimore Sun By Tricia Bishop Public domain: The for-profit rival in the race to map the human genome will give its DNA sequences to a national biotechnology center. Five years ago on a summer day in the East Room of the White House, then-President Bill Clinton and Tony Blair - the British prime minister weighing in by satellite - hailed the mapping of the human genome as "the first great technological triumph of the 21st century." It was an achievement that many said would one day lead to eradication of disease and the creation of made-to-order, individualized drugs. On each side of the president were the beaming victors, ready to reap the spoils: a brash, but brilliant scientist named J. 
Craig Venter, then president of Celera Genomics Group of Rockville, and the accomplished Francis S. Collins, head of the Human Genome Project, an international consortium of academic laboratories led by the National Institutes of Health. The two factions - the first for profit, the second not - had been bitter rivals in the race to sequence human genes, egging each other forward and ultimately, diplomatically, agreeing to share worldwide credit for identifying the human recipe. Neither, however, seemed willing to give on one point of contention: whether the data belonged in the public or private domain - until yesterday. During a routine conference call to discuss quarterly earnings yesterday morning, Celera Genomics announced that after July 1 it would contribute much of its hard-earned DNA sequence data to public domain through the National Center for Biotechnology Information, a division of the National Institutes of Health. "This data just wants to be public," said a pleased Collins, who is also director of the National Human Genome Research Institute. "It's the kind of fundamental information that has no direct connection to a product, it's information that everybody wants, and it will find its way into the public." Celera Genomics, a unit of Connecticut-based Applera Corp., was unable to make a commercial success trading in the genetic information. It has spent the past three years slowly dismantling its foundation as a supplier of genetic data to instead concentrate on drug development, a transformation that will become official this summer. "This has been a very long kind of planned exit strategy from that business," Peter Dworkin, Applera's vice president for investor relations, said in an interview. "We're coming to an end of that period." 
Also coming to an official end is a contest that has raged for years, begun when Celera increased efforts to map the human genome by declaring it, too, would tackle the project, despite an eight-year head start by public laboratories. A competition The story began in 1998, when Applera created Celera Genomics to leverage technology developed by another of its holdings, Applied Biosystems. Applied Biosystems had created the means to sequence genes being used by scientists within the Human Genome Project, under way since 1990. Celera's presence turned the project into a competition, both frustrating and fruitful for the consortium scientists, who were suddenly forced to speed up their efforts and consider other possibilities. Access to the resulting information was a battleground from the start, with some opposing Celera's efforts because they feared the company would try to patent the genes and lay claim to the human gene code. Shortly before the historic joint announcement in June 2000 that the first full-length record of human DNA had been catalogued, both Clinton and Blair had argued for "unencumbered access" to the data. And Celera obliged, with a caveat: cost. Many believed there was money to be made on the data itself, selling access to it or developing drugs based on it. But it was much easier said than done, and a venture that some say is still best suited to the world of grant-funded research, which can focus on discoveries with less worry about the bottom line. Celera's get-rich plan was to sell subscriptions to the genetic information, and get "income from customers using our data to make discoveries," Venter, the company's former president, said in 2000. What he and his colleagues didn't quite seem to grasp was that their counterparts in academia had similar information, and they weren't going to charge for access to it. 
Others ran into similar situations, discovering that academics were publishing their research on the Internet, accessible to anyone with a computer and a connection. Incyte Pharmaceuticals of Delaware, for example, began life as a company that sells genomic research databases, but today - like Celera - is becoming a "leading drug discovery and development company by building a proprietary product pipeline of novel small molecule drugs," according to its Web site. Stocks soared "People don't want to pay for it if it's going to become free," said Constance Hsai, a biotechnology analyst who follows Celera Genomics for SG Cowen Securities Corp. in New York. Hsai owns five shares of the company's stock, bought years ago when she was a graduate student and stocks for companies working on the human genome map were soaring, peaking at $247 per share in March 2000. They've since fallen back to earth: Celera Genomics' stock fell 30 cents yesterday to close at $10.07 on the New York Stock Exchange. "This was all uncharted territory, we were trailblazers and pioneers in this area. ... We really helped kind of create this era of genomic science," Dworkin said. "You can't know everything when you start out." Venter resigned from the business in 2002, shortly before the company announced it would shift gears and stop marketing its genome databases. Those resources would instead go toward developing products. Applied Biosystems still makes technology others can use in interpreting the information, whether they've paid for access to it or found it free on the Web. "It's a natural evolution of genome science," said Dennis Gilbert, chief scientific officer of Applied Biosystems. "The payoff from the human genome is discoveries people will make, and that's the phase we're entering now." 
Those affiliated with the Human Genome Project say Celera's information had become outdated as well because they stopped at mapping a draft of the human genome, while the public consortium worked until 2003 to complete its data. "In many ways, the product that Celera was holding onto decreased in value," said Aristedes Patrinos, who represented the U.S. Department of Energy in the Human Genome Project. He also lent the use of his basement to the two sides in May 2000, when, over jalapeno pizza, Venter and Collins agreed to share credit. Patrinos said he believes Venter, who could not be reached yesterday, would want the information public.

Venter is busy with other enterprises these days, though. He's started his own non-profit organization - the J. Craig Venter Institute, based in Rockville - "dedicated to the advancement of the science of genomics" and understanding its societal implications. Currently, his institute is working on projects to catalog the genomic spectrum found in air, as well as the various microbes in marine and terrestrial environments. Venter has sailed around the globe collecting data.

Seeing variations

Collins said the information released by Celera to the National Center for Biotechnology Information - certain human, mouse and rat DNA sequences - will likely not do much to further assembly of genomes, though it will be useful in demonstrating how data differs in different subjects. "I give a lot of credit to [Applied Biosystems] and Celera," Collins said. "It does make sort of the battle days of what appeared to be an unpleasant race a distant memory."

***********************************************************

4.
Vampire fears over DNA data

3 May 2005
http://australianit.news.com.au/articles/0,7204,15155214%5E15321%5E%5Enbv%5E15306,00.html
The Australian
Karen Dearne

A PRIVATE DNA database project that aims to collect blood samples from 100,000 indigenous people - including Australian Aborigines - as a means of tracing ancient migration routes has reignited fears of "vampire research" and claims of biopiracy. The $US40 million ($51 million) Genographic Project, led by US population geneticist Dr Spencer Wells, will rely on massive computing power to investigate the genetic roots of modern humans. National Geographic is co-ordinating an international team of scientists to collect DNA samples and oral histories from indigenous people. IBM is contributing its Blue Gene computational biology machines and data analysis tools.

The five-year project, funded by the Waitt Family Foundation, headed by Gateway computer billionaire Ted Waitt, was immediately denounced by the US-based Indigenous Peoples Council on Biocolonialism. The group declared the Genographic Project "a clone" of the Human Genome Diversity Project that it defeated in the early 1990s. Dubbed the "vampire project", the HGDP was considered to be an "unconscionable attempt" by genetic researchers "to pirate our DNA for their own purposes". The council has called for an international boycott of IBM, National Geographic and Gateway until the project is dropped.

It's understood the matter will be discussed by the Australian Institute of Aboriginal and Torres Strait Islander Studies at its council meeting this month. Institute research director Luke Taylor said the body had only been made aware of the project through a media kit that arrived a couple of days before its Australian launch. "The kit has been referred to institute chairman Mick Dodson and we'll be evaluating it," Dr Taylor said.
"At this stage, there has been no consultation with us, and we've been given no details of what appears to be a complex and problematic study." The media kit invites participation in the public part of the project. Anyone can take part by logging on to the National Geographic website, paying a $US100 fee for a swab kit to return a saliva sample, and providing some non-identifying family information for inclusion in the database. GeneEthics Network executive director Bob Phelps said the project was being "sweetened" by public participation and access, but "it's still biopiracy". "Here we've got indigenous people who are being overrun by dominant populations, but these researchers are not advocates for them," Mr Phelps said. "It's like animals in the zoo: taking the last remnants of disappearing peoples and grabbing the material that may be of scientific or commercial use in the future. "They're making sure that their DNA doesn't disappear, instead of saying these people are of value and are entitled to survive in their own right, aside from their genetic material." The head of IBM's Computational Biology Centre, Ajay Royyuru, said the Genographic Project would not involve collection of sensitive information on individuals' medical or health status. "The information we're gathering is only about geographic location and the language they speak," Dr Royyuru said. "We aim to create a database that holds information about markers that speak of deep ancestry and the migratory routes that our ancestors have travelled. "We are deliberately not looking for health information. We will not be gathering data that is medically relevant." All research outcomes would be published and the entire database would become a public resource at the project's completion, he said. "We recognise that the data we're gathering is perhaps the most personal information. It is you, your genome, which is unique to you," he said. 
"We've found that if we are up-front about what we will do and what we will not do, if we tell people what the project's about, they're actually delighted to participate. If the project succeeds, it will be because enough people on the planet understand that when we share this data, we'll be able to interpret it.

"We can only discover the story of our migratory history when we put all the details together and look at the correlations."

Australian Law Reform Commission acting president Brian Opeskin said there were considerable medical and cultural sensitivities about the creation of genetic databases, and particularly commercial use of the data. An ALRC inquiry on the protection of human genetic information in 2003 recommended strengthening existing measures.

"No one would really doubt the value of many of these databases, particularly for medical research, but questions arise when there are conflicts of interest and profit-taking by those involved in collecting the data," Mr Opeskin said. "Every few months there's some new use for genetic information.

"What happens when researchers collect DNA data for one purpose, and then want to make different uses of it later?

"If people are well-informed they are often quite happy to be altruistic in contributing their samples, but discovering later that the material is being used for commercial purposes often causes a lot of grief."

***********************************************************

5. N.C. House members filed eugenics compensation bill

5 May 2005
www.tuscaloosanews.com/apps/pbcs.dll/article?AID=/20050505/APN/505051138&cachetime=3&template=dateline
Tuscaloosa News, Alabama (Associated Press)

North Carolina would give each victim of eugenics sterilization in the state $20,000 in compensation if a measure filed Thursday in the state House became law. The bill, filed by several Democrats, would set aside $69.1 million in a special fund to cover claims filed before mid-2009.
About 7,600 people were sterilized under North Carolina's program, which ordered the operations from 1929 through 1974. Many of them were sterilized against their will, and the program was the third-largest in the nation, after California and Virginia. Some researchers say about 3,400 victims are still alive in North Carolina. Most of the victims were poor women who were often talked into sterilization by social workers. Inaccurate labels of "feeble-mindedness" were often used as justification based on eugenics, the movement to solve social problems by preventing the "unfit" from having children. Gov. Mike Easley apologized for the program in 2002 after the Winston-Salem Journal ran a series of articles exposing the abuses that took place. Legislators also repealed the state's old sterilization law, but the state has not offered any tangible form of compensation. A commission that Easley appointed in 2003 recommended that the state at least provide health and education benefits for sterilization victims. Those benefits haven't been approved. The bill would order the Department of Health and Human Services to determine whether each claim was valid. Compensation for a person who files a claim but dies before receiving the money would be forwarded to a descendant's estate. House Speaker Jim Black, D-Mecklenburg, said recently he wants legal issues thoroughly researched on the compensation idea. Senate leader Marc Basnight, D-Dare, hasn't taken a position yet on the idea. Some fear that the requested reparations would set a precedent for other types of victims. *********************************************************** 6. D.C. 
scientist breaks new lead on gay gene 6 May 2005 www.washblade.com/2005/5-6/news/localnews/dcdcience.cfm Washington Blade By Eartha Melzer NIH study builds on genetic theory of sexual orientation A recent study by researchers at the National Institutes of Health has added to the body of knowledge on the relationships between genes and sexual orientation, according to a recent issue of Human Genetics. Although the research was concluded two years ago, the small number of people working on the issue resulted in a two-year delay before the research was published, lead investigator and D.C. resident Dr. Dean Hamer said this week. The investigation builds on studies that have suggested that there tend to be clusters of gays within a family. In 1993, a group of researchers under the direction of Hamer, who was also a researcher on the recent study, examined DNA from gay men and their family members and found that gay men within a family share a segment of DNA on the X chromosome, which men inherit only from their mothers. "This told us that genes play a role," said Brian Mustanski, one of the researchers on the genome scan. "But it doesn't tell us where the genes are or what they do." To develop a more precise picture of what genes might be involved in sexual orientation, researchers examined the genes of 456 individuals from 146 unrelated families - 137 families with two gay brothers and 9 families with three gay brothers. Researchers reasoned that brothers are expected to share an average of 50 percent of their genes but that genes that influence sexual orientation would be shared more than 50 percent of the time by gay brothers. Mustanski compared the process of scanning gay brothers for sexual orientation-related genes to looking for doctors in a town of 40,000 people, a number that corresponds to the number of human genes. 
"You could take a guess that [a doctor] probably lives in a six bedroom brick house - and only go to a few houses that meet this criteria," Mustanski said. "Alternatively, you could go to every street in the town and knock on one door in the neighborhood and ask them if a doctor lives on their street. We used this second approach and narrowed it down to a few streets that are likely to have a doctor on them. When we say 'chromosomal regions,' it is akin to the street. The next step is to discover which specific gene within these newly discovered chromosomal regions is related to sexual orientation," he said. The researchers placed 403 markers across the genome. This strategy revealed three chromosomal areas that are shared by the gay brothers around 60 percent of the time. This frequency of shared markers is not a "significant link," according to Mustanski, but it does rise to the level of a "suggestive link." Mustanski said that the idea that these chromosomal regions are related to sexual orientation is very compelling because the areas identified through the scan are known to contain genes involved in sexual orientation. "I think it's important because it reinforces the theory that sexual orientation is at least partially genetic and that there are many different genes, not just one or two," Hamer said. "I think it is important knowledge because homophobes often argue that sexual orientation is a choice, which simply isn't true. It is important to have concrete data showing that it is not simply a choice." Research into genetic aspects of homosexuality is controversial. Hamer said that the effect of politics on science can be seen in the fact that there have only been five papers on the subject in 10 years. "In 1994 our lab discovered a gene involved in anxiety, and there have been 850 papers on that," Hamer said. 
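The sharing arithmetic behind Mustanski's "suggestive link" can be sketched numerically. The following is a minimal illustration, not the study's actual analysis (the real work used multipoint linkage statistics): under the null hypothesis, a pair of brothers shares a given marker allele about 50 percent of the time, so a simple one-sided binomial tail shows how roughly 60 percent sharing across 146 families starts to look non-random. The counts here are hypothetical round numbers for illustration.

```python
from math import comb

def sharing_p_value(n_pairs, n_shared, p_null=0.5):
    """One-sided binomial tail: the probability of observing at least
    n_shared sharing events among n_pairs sibling pairs if sharing
    were purely random (50 percent under the null)."""
    return sum(comb(n_pairs, k) * p_null**k * (1 - p_null)**(n_pairs - k)
               for k in range(n_shared, n_pairs + 1))

# Illustrative numbers only: 146 families, ~60 percent sharing at one marker.
n = 146
shared = round(0.60 * n)  # 88 of 146 families sharing the marker
p = sharing_p_value(n, shared)
print(f"{shared}/{n} families sharing, one-sided p = {p:.4f}")
```

With these made-up counts the tail probability is small but, because hundreds of markers are tested genome-wide, a single marker at this level counts only as "suggestive" rather than "significant" linkage, which matches the distinction Mustanski draws.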
The Council for Responsible Genetics, a 21-year-old Cambridge, Mass.-based group founded by scientists to educate the public on genetics issues, has issued a position paper on the hunt for the genetic basis of sexual orientation. *********************************************************** 7. Lawmakers kill proposed pet cloning ban 4 May 2005 www.reuters.com/newsArticle.jhtml?type=oddlyEnoughNews&storyID=8388222 Reuters SACRAMENTO, Calif. (Reuters) - California lawmakers rejected a proposal on Tuesday that would have banned sales of cloned pets, a measure aimed at a San Francisco-area company's bid to replicate beloved family animals for profit. Proponents of the measure argued for the pet-clone ban because the technology was unregulated and animal shelters were already filled to capacity with potential pets. The proposed ban came after the first sale of a cloned pet last year by Sausalito, California-based Genetic Savings & Clone Inc. The company revealed in December it had cloned a cat -- named Little Nicky after its progenitor, Nicky -- for a client in Texas for $50,000. The privately held company financed by billionaire John Sperling has said it has other cat clones in various stages of production and is developing a dog-cloning service. A California State Assembly committee rejected the bill after lawmakers raised concern a ban on cloned pets was premature because of uncertainties surrounding the future of the technology. "I believe that the bill is a good candidate for a serious study by experts," said Democrat Gloria Negrete-McLeod. Defeat of the measure came as California moves to set up its own $3 billion publicly financed stem cell research program, the largest pool of public funding in the United States. The pet cloning ban would not have extended to stem cell research, according to its sponsor. *********************************************************** 8. 
Newborn Screening Education Materials Lacking: Study 4 May 2005 www.drkoop.com/newsdetail/93/525442.html DrKoop.com (from HealthDay Reporter) By Serena Gordon, HealthDay Reporter Parents aren't getting enough or the right kind of information, researchers say Parents aren't getting enough information about the genetic screening tests performed on their newborns. That's the conclusion of a study appearing in the May issue of Pediatrics that found the materials explaining newborn screening tests varied significantly from state to state, and none of the materials contained all of the information recommended by the American Academy of Pediatrics (AAP). ******************************************************** 9. Spanish government approves assisted reproduction bill 6 May 2005 www.eitb24.com/noticia_en.php?id=58549 EITB24 - Basque News and Information Channel The measure, which permits genetic selection for therapeutic purposes, keeps surrogate motherhood forbidden but sets no age limit for artificial insemination. The council of ministers is expected to approve the draft bill on human assisted reproduction techniques. The law bans human cloning for reproductive purposes. Another novelty would be the creation of a National Donor Registry, and a registry gathering the activities of assisted reproduction centres for clients. Control of assisted reproduction techniques The law is expected to come into force in 2006. Its objective is to regulate the application of assisted reproduction techniques so that couples with fertility problems can have children. These techniques could also treat and prevent diseases. The new law will prohibit human cloning for reproductive purposes, as the European Constitution does. 
Regarding therapeutic cloning, the ministry is preparing the Biomedical Research Law. If a couple wants a child with a strong immune system that could cure a sibling, that law could cover it. Families are expected no longer to need to travel abroad, as has occurred until now. *********************************************************** To Subscribe / Unsubscribe to HGA's News clipping service, contact us via e-mail on info at hgalert.org with 'Subscribe' in the subject field or 'Unsubscribe' to be taken off our mailing list. Human Genetics Alert 22-24 Highbury Grove 127 Aberdeen House London N5 2EA TEL: +44 20 77046100 FAX: +44 20 73598423 From HowlBloom at aol.com Sat May 7 00:01:19 2005 From: HowlBloom at aol.com (HowlBloom at aol.com) Date: Fri, 6 May 2005 20:01:19 EDT Subject: [Paleopsych] why I need you and you need me Message-ID: <12a.5cd6ac8d.2fad5f4f@aol.com> Thanks to your input and to the energy you give me I've been mapping out a theory of the extracranial extensions of the self. One part of that theory says that when I get upset about a fight with my wife, I need to run to you and blurt out my tale. Why? On the surface, in order to calm myself down. But there's another reason. Groups with the nimblest collective intelligence outcompete groups with lame collective brains. When I have trouble in my trek through tough emotional terrain--like the terrain of a relationship--I bring my report on that problem to a friend, to you. You calm me down. In the process you follow Alice in Wonderland's rule: "How do I know what I'm thinking until I hear what I have to say?" You think out solutions that are useful to you and are useful to me. In fact, you wonder, when you've finished delivering your wisdom, why you could do this miraculous problem solving for me but couldn't do it for yourself. My problem and your solution, if we're all very lucky, can do something remarkable. 
It can become a metaphor that helps us understand other relations that ride on shifting sands--from understanding how particles behave or how America has to deal with our Chinese trade deficit to understanding what a business needs to do next or to puzzling out the patterns of signals we get from a probe on the moon of a distant planet. If the tale of what I've been through makes for a really good story, and if your solution to my problem is a triumph, too, you get excited. What happens to us humans when we're excited? We need to share the excitement with someone else. We need to blurt, to vent, and to brag. So you, having helped me, call your spouse or a friend and send the tale of my dilemma and your solution out on the seas of the grapevine, out on the seas of gossip, out on the sea of collective information processing, collective intelligence, and collective memory. In the process you and I help the groups and subgroups we belong to get smart. If we lived in a culture that forbade this sort of confession, this constant conversation about intimacy, we'd be a lot dumber. Which may explain why I no longer want to write What the Nuclear Knights of Islam Want From You: The Osama Code. Reading books on the history of Islam's founding fathers, the Companions of the Prophet, has worn me out. How? I'm still trying to define it, but these books dry out my brain. They stop me from thinking. There's no introspective depth. It is very, very hard to kill my curiosity, but the aridness of these Islamic source books has managed to do it. Is that because the culture within which these books have been written is deprived of the cross-talk that takes place when we Westernizers run into problems--especially problems that whack us with the whips and paddles of confusion and insecurity? 
Meanwhile, my limbic system--and probably yours--needs to resolve its problems with my cortical consciousness not by sending a signal a mere four inches or so through the brain, but by going the thousands of miles it takes to get to you. Then you explain me to my self--you complete a loop from the turmoil of my emotional brain, my limbic system, to the somewhat semi-calm of my talking, thinking, and writing brain, my left frontal and pre-frontal cortex. If I read Jeff Hawkins right, he says that this sort of loop creates a memory that allows us to see the patterns of the immediate past and use those patterns to predict the future. And memory of this sort is a vital part of collective intelligence. Here's Hawkins's quote (once again), on auto-associative memories in neural nets. See if you think it applies: "Instead of only passing information forward...auto-associative memories fed the output of each neuron back into the input... When a pattern of activity was imposed on the artificial neurons, they formed a memory of this pattern. ...To retrieve a pattern stored in such a memory, you must provide the pattern you want to retrieve. ...The most important property is that you don't have to have the entire pattern you want to retrieve in order to retrieve it. You might have only part of the pattern, or you might have a somewhat messed-up pattern. The auto-associative memory can retrieve the correct pattern, as it was originally stored, even though you start with a messy version of it. It would be like going to the grocer with half-eaten brown bananas and getting whole green bananas in return. ...Second, unlike most neural networks, an auto-associative memory can be designed to store sequences of patterns, or temporal patterns. This feature is accomplished by adding time delay to the feedback. ...I might feed in the first few notes of 'Twinkle, Twinkle Little Star' and the memory returns the whole song. 
When presented with part of the sequence, the memory can recall the rest." Jeff Hawkins, Sandra Blakeslee. On Intelligence. New York: Times Books, 2004: pp 46-47. ---------- Howard Bloom Author of The Lucifer Principle: A Scientific Expedition Into the Forces of History and Global Brain: The Evolution of Mass Mind From The Big Bang to the 21st Century Visiting Scholar-Graduate Psychology Department, New York University; Core Faculty Member, The Graduate Institute www.howardbloom.net www.bigbangtango.net Founder: International Paleopsychology Project; founding board member: Epic of Evolution Society; founding board member, The Darwin Project; founder: The Big Bang Tango Media Lab; member: New York Academy of Sciences, American Association for the Advancement of Science, American Psychological Society, Academy of Political Science, Human Behavior and Evolution Society, International Society for Human Ethology; advisory board member: Youthactivism.org; executive editor -- New Paradigm book series. For information on The International Paleopsychology Project, see: www.paleopsych.org for two chapters from The Lucifer Principle: A Scientific Expedition Into the Forces of History, see www.howardbloom.net/lucifer For information on Global Brain: The Evolution of Mass Mind from the Big Bang to the 21st Century, see www.howardbloom.net -------------- next part -------------- An HTML attachment was scrubbed... URL: From checker at panix.com Sat May 7 00:02:38 2005 From: checker at panix.com (Premise Checker) Date: Fri, 6 May 2005 20:02:38 -0400 (EDT) Subject: [Paleopsych] Science: A Heavyweight Battle over CDC's Obesity Forecasts Message-ID: A Heavyweight Battle over CDC's Obesity Forecasts Science, Vol 308, Issue 5723, 770-771 , 6 May 2005 Jennifer Couzin How many people does obesity kill? 
That question has turned into a headache for the Centers for Disease Control and Prevention (CDC) in Atlanta, Georgia: In the past year, its scientists have published dueling papers with conflicting estimates on obesity-associated deaths--the first three times greater than the second. The disagreement, some fear, is undermining the agency's health warnings. The bidding on obesity's annual death toll started at a staggering 400,000--the number cited in a CDC paper co-authored by CDC chief Julie Gerberding in 2004. But dissent prompted an internal inquiry, and CDC decided this year to lower the number to 365,000. That was still too high for some CDC analysts, who together with colleagues at the National Cancer Institute (NCI) in Bethesda, Maryland, published a new figure on 20 April--112,000 deaths. The low estimate is spawning other problems, though. A food-industry interest group is touting it as evidence that obesity is not so risky. Even researchers who favor the low number worry that it will lead to complacency. After trumpeting the highest estimate a year ago and warning that obesity deaths were poised to overtake those caused by tobacco, CDC officials now say that numbers are unimportant. The real message should be that "obesity can be deadly," says George Mensah, acting director of CDC's National Center for Chronic Disease Prevention and Health Promotion. "We really add to the confusion by sticking to one number." But some of CDC's own scientists disagree. "It's hard to argue that death is not an important public health statistic," says David Williamson, an epidemiologist in CDC's diabetes division and an author on the paper with the 112,000 deaths estimate. Calculating whether obesity leads directly to an individual's demise is a messy proposition. To do so, researchers normally determine by how much obesity increases the death rate and what proportion of the population is obese. 
Then they apply that to the number of deaths in a given time, revealing excess deaths due to obesity. Both studies use that approach, but methodological differences produced big disparities between the two papers--one by epidemiologist Ali Mokdad, Gerberding, and their CDC colleagues, published in the Journal of the American Medical Association (JAMA) on 10 March 2004, and the new estimate by CDC epidemiologist Katherine Flegal and colleagues at CDC and NCI, published in JAMA on 20 April. Both relied on data about individuals' weight and other measures from the National Health and Nutrition Examination Survey (NHANES), which has monitored the U.S. population since the 1970s. The Mokdad group used the oldest, NHANES I. Flegal's group also used two more recent NHANES data sets from the 1980s and 1990s. Her method found fewer obesity-associated deaths--suggesting that although obesity is rising, some factor, such as improved health care, is reducing deaths. Other variations in methodology proved crucial. For example, the two groups differed in their choice of what constitutes normal weight, which forms the baseline for comparisons. Flegal's team adopted the definition favored by the National Institutes of Health and the World Health Organization, a body mass index (BMI) between 18.5 and less than 25. The Mokdad team chose a BMI of 23 to less than 25; this changed the baseline risk of death, and with it, deaths linked to obesity. In their paper, the Mokdad authors said they selected that narrower, heavier range because they were trying to update a landmark 1999 JAMA paper on obesity led by biostatistician David Allison of the University of Alabama, Birmingham, and chose to follow Allison's methodology. (CDC spokesperson John Mader said that Mokdad and his co-authors were not available to be interviewed.) "There's no right answer" to which BMI range should be the "normal" category, says Allison. 
He felt his choice was more "realistic," and that expecting Americans to strive for even lower BMIs might be asking too much. But that relatively small difference in BMI had a big effect on the estimates: Had Flegal's team gone with the 23-to-25 range, she reported, the 112,000 deaths estimate would have jumped to 165,000. The scientists also diverged sharply in how they tackled age. It's known that older individuals are less at risk and may even benefit from being heavier: A cushion of fat can keep weight from falling too low during illness. And young obese people tend to develop more severe health problems, says David Ludwig, director of the obesity program at Children's Hospital in Boston. Flegal's group took all this into account by assigning risks from obesity to different age groups. Stratifying by age meant that when Flegal turned to actual death data--all deaths from the year 2000--she was less likely to count deaths in older age groups as obesity-related. Allison concedes that in retrospect, his decision not to stratify by age was a mistake. And it had a big impact on the estimates. "Very minor differences in assumption lead to huge differences in the number of obesity-induced deaths," says S. Jay Olshansky, a biodemographer at the University of Illinois, Chicago. Olshansky, Allison, and Ludwig published their own provocative obesity paper in The New England Journal of Medicine in March. It argued that U.S. life expectancy could begin decreasing as today's obese children grow up and develop obesity-induced diseases, such as diabetes and heart disease (Science, 18 March, p. 1716). But Olshansky now says that in light of Flegal's recent paper on obesity deaths and a companion paper that she, Williamson, and other CDC scientists authored in the same issue of JAMA, his life expectancy forecasts might be inaccurate. The companion paper, led by CDC's Edward Gregg, examined how much cardiovascular disease was being driven by obesity. 
The findings were drawn from five surveys, most of them NHANES, beginning in 1960 and ending in 2000, and they dovetailed with the conclusions in Flegal's 112,000 deaths paper. All heart disease risk factors except diabetes were less likely to show up in heavy individuals in recent surveys than in older ones. That suggests, says Allison, that "we've developed all these great ways to treat heart disease" such as by controlling cholesterol. This could also explain, he and others say, why NHANES I led to much higher estimates of obesity-associated deaths than did NHANES I, II, and III combined. Although obesity rates are rising, obesity-associated deaths are dropping. Ludwig disagrees that this trend will necessarily continue or that Gregg's paper disproves the one he co-authored with Olshansky. Type 2 diabetes, which is becoming more common in youngsters, "starts the clock ticking towards life-threatening complications," he notes. Olshansky is uncomfortable with the kind of attention Flegal's 112,000 estimate is getting. "It's being portrayed," he says, as if "it's OK to be obese because we can treat it better." In fact, one of Flegal's conclusions that sparked much interest--that being overweight, with a BMI of 25 to 30, slightly reduced mortality risk--had been suggested in the past. Certainly, food-industry groups are thrilled by Flegal's work. "The singular focus on weight has been misguided," says Dan Mindus, a senior analyst with the Center for Consumer Freedom, a Washington, D.C.-based nonprofit supported by food companies and restaurants. Since Flegal's paper appeared, the center has spent $600,000 on newspaper and other ads declaring obesity to be "hype"; it plans to blanket the Washington, D.C., subway system with its ad campaign. Some say that CDC needs to choose one number of deaths and stand behind it. 
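The excess-deaths arithmetic described earlier, and its sensitivity to exactly these modeling choices, can be sketched with a toy calculation. This uses the standard population-attributable-fraction formula with entirely hypothetical inputs (rounded national death and prevalence figures, invented relative risks), not either team's data or method; the point is only how a modest shift in the assumed mortality risk of obesity swings the headline number:

```python
def excess_deaths(total_deaths, prevalence, relative_risk):
    """Excess deaths via the population attributable fraction:
    PAF = p*(RR - 1) / (1 + p*(RR - 1)), applied to all deaths."""
    x = prevalence * (relative_risk - 1)
    return total_deaths * x / (1 + x)

# Hypothetical inputs for illustration only (not either paper's data):
deaths_2000 = 2_400_000   # rough count of annual U.S. deaths
obesity_prev = 0.30       # assumed fraction of adults who are obese

low = excess_deaths(deaths_2000, obesity_prev, 1.25)   # modest assumed risk
high = excess_deaths(deaths_2000, obesity_prev, 1.60)  # risk vs. a leaner baseline
print(f"{low:,.0f} vs {high:,.0f} excess deaths")
```

With these made-up numbers, nudging the assumed relative risk from 1.25 to 1.60 moves the estimate from roughly 167,000 to 366,000 deaths, a gap of the same order as the one separating the Flegal and Mokdad figures, which is why the choice of baseline BMI range and age stratification matters so much.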
"You don't just put random numbers into the literature," says antitobacco activist and heart disease expert Stanton Glantz of the University of California, San Francisco, who disputed the Mokdad findings. Scientists agree that Flegal's study is superior, but it may also be distracting, suggests Beverly Rockhill, an epidemiologist at the University of North Carolina, Chapel Hill. Even if obese individuals' risk of death has been overplayed in the past, she says, we ought to ask: "Are they living a sicker life?" From checker at panix.com Sat May 7 00:02:57 2005 From: checker at panix.com (Premise Checker) Date: Fri, 6 May 2005 20:02:57 -0400 (EDT) Subject: [Paleopsych] Joel Garreau on "Radical Evolution," May 19th in NYC Message-ID: Joel Garreau on "Radical Evolution," May 19th in NYC GBN Presents... Joel Garreau speaking on Radical Evolution May 19th, 5:30 pm to 7:30 pm CUNY Graduate Center, Skylight Room (9th Floor) 365 Fifth Avenue New York, NY 10016 Please join GBN and Network member Joel Garreau for a compelling look at the dramatic acceleration of change that is literally transforming human nature. In his new book, Radical Evolution, Joel shows that we are at an inflection point in history. Through advances in genetic, robotic, information, and nano-technologies, we are engineering the next stage of human evolution: altering our minds, memories, metabolisms, personalities, progeny, and perhaps our very souls. After spending two years behind the scenes with today's foremost researchers and pioneers, Joel's amazing tales reveal that the superpowers of our comic-book heroes already exist, or are in development in hospitals, labs, and research facilities around the country--from the revved up reflexes and speed of Spider-Man and Superman to the enhanced mental acuity and memory capabilities of an advanced species. Over the next 15 years, these enhancements will become part of our everyday lives. But where will they lead us? 
One scenario is "Heaven," where technologies promise to make us smarter, vanquish illness, and extend our lives. But there are other scenarios, including "Hell," where unrestrained technology brings about the ultimate destruction of our entire species. To help us understand the possibilities, Joel taps the insights of many gifted thinkers and scientists who are making what has previously been thought of as science fiction a reality. Among these fellow travelers are Bill Joy, Ray Kurzweil, and Jaron Lanier, each of whom offers radically different views of the developments that will, in our lifetime, affect everything from the way we date to the way we work, from how we think and act to how we fall in love. As Joel cautions, it is only by anticipating the future that we can hope to shape it. Joel is the best-selling author of The Nine Nations of North America and Edge City and a reporter and editor for the Washington Post. For GBNers this is a special treat: we have enjoyed regular interviews with Joel throughout his research as he struggled to make sense of what he was experiencing on the remarkable journey that became Radical Evolution. RSVP to Jeanne Scheppach at Jeanne_Scheppach at gbn.com. http://www.gbn.com/EventInformationDisplayServlet.srv?eid=26807 From checker at panix.com Sat May 7 00:04:13 2005 From: checker at panix.com (Premise Checker) Date: Fri, 6 May 2005 20:04:13 -0400 (EDT) Subject: [Paleopsych] In the Mideast, ask the right question Message-ID: In the Mideast, ask the right question The International Herald Tribune, 5.5.5. http://www.iht.com/bin/print_ipub.php?file=/articles/2005/05/04/opinion/edsiegman.php Henry Siegman International Herald Tribune PARIS The window of opportunity widely believed to have been opened by Prime Minister Ariel Sharon's decision to withdraw Israeli settlements from Gaza, and by the election of Mahmoud Abbas as head of the Palestinian Authority, has prompted a debate in U.S. policy circles. 
The question is whether President George W. Bush is moving quickly enough to prevent the spreading Israeli settlement enterprise in the West Bank from foreclosing the possibility of the emergence of a Palestinian state. Politically speaking, whether a viable Palestinian state is still possible is the wrong question, if only because by now it should be clear that Bush will not take the political risks entailed in ensuring the creation of such a state in the face of Sharon's determination to prevent it. The right question - the answer to which perhaps may yet invest the peace process with the energy and direction it now lacks - is whether there is still hope for the survival of Israel as a Jewish state. For it is the Jewish state, far more so than a state for the Palestinian people, that is now threatened and in doubt. Whatever uncertainties exist about a Palestinian state, what is certain, even after Israel's disengagement from Gaza, is that it is only a matter of time before Arabs will constitute a majority of the population between the Mediterranean Sea and the Jordan River. When this happens, Israel will cease to be a Jewish state, both formally and in fact - unless it herds the majority Arab population into enclosed bantustans, and turns into an apartheid state. It is a supreme irony that only a Palestinian state can assure the survival of Israel as a Jewish state. However as Sharon's settlement project continues and intensifies in the West Bank - not despite but because of the Gaza disengagement - and relentlessly diminishes and fragments the West Bank, Palestinians will sooner or later abandon a two-state solution and pursue the political logic of their own demography instead. Palestinians will not settle for less than a state that is fully within the pre-1967 borders. 
Having already yielded to Israel half the territory acknowledged by the United Nations in its partition resolution of 1947 as their legitimate patrimony, Palestinians will not consent to additional Israeli annexations of the remaining 22 percent of Palestine, except in swaps for comparable territory on Israel's side of the border. The capital of this Palestinian state, moreover, will have to be located in East Jerusalem. The chances of a Palestinian leader signing a peace accord that shuts Palestinians out of any part of Jerusalem are about as great as an Israeli leader signing a peace agreement that grants Palestinian refugees a "right of return" to Israel. Indeed, Palestinian agreement to a formula that redirects the refugees' right of return from Israel to a Palestinian state is entirely dependent on compromises in Israel's present position on territory and Jerusalem. These difficult concessions by an Israeli government are conceivable only if it finally tells its citizens the truth - that only if a viable and successful Palestinian state comes into being alongside Israel can its Jews avoid being turned into a minority in their own state. Those in Israel who believe that the world - including Israel's great friend and ally, the United States - will abide a Jewish apartheid regime that permanently disenfranchises and dominates by force of arms an Arab majority, or allow Israel to ethnically cleanse much of the West Bank through repressive economic and "security" measures, are deluding themselves. Unfortunately, some political parties in Israel call for such thinly disguised ethnic cleansing, and yet are seen by most Israelis as acceptable partners in their governments. Indeed, Natan Sharansky, a former minister in Sharon's government, has been agitating to declare the private property of Arabs who live just outside the municipal borders of Jerusalem, but whose adjoining properties are located within those borders, as "abandoned." 
Such a designation would allow the government to confiscate these Arab properties without compensation or right of appeal. This from the man who has convinced Bush that Palestinians must be kept under Israeli occupation until Israel is ready to certify they have been transformed into democrats! Inexorable demographic "facts on the ground" will be far more determining of Israel's future than the settlements and the so-called security fence that Israel is building largely on stolen Palestinian land. When this realization begins to break through the illusions that beset the peace process, Israel's supporters may finally understand that the question is not whether the window of opportunity is closing on a Palestinian state, but whether it is closing on a Jewish state. Unfortunately, given the all too clever manipulations of Sharon and his advisor, Dov Weissglas, who believe (as Weissglas boasted recently in a Haaretz interview) that they have persuaded the United States to let the road map and the peace process remain in "formaldehyde," this realization is likely to come about only after that proverbial window will have slammed shut. (Henry Siegman is a senior fellow on the Middle East at the Council on Foreign Relations and a former executive head of the American Jewish Congress. These views are his own.) From checker at panix.com Sat May 7 00:07:18 2005 From: checker at panix.com (Premise Checker) Date: Fri, 6 May 2005 20:07:18 -0400 (EDT) Subject: [Paleopsych] NYT: The Making of a Vegetarian: A Dinosaur Is Caught in the Act Message-ID: The Making of a Vegetarian: A Dinosaur Is Caught in the Act New York Times, 5.5.5 http://www.nytimes.com/2005/05/05/science/05dino.html [I will be glad to send along any Creationist response that is specific to this discovery, as opposed to general critiques of evolution, of which I have read and sent many.] 
By JOHN NOBLE WILFORD Without government nutrition guidelines, a doctor's advice or some primeval diet fad, entire species of dinosaurs sometimes forsook their predatory, meat-eating lifestyle and evolved into grazing vegetarians. Scientists now think they have found rare evidence of a species undergoing just such a dietary transition 125 million years ago. Paleontologists in Utah announced yesterday that they had discovered a new species of dinosaur in an intermediate stage between carnivore and herbivore, on the way to becoming a committed vegetarian. They could only speculate on the reasons for the change, but noted that it occurred in a time of global warming and the arrival of flowering plants in profusion, a tempting new food source. Dr. James I. Kirkland, a paleontologist with the Utah Geological Survey, said the new species, named Falcarius utahensis, was uncovered two years ago at a remote dig site near the town of Green River. The animal, about 13 feet long and 4½ feet tall, was a primitive member of the therizinosaur group of feathered dinosaurs. Under closer examination, Dr. Kirkland said, the Falcarius fossils showed "the beginnings of features we associate with plant-eating dinosaurs." The teeth were not the sharp, bladelike serrated teeth of the typical predator, but smaller and adapted for shredding leaves. "I doubt that this animal could have cut a steak," he said. Other characteristics of an animal in transition to herbivory included an expansion of the gut to digest the mass of fermenting plants, stouter legs for supporting a bulkier body instead of the slender legs of a fast-running predator, and a lengthening of the neck, perhaps to reach for leaves higher in the trees. Dr. Scott D. Sampson, chief curator of the Museum of Natural History at the University of Utah, said the new fossils were "amazing documentation of a major dietary shift" and promised to "tell us how this shift happened." 
The scientists described and interpreted the findings in interviews and a teleconference from Salt Lake City. A detailed report is being published today in the journal Nature. Dr. Mark A. Norell, a dinosaur specialist at the American Museum of Natural History who was not involved in the research, said the fossils were well preserved and the teeth appeared to be similar to those of plant-eating dinosaurs. But he questioned how much scientists would be able to learn from the specimen about the change from meat eating to plant eating. Dr. Sampson said, "Falcarius represents evolution caught in the act, a primitive form that shares much in common with its carnivorous kin, while possessing a variety of features demonstrating that it had embarked on the path toward more advanced plant-eating forms." Dr. Norell agreed that the new species "is a very important and interesting animal," primarily because it is a rare early example of the therizinosaur group in North America. Falcarius is anatomically more primitive than the better-known therizinosaurs that were prevalent in China about 90 million years ago and had already evolved as plant eaters. Lindsay E. Zanno, a doctoral student in paleontology at Utah, said Falcarius was "the most primitive known therizinosaur, demonstrating unequivocally that this large-bodied group of bizarre herbivorous dinosaurs" came from predatory carnivores like the swift, fierce Velociraptor. Falcarius and Velociraptor had a common ancestor. Scientists say all the vegetarian dinosaurs evolved from ancestors that were carnivores. Some 230 million years ago, the first dinosaur was presumably a small-bodied, fleet-footed predator. Then two major groups of dinosaurs, the gigantic species and the smaller duck-billed grazers, evolved as plant eaters. As for Falcarius, scientists are not sure what it ate, meat or plants or both, and they suspect that the transition extended over several million years. But with Falcarius, Dr. 
Sampson said, "we have actual fossil evidence of a major dietary shift, certainly the best example documented among dinosaurs." From HowlBloom at aol.com Sat May 7 00:14:09 2005 From: HowlBloom at aol.com (HowlBloom at aol.com) Date: Fri, 6 May 2005 20:14:09 EDT Subject: [Paleopsych] fads and atoms Message-ID: <1ee.3b3eba41.2fad6251@aol.com> The following article hits the motherlode when it comes to our past discussions of Ur patterns, iteration, and fractality. Ur patterns are those that show up on multiple levels of emergence, patterns that make anthropomorphism a reasonable way of doing science, patterns that explain why a metaphor can capture in its word-picture the underlying structure of a whirlwind, a brain-spin, or a culture-shift. Here's how a pattern in the molecules of magnets repeats itself in the mass moodswings of human beings. Howard. Retrieved May 6, 2005, from the World Wide Web http://www.newscientist.com/article.ns?id=mg18624984.200 One law rules dedicated followers of fashion 06 May 2005 Exclusive from New Scientist Print Edition Mark Buchanan FADS, fashions and dramatic shifts in public opinion all appear to follow a physical law: one of the laws of magnetism. Quentin Michard of the School of Industrial Physics and Chemistry in Paris and Jean-Philippe Bouchaud of the Atomic Energy Commission in Saclay, France, were trying to explain three social trends: plummeting European birth rates in the late 20th century, the rapid adoption of cellphones in Europe in the 1990s and the way people clapping at a concert suddenly stop doing so. In each case, they theorised, individuals not only have their own preferences, but also tend to imitate others. "Imitation is deeply rooted in biology as a survival strategy," says Bouchaud. In particular, people frequently copy others who they think know something they don't. 
To model the consequences of imitation, the researchers turned to the physics of magnets. An applied magnetic field will coerce the spins of atoms in a magnetic material to point in a certain direction. An atom's spin direction also pushes the spins of neighbouring atoms to point in a similar direction, and even if an applied field changes direction slowly, the spins sometimes flip all together, quite abruptly. The physicists modified the model so that the atoms represented people and the direction of the spin indicated a person's behaviour, and used it to predict shifts in public opinion. In the case of cellphones, for example, it is clear that as more people realised how useful they were, and as their price dropped, more people would buy them. But how quickly the trend took off depended on how strongly people influenced each other. The magnetic model predicts that when people have a strong tendency to imitate others, shifts in behaviour will be faster, and there may even be discontinuous jumps, with many people adopting cellphones virtually overnight. More specifically, the model suggests that the rate of opinion change accelerates in a mathematically predictable way, with ever greater numbers of people changing their minds as the population nears the point of maximum change. Michard and Bouchaud checked this prediction against real-world data and found that the trends in birth rates and cellphone usage in European nations conformed quite accurately to this pattern. The same was true of the rate at which clapping died away in concerts. 
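The field-plus-imitation mechanism described above can be sketched in a few lines of code. What follows is a minimal toy model, not the actual Michard-Bouchaud formulation: each agent holds an opinion of -1 or +1, an external "field" (rising usefulness, falling price) increases slowly, and each agent also feels the crowd's average opinion scaled by an imitation strength. All parameter values are illustrative assumptions.

```python
import random

def simulate_adoption(n_agents=200, coupling=0.0, steps=60, seed=0):
    """Toy Ising-style imitation model. Each agent flips from -1
    (non-adopter) to +1 (adopter) once the external field h, plus
    the population's mean opinion weighted by the imitation
    coupling, exceeds that agent's private threshold."""
    rng = random.Random(seed)
    thresholds = [rng.gauss(0.5, 0.2) for _ in range(n_agents)]
    spins = [-1] * n_agents
    adoption = []
    for t in range(steps):
        h = t / steps                      # slowly increasing external field
        mean = sum(spins) / n_agents       # crowd opinion felt by everyone
        for i in range(n_agents):
            if h + coupling * mean > thresholds[i]:
                spins[i] = 1               # adoption is irreversible here
        adoption.append(sum(1 for s in spins if s == 1) / n_agents)
    return adoption

# No imitation: adoption tracks the spread of private thresholds.
weak = simulate_adoption(coupling=0.0)
# Strong imitation: early adopters raise the effective field for
# everyone else, producing a sharper, near-discontinuous jump.
strong = simulate_adoption(coupling=0.6)
```

With no coupling the curve rises smoothly; with strong coupling the biggest single-step jump is much larger, mirroring the abrupt collective flips the article describes in magnets and in cellphone uptake.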
---------- Howard Bloom Author of The Lucifer Principle: A Scientific Expedition Into the Forces of History and Global Brain: The Evolution of Mass Mind From The Big Bang to the 21st Century Visiting Scholar-Graduate Psychology Department, New York University; Core Faculty Member, The Graduate Institute www.howardbloom.net www.bigbangtango.net Founder: International Paleopsychology Project; founding board member: Epic of Evolution Society; founding board member, The Darwin Project; founder: The Big Bang Tango Media Lab; member: New York Academy of Sciences, American Association for the Advancement of Science, American Psychological Society, Academy of Political Science, Human Behavior and Evolution Society, International Society for Human Ethology; advisory board member: Youthactivism.org; executive editor -- New Paradigm book series. For information on The International Paleopsychology Project, see: www.paleopsych.org for two chapters from The Lucifer Principle: A Scientific Expedition Into the Forces of History, see www.howardbloom.net/lucifer For information on Global Brain: The Evolution of Mass Mind from the Big Bang to the 21st Century, see www.howardbloom.net -------------- next part -------------- An HTML attachment was scrubbed... URL: From shovland at mindspring.com Sat May 7 00:59:45 2005 From: shovland at mindspring.com (Steve Hovland) Date: Fri, 6 May 2005 17:59:45 -0700 Subject: [Paleopsych] why I need you and you need me Message-ID: <01C55265.63CB2780.shovland@mindspring.com> What do the various extensions have in common? What is unique about each? 
Steve Hovland www.stevehovland.net -----Original Message----- From: HowlBloom at aol.com [SMTP:HowlBloom at aol.com] Sent: Friday, May 06, 2005 5:01 PM To: paleopsych at paleopsych.org Subject: [Paleopsych] why I need you and you need me << File: ATT00026.txt; charset = UTF-8 >> << File: ATT00027.html; charset = UTF-8 >> << File: ATT00028.txt >> From checker at panix.com Sat May 7 10:44:55 2005 From: checker at panix.com (Premise Checker) Date: Sat, 7 May 2005 06:44:55 -0400 (EDT) Subject: [Paleopsych] WSJ: Chimeras exist, what if some turn out too human? Message-ID: Chimeras exist, what if some turn out too human? http://www.post-gazette.com/pg/05126/500265.stm 5.5.6. I presume this was picked up from the WSJ. By Sharon Begley, The Wall Street Journal If you had just created a mouse with human brain cells, one thing you wouldn't want to hear the little guy say is, "Hi there, I'm Mickey." Even worse, of course, would be something like, "Get me out of this %$&!! body!" It's been several millennia since Greek mythology dreamed up the chimera, a creature with the head of a lion, the body of a goat and the tail of a serpent. Research on the chimera front was pretty quiet for 2,500 years. But then in 1984 scientists announced that they had merged embryonic goat cells with embryonic sheep cells, producing a "geep." (It's part wooly, part hairy, with a face only a nanny goat could love.) A human-mouse chimera made its debut in 1988: "SCID-hu" is created when human fetal tissue -- spleen, liver, thymus, lymph node -- is transplanted into a mouse. These guys are clearly mice, but other chimeras are harder to peg. In the 1980s, scientists took brain-to-be tissue from quail embryos and transplanted it into chicken embryos. Once hatched, the chicks made sounds like baby quails. More part-human chimeras are now in the works or already in lab cages. StemCells Inc., of Palo Alto, Calif., has given hundreds of mice human-brain stem cells, for instance. 
And before human stem cells are ever used to treat human patients, notes biologist Janet Rowley of the University of Chicago, they (or the cells they develop into) will be implanted into mice and other lab animals. "The centaur has left the barn more than people realize," says Stanford University law professor and bioethicist Henry Greely. Part-human creatures raise enough ethical concerns that a National Academy of Sciences committee on stem cells veered off into chimeras. It recommended last week that some research be barred, to prevent some of the more monstrous possibilities -- such as a human-sperm-bearing mouse mating with a human-egg-bearing mouse and gestating a human baby. "We're not very concerned about a mouse with a human spleen," says Prof. Greely. "But we get really concerned about our brain and our gonads." That's why his Stanford colleague, Irving Weissman, asked Prof. Greely to examine the ethical implications of a mouse-human chimera. StemCells, co-founded by Prof. Weissman, has already transplanted human-brain stem cells into the brains of mice that had no immune system (and hence couldn't attack the foreign cells). The stem cells develop into human neurons, migrate through the mouse brain and mingle with mouse cells. The human cells make up less than 1 percent of the mouse brain, and are being used by the company to study neurodegenerative diseases. But Prof. Weissman had in mind a new sort of chimera. He would start with ill-fated mice whose neurons all die just before or soon after birth. He planned to transplant human-brain stem cells into their brains just before their own neurons died off. Would that lead the human cells to turn into neurons and replace the dead-or-dying mouse neurons, producing a mostly human brain in a mouse? Such a chimera could bring important scientific benefits. 
The SCID-hu mouse, though it hasn't yielded a cure for AIDS, has been "a very valuable animal model," says Ramesh Akkina of Colorado State University, Fort Collins, who directs a lab that uses this part-human mouse. "It has human T cells circulating, which will allow us to test gene therapy for AIDS" in a way that will be more relevant to patients than all-animal models. The co-creator of SCID-hu, Michael McCune of the Gladstone Institute of Virology and Immunology, San Francisco, notes that because the human organs last for months in the mice (they would die in days in a lab dish), "it is possible to study the effects of HIV" in many kinds of human cells in a living system. Similarly, studying living human neurons in a living mouse brain would likely yield more insights than studying human neurons in a lab dish or mouse neurons in a mouse brain. "You could see how pathogens damage human neurons, how experimental drugs act, what happens when you infect human neurons with prions (which cause mad-cow disease) or amyloid (associated with Alzheimer's)," says Prof. Greely. "The big concern is, could you give the mouse some sort of human consciousness or intelligence?" "All of us are aware of the concern that we're going to have a human brain in a mouse with a person saying, 'Let me out,'" Prof. Rowley told the President's Council on Bioethics when it discussed chimeras in March. To take no chances, scientists could kill the mice before birth to see if the brain is developing mouse-y structures such as "whisker barrels," which receive signals from the whiskers. If so, it's a mouse. If it is developing a large and complex visual cortex, it's too human. "If you saw something weird, you'd stop," says Prof. Greely. "If not, let the next ones be born, and examine them at different ages to be sure they're still fully mouse." To reduce the chance that today's chimeras will be as monstrous as the Greeks' were, the U.S. 
patent office last year rejected an application to patent a human-chimp chimera, or "humanzee." But that, of course, just keeps someone from patenting one -- not making one. From checker at panix.com Sat May 7 10:44:47 2005 From: checker at panix.com (Premise Checker) Date: Sat, 7 May 2005 06:44:47 -0400 (EDT) Subject: [Paleopsych] NYT: Time Travelers to Meet in Not Too Distant Future Message-ID: Time Travelers to Meet in Not Too Distant Future New York Times, 5.5.6 http://www.nytimes.com/2005/05/06/national/06time.html By [2]PAM BELLUCK CAMBRIDGE, Mass., May 5 - Suppose it is the future - maybe a thousand years from now. There is no static cling, diapers change themselves, and everyone who is anyone summers on Mars. What's more, it is possible to travel back in time, to any place, any era. Where would people go? Would they zoom to a 2005 Saturday night for chips and burgers in a college courtyard, eager to schmooze with computer science majors possessing way too many brain cells? Why not, say some students at the Massachusetts Institute of Technology, who have organized what they call the first convention for time travelers. Actually, they contend that theirs is the only time traveler convention the world needs, because people from the future can travel to it anytime they want. "I would hope they would come with the idea of showing us that time travel is possible," said Amal Dorai, 22, the graduate student who thought up the convention, which is to be this Saturday on the M.I.T. campus. "Maybe they could leave something with us. It is possible they might look slightly different, the shape of the head, the body proportions." The event is potluck and alcohol-free - present-day humans are bringing things like brownies. But Mr. 
Dorai's Web site asks that future-folk bring something to prove they are really ahead of our time: "Things like a cure for AIDS or cancer, a solution for global poverty or a cold fusion reactor would be particularly convincing as well as greatly appreciated." He would also welcome people from only a few days in the future, far enough to, say, give him a few stock market tips. Mr. Dorai and fellow organizers are the kind of people who transplant a snowblower engine into a sleeper sofa and drive the couch around Cambridge. (If the upholstery were bright red, it could be a midlife crisis convertible for couch potatoes.) They built a human-size hamster wheel - eight feet in diameter. And they concocted the "pizza button," a plexiglass pizza slice mounted in their hallway; when pressed, it calls up a Web site and arranges for pizza delivery 30 minutes later. (For anyone wanting to try this at home, the contraption uses a Huffman binary code. It takes fewer keystrokes to order the most popular toppings, like pepperoni, more keystrokes for less popular extras, like onions.) At the convention, they plan to introduce a robot with an "infrared pyro-electric detector," designed to follow anything that emits heat, including humans. "It's supposed to be our pet," said Adam Kraft, 22, a senior. "It needs fur," added David Nelson, 23, a graduate student. While Mr. Dorai has precisely calculated that "the odds of a time traveler showing up are between one in a million and one in a trillion," organizers have tried to make things inviting. In case their august university does not exist forever, they have posted the latitude and longitude of the East Campus Courtyard (42:21:36.025 degrees north, 71:05:16.332 degrees west). A roped-off area, including part of an improvised volleyball court, will create a landing pad so materializing time-travel machines will not crash into trees or dormitories. 
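The "pizza button" described above is a playful application of Huffman coding, which assigns short codewords to frequent symbols and longer ones to rare symbols. A minimal sketch follows; the topping frequencies are invented for illustration, since the article does not report the students' actual figures.

```python
import heapq
from itertools import count

def huffman_code(freqs):
    """Build a Huffman code for a symbol->weight mapping.
    Repeatedly merges the two lightest subtrees; symbols in the
    lighter subtree gain a '0' prefix, the heavier a '1'."""
    tiebreak = count()  # keeps heap comparisons well-defined on equal weights
    heap = [(w, next(tiebreak), {sym: ""}) for sym, w in freqs.items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        w1, _, c1 = heapq.heappop(heap)
        w2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + code for s, code in c1.items()}
        merged.update({s: "1" + code for s, code in c2.items()})
        heapq.heappush(heap, (w1 + w2, next(tiebreak), merged))
    return heap[0][2]

# Hypothetical ordering frequencies: pepperoni is the runaway favourite,
# so it gets the shortest "keystroke" sequence.
codes = huffman_code({"pepperoni": 50, "mushroom": 20,
                      "sausage": 15, "onion": 10, "anchovy": 5})
```

Because the code is prefix-free, a sequence of button presses can be decoded unambiguously without any separator between toppings, which is what lets the most popular orders take the fewest keystrokes.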
To set the mood, organizers plan to display a DeLorean - the sleek but short-lived 1980's car that was the time-traveling vehicle in the "Back to the Future" movies. At first, Mr. Dorai urged people to publicize the event with methods likely to last. "Write the details down on a piece of acid-free paper," he directed, "and slip them into obscure books in academic libraries!" But Mr. Dorai said the response was so overwhelming that the police, concerned about security, had asked that anyone who had not replied by Wednesday not be allowed to attend. No future-guests are confirmed as of yet, although one responder purports to be from 2026. But among the 100 likely attendees, there are those from another time zone - Chicago - and from New York, which at least likes to think of itself as light-years ahead. "I'm keeping my fingers crossed," said Erik D. Demaine, an M.I.T. mathematician who will be one of the professors speaking. There will also be two bands, the Hong Kong Regulars and Off-White Noise, performing new, time-travel-apropos tunes. "If you subscribe to alternative-world theory, then time travel makes sense at some level," said Professor Demaine, who would like future-guests to bring answers to mathematical mysteries. "The universe is inherently uncertain, and at various times it's essentially flipping coins to make a decision. At any point, there's the heads version of the world and the tails version of the world. We think that we actually live in one of them, and you could imagine that there's actually many versions of the universe, including one where suddenly you appear from 10 years in the future." If you can not imagine that, consider Erin Rhode's view of time travel. "I kind of think if it's going to happen, it'll be the wormhole theory," said Ms. Rhode, 23, a recent graduate, adding, "If you create a stable wormhole," a hole in space, "people can go back to visit it." 
William McGehee, 19, a freshman who helped build a "Saturday Night Fever"-like dance floor in his dorm, said, "It's pretty obvious if time travel does occur, then it doesn't cause the universe to explode." And Sam McVeety, 18, a freshman, wondered if wearing a tinfoil hat would be comforting or insulting to future-people. Mr. Dorai has had quirky brainstorms before: proposing the imprisonment of Bill Watterson, the retired cartoonist, to force him to continue his "Calvin and Hobbes" comic strip; and donning the costume of M.I.T.'s mascot, the beaver, while climbing the statue of John Harvard, namesake of that other Cambridge college. That incident went awry when some Harvard men swiped a paw. But Mr. Dorai's time travel idea seems to have legs. "If you can just give up a Saturday night, there's a very small chance at it being the biggest event in human history," he said. And if it is a flop, futuristically speaking? Well, Mr. Dorai reasoned, "Certainly, if no one from the future shows up, that won't prove that it's impossible." From checker at panix.com Sat May 7 10:45:46 2005 From: checker at panix.com (Premise Checker) Date: Sat, 7 May 2005 06:45:46 -0400 (EDT) Subject: [Paleopsych] NS: A concise guide to mind-altering drugs Message-ID: A concise guide to mind-altering drugs http://www.newscientist.com/article.ns?id=mg18424735.900&print=true * 13 November 2004 Alcohol What is it? Ethanol produced by the action of yeast on sugars. What does it do? Ethanol is a biphasic drug: low doses have a different effect to high doses. Small amounts of alcohol (one or two drinks) act as a stimulant, reducing inhibition and producing feelings of mild euphoria. Higher doses depress the central nervous system, initially producing relaxation but then leading to drunkenness - characterised by poor coordination, memory loss, cognitive impairment and blurred vision. Very high doses cause vomiting, coma and death through respiratory failure. 
The fatal dose varies but is somewhere around 500 milligrams of ethanol per 100 millilitres of blood. How does it work? At low doses (5 milligrams per 100 millilitres of blood), alcohol sensitises NMDA receptors in the brain, making them more responsive to the excitatory neurotransmitter glutamate, so boosting brain activity. These effects are most pronounced in areas associated with thinking, memory and pleasure. At higher doses it desensitises the same receptors and also activates the inhibitory GABA system. Amphetamine-type stimulants What are they? A class of synthetic drugs invented (and still used as) appetite suppressors. Includes amphetamine itself and derivatives including methamphetamine and dextroamphetamine. What do they do? Amphetamines are powerful stimulants of the central nervous system, producing feelings of euphoria, alertness, mental clarity and increased energy lasting for 2 to 12 hours depending on the dose. The downsides are increased heart rate and blood pressure, nausea, irritability and jitteriness, plus fatigue once the effects have worn off. Overdosing can lead to convulsions, heart failure, coma and death. The fatal dose varies from person to person, with some reports of acute reactions to as little as 2 milligrams and others of non-fatal 500-milligram doses. Most deaths from overdose have been among injecting users. How do they work? Their principal effect is to block dopamine transporters, which leads to higher-than-normal levels of the pleasure chemical dopamine in the brain. Caffeine What is it? An alkaloid found in coffee, cocoa beans, tea, kola nuts and guarana. Also added to many fizzy drinks, energy drinks, pep pills and cold and flu remedies. What does it do? A stimulant of the central nervous system. Pure caffeine is a moderately powerful drug and is sometimes passed off as amphetamine. In small doses, such as the 150 milligrams in a typical cup of filter coffee, it increases alertness and promotes wakefulness. 
Caffeine also raises heart and respiration rate and promotes urine production. Higher doses induce jitteriness and anxiety. The fatal dose is about 10 grams. How does it work? Caffeine blocks receptors for the neurotransmitter adenosine, which is generally inhibitory and associated with the onset of sleep. Also raises dopamine levels, and stimulates the release of the fight-or-flight hormone adrenalin. Cannabis What is it? Leaves, buds, flowers and resin from the cannabis plant, Cannabis sativa, a native of central Asia. The plant contains numerous psychoactive compounds called cannabinoids, the most potent of which is delta-9-tetrahydrocannabinol (THC). Cannabis is usually smoked in the form of dried leaves and buds, or as a dried resin (hashish). What does it do? Smoked in moderate quantities, cannabis can produce feelings of fuzzy mellowness and general well-being. It can interfere with memory and increase appetite ("the munchies"). Some users experience nausea, anxiety and paranoia. If eaten, the resin can be powerfully hallucinogenic. No fatal dose has ever been recorded in humans. How does it work? THC latches onto specific receptors in the brain that are known to be involved in reward, appetite regulation and pain perception, though their precise role has yet to be worked out. Cocaine What is it? An alkaloid extracted from the leaves of the coca plant (Erythroxylon coca), a native of the eastern slopes of the Andes. It is commonly consumed in the form of the hydrochloride salt, a white crystalline powder which is usually snorted into the nostrils. Crack cocaine is pure cocaine liberated from the hydrochloride (hence known as "free base"), which makes it smokeable. What does it do? Cocaine is a potent stimulator of the central nervous system; a typical dose (about 50 to 100 milligrams) rapidly induces feelings of self-confidence, exhilaration and energy which last for 15 to 45 minutes before giving way to fatigue and melancholy. 
Crack cocaine condenses these effects into a shorter and more intense high. The drug also increases heart rate and blood pressure, sometimes fatally. Very high doses depress brain stem function, potentially leading to cardiac arrest and respiratory failure. The fatal dose can be as low as 1 gram. How does it work? Its principal effect is to block the re-uptake of dopamine, serotonin and noradrenalin into neurons, leading to higher-than-normal levels of these neurotransmitters in the brain. Dissociatives What are they? A class of hallucinogenic drugs that produce feelings of depersonalisation and detachment from reality. The most commonly used are ketamine and its relatives DXM (dextromethorphan hydrobromide) and PCP (phencyclidine, angel dust). What do they do? In small doses (up to about 75 milligrams) ketamine produces a psychedelic stimulant effect. The effect of higher doses has been described as an "out-of-body" experience. Users lose all sense of self and feel a detachment of mind and body, leading to a trance-like state in which they can experience a "superior reality" full of dazzling insights and visions. Some people find it wonderful, others terrifying. Effects last about an hour and wear off rapidly, leaving the user feeling groggy, and sometimes traumatised. Accidental overdoses are unknown: the drug has a wide safety margin. How do they work? Ketamine is an inhibitor of NMDA receptors, which normally respond to the excitatory neurotransmitter glutamate. This has the effect of severely depressing activity in many parts of the brain while leaving some functions intact. Ecstasy What is it? The amphetamine derivative MDMA (3,4-methylenedioxy-N-methylamphetamine). What is sold as ecstasy on the street, however, often contains no MDMA. What does it do? 
Technically known as a hallucinogenic amphetamine and also as an "empathogen", MDMA produces feelings of energy, euphoria, empathy, openness and a desire for physical contact (users are often described as "loved up"), plus mild visual and auditory hallucinations. Effects last for several hours and are followed by an equally lengthy period of lethargy and mild depression. MDMA is not toxic per se but can cause death due to overheating and dehydration. It also inhibits the production of urine and so can lead to a fatal build-up of fluid in the tissues. How does it work? The drug causes the brain to dump large amounts of the mood-modulating neurotransmitter serotonin into the synapses, and also raises dopamine levels. Hallucinogens/psychedelics What are they? A broad class of natural and synthetic compounds that profoundly alter perception and consciousness. The most widely used are the LSD group, including LSD (lysergic acid diethylamide), LSA (d-lysergic acid amide), DMT (dimethyltryptamine, found in ayahuasca) and psilocybin (the main active ingredient of magic mushrooms). What do they do? LSD produces experiences far removed from normal reality, including visual and auditory hallucinations, synaesthesia, time distortion, altered sense of self and feelings of detachment. Surfaces undulate and shimmer, colours are more intense and everyday objects can take on a surreal and fascinating appearance. The experience can be extremely frightening. After effects include fatigue and a vague sense of detachment. LSD is one of the most potent psychoactive substances known. Only 25 micrograms are required to produce an effect; 100 micrograms will induce 12 hours or more of profound psychedelia. How do they work? No one really knows. LSD stimulates three subtypes of serotonin receptor, 5-HT2A, 5-HT2C and 5-HT1A, though it is not clear that this alone can account for its effects. Opiates What are they? 
Any compound that stimulates opioid receptors found in the brain, spinal cord and gut. The word "opioid" derives from opium, the narcotic resin extracted from unripe seed pods of the opium poppy (Papaver somniferum). The opiates include naturally occurring alkaloids such as morphine (the main active ingredient of opium), derivatives of these such as heroin, and entirely synthetic compounds such as methadone. What do they do? Heroin, the most commonly used opiate, can induce euphoria, dreamy drowsiness and a general sense of well-being. The effects of injecting the drug have been described as a "whole-body orgasm", though some users experience no pleasurable effects at all. It also causes nausea, constipation, sweating, itchiness, depressed breathing and heart rate. Higher doses lead to respiratory failure and death. The fatal dose depends on tolerance and how the drug is taken but a naive user would probably die after injecting 200 milligrams. How do they work? By activating any of the three subtypes of opioid receptors. These normally respond to the body's natural painkilling chemicals including endorphins, which are released in highly stressful situations where pain would be disadvantageous. Tobacco What is it? Dried leaves of the tobacco plant Nicotiana tabacum, a native of South America. Usually smoked but can also be snorted as snuff or chewed. The main active ingredient is the alkaloid nicotine. What does it do? Nicotine is a mild stimulant which increases alertness, energy levels and memory function. Paradoxically, users also report a relaxant effect. It also increases blood pressure and respiration rate and suppresses appetite. Larger doses cause hallucinations, nausea, vomiting and death. The lethal dose is about 60 milligrams; a typical cigarette delivers about 2 milligrams of nicotine into the bloodstream. How does it work? 
Nicotine's principal effect is to stimulate nicotinic acetylcholine receptors in the brain, which leads to increased levels of the fight-or-flight hormone adrenalin. It also increases levels of dopamine.

From checker at panix.com Sat May 7 10:45:53 2005
From: checker at panix.com (Premise Checker)
Date: Sat, 7 May 2005 06:45:53 -0400 (EDT)
Subject: [Paleopsych] NS: Decaff coffee gives a buzz too
Message-ID: 

Decaff coffee gives a buzz too
http://www.newscientist.com/article.ns?id=dn3075&print=true
* 10:31 19 November 2002
* James Randerson

The buzz from your morning cup of coffee may not be caused by caffeine after all. According to new research, decaffeinated coffee may be just as good at raising blood pressure, at least for drinkers not used to the black stuff.

Numerous studies have shown that too much caffeine interferes with sleep patterns, but the long-term health effects of the drug are more controversial. Some scientists claim that daily caffeine stimulation increases our risk of high blood pressure and heart disease later in life. But overall, the evidence is equivocal, says Alice Lichtenstein of the American Heart Association's Nutrition Committee.

Now, a small Swiss-led study suggests that to focus on caffeine alone may be wrong. The researchers gave triple espressos to six regular coffee drinkers and nine volunteers who never consumed food or drinks containing caffeine. On a separate occasion, the caffeine-abstainers also drank a triple decaffeinated espresso, but they were not told which was which.

To the researchers' surprise, both drinks had the same effect on the non-coffee drinkers, stimulating the sympathetic nervous system and raising blood pressure.

"Fake" hit

One interpretation of the result is that the subjects are reacting like Pavlov's dog and receiving a "fake" caffeine hit in anticipation of the real thing.
But lead researcher Roberto Corti, a cardiologist at University Hospital in Zurich, thinks this is unlikely because the volunteers did not usually drink coffee. "We also saw a linear trend in blood pressure. This is not typical of a placebo effect," he adds. A more likely explanation, he thinks, is that coffee components other than caffeine have a stimulating effect.

But the study threw up other puzzling findings. Regular coffee drinkers did not experience higher blood pressure after normal coffee, but their nervous systems were stimulated. Corti does not think this is due simply to their bodies having got used to the effects of caffeine, because an intravenous caffeine injection did raise their blood pressure. He believes that other chemicals in coffee might block the caffeine stimulation.

But Lichtenstein says the result could be due to differences in the method of delivery. Absorption via the gut can be slow and depends on what the volunteer had in their last meal. "It's very different from mainlining caffeine," she says.

"The study has raised questions," says Lichtenstein. But she thinks it is too early to draw broad conclusions on the effects of caffeine and coffee.

Journal reference: Circulation (vol 106, p 2935)

Related Articles
* Coffee drinkers have lower diabetes risk, http://www.newscientist.com/article.ns?id=dn3032 (8 November 2002)
* Caffeine 'lotion' protects against skin cancer, http://www.newscientist.com/article.ns?id=dn2714 (26 August 2002)
* Caffeine key to curing a headache, http://www.newscientist.com/article.ns?id=dn1491 (29 October 2001)

Weblinks
* University Hospital, Zurich: http://www.usz.ch/e/index.html
* American Heart Association: http://www.americanheart.org/
* Caffeine FAQ: http://coffeefaq.com/caffaq.html
* Circulation: http://circ.ahajournals.org/

From checker at panix.com Sat May 7 10:46:11 2005
From: checker at panix.com (Premise Checker)
Date: Sat, 7 May 2005 06:46:11 -0400 (EDT)
Subject: [Paleopsych] NYTBR: Freud and His Discontents
Message-ID: 

Freud and His Discontents
New York Times Book Review, 5.5.8
http://www.nytimes.com/2005/05/08/books/review/08SIEGELL.html

By LEE SIEGEL

''CIVILIZATION AND ITS DISCONTENTS'' first appeared in 1930, and on the occasion of its 75th anniversary has been reissued by Norton ($19.95). A new edition of a classic text of Western culture is a happy occasion, not least because it offers the opportunity to debate the book's effect on the way we see the world -- or whether it has any effect at all. ''Classic'' can mean that an intellectual work is indisputably definitive in its realm, or it can mean that its prestige has outlived its authority and influence. Being leatherbound is sometimes synonymous with being timebound.

Freud's essay rests on three arguments that are impossible to prove: the development of civilization recapitulates the development of the individual; civilization's central purpose of repressing the aggressive instinct exacts unbearable suffering; the individual is torn between the desire to live (Eros) and the wish to die (Thanatos). It is impossible to refute Freud's theses, too.
All three arguments have died in the minds of many people, under the pressure of intellectual opposition, only to remain alive and well in the minds of many others. To clarify the status of Freud's influence today is to get a better sense of a central rift running through the culture we live in. In one important sense, Freud's ideas have had an undeniable impact. They've spelled the death of psychology in art. Freud's abstract, impersonal concepts have worn away the specificity of fictional character. By the 1950's, here and in Western Europe, it was making less and less sense to fashion the idiosyncratic, original inner and outer lives of a character in a novel. His or her behavior was already accounted for by the universal realities of id, ego, superego, not to mention the forces of repression, displacement and neurosis. Thus the postwar rise of the nouveau roman, with its absence of character, and of the postmodern and experimental novels, with their many strategies -- self-annulling irony, deliberate cartoonishness, montage-like ''cutting'' -- for releasing fiction from its dependence on character. For all the rich work published after the war, there's barely a fictional figure that has the memorableness of a Gatsby, a Nick Adams, a Baron Charlus, a Leopold Bloom, a Settembrini. And that's leaving aside the magnificent 19th century, when authors plumbed the depths of the human mind with something on the order of clairvoyance. Of course, before that, there was Shakespeare. And Cervantes. And Dante. And . . . It seems that the further back you go in time, away from Freud, the deeper the psychological portraits you encounter in literary art. Nowadays, often even the most accomplished novels offer characters that are little more than flat, ghostly reflections of characters. The author's voice, or self-consciousness about voice, substitutes mere eccentricity for an imaginative surrender to another life. 
But if we have Freud to blame for the long-drawn-out extinction of literary character, we also have Freud to thank for the prestige of film. The depiction of fictional people's inner lives is not the strength of the silver screen. Character gets revealed to us by plot turns, camera angles, musical scores -- by abstract, impersonal forces, much like Freud's concepts. In a novel, character is shaped from the inside out; in a film, it's molded from the outside and stays outside. How many movie characters can you think of -- with the exception, perhaps, of Citizen Kane -- whose names have the archetypal particularity of Isabel Archer or Sister Carrie? For better or for worse, film's independence from character is the reason it has replaced the novel as the dominant art form in our culture. Yet Freud himself drew his conception of the human mind from the type of imaginative literature his ideas were about to start making obsolete. His work is full of references to poets, playwrights and novelists from his own and earlier periods. In the latter half of his career, he applied himself more and more to using literature to prove his theories, commenting, most famously, on Shakespeare and Dostoyevsky. ''Civilization and Its Discontents'' brims with quotations from Goethe, Heine, Romain Rolland, Mark Twain, John Galsworthy and others. If Freud had had only his own writings to refer to, he would never have become Freud. Having accomplished his intellectual aims, he unwittingly destroyed the assumptions behind the culture that had nourished his work. Freud's universal paradigm for the human personality didn't mean only the decline of character in fiction. Its authoritative reduction of the human personality to developmental flaws undermined authority. The priest, the rabbi, the minister, the politician, the general may refer to objective facts and invoke objective truths and even ideals. 
They may be decent, reasonable people who have a strong sense of the reality principle, and of the reality of other people. But in Freud's eyes, they are, like everyone else, products of their own narrow, half-perceived conditions, which they project upon the world around them and sometimes mistake for reality. Nothing they say about the world goes unqualified by their conditions. ''Civilization and Its Discontents'' itself is the product of a profoundly agitated, even disturbed, mind. By the summer of 1929, when Freud began the book, anti-Semitism -- long a staple of Austrian politics -- had become at least as virulent in Austria as in neighboring Germany. Hatred of Jews played a central role in Austria's Christian Socialist and German Nationalist parties, which were about to win a majority in parliament, and there was widespread enthusiasm for Germany's rapidly growing National Socialists. It's not hard to imagine that Freud, slowly dying from the cancer of the mouth that had been diagnosed in 1923, and in great pain, felt more and more anxious about his life, and about the fate of his work. Perhaps it's this despairing frame of mind that leads Freud into sharp contradictions and intellectual lapses in ''Civilization and Its Discontents.'' He writes at one point that ''the low estimation put upon earthly life by the Christian doctrine'' was the first great expression of hostility to civilized society in the West; yet elsewhere, he cites the Christian commandment to love one's neighbor as oneself as ''one of the ideal demands, as we have called them, of civilized society.'' Later, in the space of two sentences, he gets himself tangled up when he tries to identify that commandment with civilization itself. 
He describes the sacred injunction as being ''undoubtedly older than Christianity,'' and then catches himself, as if realizing that the idea of universal love was unique to Christianity, and adds, ''yet it is certainly not very old; even in historical times it was still strange to mankind.'' Throughout the essay, Freud's hostility to Christianity is so intense that he seems determined to define civilization in Christian terms. The book should have been called ''Christian Society and Its Discontents.'' That is what it really is.

And then there is the aggressive instinct, a universal impulse that Freud claims presents the sole impediment to Christian love and civilized society, but which he cannot quite bring in line with his earlier theories. It's as if he were, understandably, sublimating into theory his own feelings about the Christian civilization that, even before Hitler's formal ascension to power in 1933, seemed about to devour him and his family. Certainly, Freud's rage against the dark forces gathering against him has something to do with his repeated references, throughout the book, to great men in history who go to their deaths vilified and ignored. In one weird, remarkable moment, Freud introduces the idea of ''the superego of an epoch of civilization,'' thus supplanting even Jesus Christ with a Freudian concept -- thus supplanting Christ with Freud.

But the most enigmatic, or maybe just incoherent, element of ''Civilization and Its Discontents'' is Freud's contention -- fancifully laid out in 1920, in ''Beyond the Pleasure Principle'' -- that every individual wishes, on some level, to die. In ''Civilization and Its Discontents,'' he does not account for this outrageously counterintuitive idea, explain his application of it to history or even elaborate on it. The notion appears toward the end of the book and then does not occur again.
Nine years later, in exile in England, weak and ill, Freud committed physician-assisted suicide, asking his doctor to give him a lethal dose of morphine. For all Freud's stern kindness toward humanity, for all his efforts to lessen the burden of human suffering, Thanatos seems to be the embittered way in which he universalized his parlous inner state. It hampers the understanding to read ''Civilization and Its Discontents'' without taking into consideration all these circumstances. If Freud has taught us anything, it's that any evaluation of authority has to examine the condition of those who stand behind it. As for repairing to ''Civilization and Its Discontents'' to gain essential elucidation of our own condition, the work seems as severely circumscribed by its time as by its author's situation. Today, Freud's stress on the formative effect of the family romance seems less and less relevant amid endless deconstructions and permutations of the traditional family. His argument that society's repressions create unbearable suffering seems implausible in a society where permissiveness is creating new forms of suffering. His fearless candor about sex appears quaint in a culture that won't stop talking about sex. And a great many people with faith in the inherent goodness of humankind believe that they are living according to ideal sentiments, universal principles or sacred commandments, unhampered by Freudian skepticism. Yet there are, unquestionably, people for whom Freud's immensely powerful ideas are a permanent condition of their lives. Behind the declaration of ideal sentiments, universal principles and sacred commandments, they see a craven sham concealing self-interest, greed and the wish to do harm. Neither of these two groups will ever talk the other out of its worldview. 
In this sense the conflict is not between the Islamic world and the ''liberal'' West; it is between religious people everywhere and people who, like Freud, see faith as an illusion, a set of self-deceiving notions about life. To put it another way, Freudianism is not a science; you either grasp the reality of Freud's dynamic notion of the subconscious intuitively -- the way, in fact, you do or do not grasp the truthfulness of Ecclesiastes -- or you cannot accept that it exists. For that reason, the most intractable division in the world now is between those who believe that the subconscious plays a fundamental role in human life, and those who don't. That's the real culture war, and maybe even the real clash of civilizations.

Lee Siegel is the book critic for The Nation, the television critic for The New Republic and the art critic for Slate.

From checker at panix.com Sat May 7 10:46:53 2005
From: checker at panix.com (Premise Checker)
Date: Sat, 7 May 2005 06:46:53 -0400 (EDT)
Subject: [Paleopsych] Dowd: What Rough Beasts?
Message-ID: 

What Rough Beasts?
Liberties column by Maureen Dowd, The New York Times, 5.5.7
http://www.nytimes.com/2005/05/07/opinion/07dowd.html

WASHINGTON

I love chimeras. I've seen just about every werewolf, Dracula and mermaid movie ever made, I have a Medusa magnet on my refrigerator, and the Sphinx of Greek mythology is a role model for her lethal brand of mystery. So when chimeras reared up in science news, I grabbed my disintegrating copy of Edith Hamilton's "Mythology" to refresh my memory on the Chimera, the she-monster with a lion's head, a goat's body and a serpent's tail: "A fearful creature, great and swift of foot and strong/Whose breath was flame unquenchable." Bellerophon, "a bold and beautiful young man" on flying Pegasus, shot arrows down at the flaming monster and killed her.
Chimeras with "generally sinister powers," as Nicholas Wade [3]wrote in The Times, seemed to be a lesson in "the pre-Darwinian notion that species are fixed and penalties are severe" for crossing boundaries. Chimeras got attention again in the mid-80's, Sharon Begley of The Wall Street Journal noted, when embryonic goat cells were merged with embryonic sheep cells to produce a "geep," when a human-mouse chimera was born and when "scientists took brain-to-be tissue from quail embryos and transplanted it into chicken embryos. Once hatched, the chicks made sounds like baby quails." The U.S. Patent Office balked at an attempt last year to patent a "humanzee," a human-chimp chimera. But as the Stanford University bioethicist Henry Greely told Ms. Begley: "The centaur has left the barn." Knowing that mixing up species in a Circean blender conjures up nightmarish images, the National Academy of Sciences addressed the matter last month - stepping into the stem-cell vacuum left by the government and issuing research guidelines. While research on chimeras may be valuable, the guidelines, in a fit of "Island of Dr. Moreau" queasiness, suggested bans on inserting human embryonic stem cells into an early human embryo, apes or monkeys. The idea is to avoid animals with human sex cells or brain cells, Mr. Wade wrote. "There is a remote possibility that an animal with eggs made of human cells could mate with an animal bearing human sperm. To avoid human conception in such circumstances, the academy says chimeric animals should not be allowed to mate," he explained. Human cells in an animal brain could also be a problem. As Janet Rowley, a University of Chicago biologist, told a White House ethics panel: "All of us are aware of the concern that we're going to have a human brain in a mouse with a person saying, 'Let me out.' " Mary Shelley was right. Playing Creator is tricky - even if you chase down your accidents with torches. 
President Bush's experiments in Afghanistan and Iraq created his own chimeras, by injecting feudal and tribal societies with the cells of democracy, and blending warring factions and sects. Some of the forces unleashed are promising; others are frightening. In a chilling classified report to Congress last week, Gen. Richard Myers, chairman of the Joint Chiefs, conceded that Iraq and Afghanistan operations had restricted the Pentagon's ability to handle other conflicts. That's an ominous admission in light of North Korea's rush toward nukes, which was spurred on by the Iraq invasion and North Korea's conviction that, in bargaining with Mr. Bush, real weapons trump imaginary - or chimerical - ones. The U.S. invasion also spawned a torture scandal, and its own chimeric (alas, not chimerical) blend of former enemies - the Baathists and foreign jihadists - with access to Iraqi weapons caches. The Republican Party is now a chimera, too, a mutant of old guard Republicans, who want government kept out of our lives, and evangelical Christians, who want government to legislate religion into our lives. But exploiting God for political ends has set off powerful, scary forces in America: a retreat on teaching evolution, most recently in Kansas; fights over sex education, even in the blue states and blue suburbs of Maryland; a demonizing of gays; and a fear of stem cell research, which could lead to more of a "culture of life" than keeping one vegetative woman hooked up to a feeding tube. Even as scientists issue rules on chimeras in labs, a spine-tingling he-monster with the power to drag us back into the pre-Darwinian dark ages is slouching around Washington. It's a fire-breathing creature with the head of W., the body of Bill Frist and the serpent tail of Tom DeLay. E-mail: [4]liberties at nytimes.com References 1. http://www.nytimes.com/services/xml/rss/nyt/Opinion.xml 2. 
http://www.nytimes.com/top/opinion/editorialsandoped/oped/columnists/maureendowd/index.html?inline=nyt-per
3. http://www.nytimes.com/2005/05/03/science/03chim.html
4. mailto:liberties at nytimes.com

From checker at panix.com Sat May 7 10:47:52 2005
From: checker at panix.com (Premise Checker)
Date: Sat, 7 May 2005 06:47:52 -0400 (EDT)
Subject: [Paleopsych] NYTBR: 'Perfectly Reasonable Deviations From the Beaten Track: The Letters of Richard P. Feynman'
Message-ID: 

'Perfectly Reasonable Deviations From the Beaten Track: The Letters of Richard P. Feynman'
New York Times Book Review, 5.5.8
http://www.nytimes.com/2005/05/08/books/review/08ZERNIKE.html

By KATE ZERNIKE

PERFECTLY REASONABLE DEVIATIONS FROM THE BEATEN TRACK: The Letters of Richard P. Feynman. Edited by Michelle Feynman. Foreword by Timothy Ferris. Illustrated. 486 pp. Basic Books. $26.

In 1975, a woman from Seattle wrote the theoretical physicist and Nobel laureate Richard P. Feynman to declare that she had fallen in love after seeing him on ''Nova.'' ''Are there lots of physicists with fans?'' she wrote. ''You have one!'' Feynman wrote back flattered -- ''I need no longer be jealous of movie stars'' -- and signed off, ''Your fan-nee (or whatever you call it -- the whole business is new to me).''

It wasn't, of course. There were the high school students from Springfield, Mo., who sent him a hand-lettered birthday card to thank him for writing their textbook. The German man who wrote to share the poem he had created from a Feynman lecture. A man from Massachusetts wrote of a move afoot to draft Feynman for governor. A dentist wrote to ask his views on nuclear energy; an office equipment salesman, to propose an idea for a particle accelerator. A California correspondent inquired whether Feynman believed it possible to record dreams on tape, the way you do television programs.
As the new collection of Feynman letters, ''Perfectly Reasonable Deviations From the Beaten Track,'' shows, Feynman inspired fan worship far beyond colleagues and students of science. Teenagers wrote to ask how they could be like him, and parents, how their children might be. Never mind whether a physicist might actually know something about child rearing or dreams or running a state. ''Why do I write you this letter?'' wrote the German who had turned a Feynman lecture into a poem to comfort himself after his father's death. ''Partly to extend my thanks to you, to tell you that with these, to you maybe unimportant lines, you have filled another human being's need.'' A whole industry of Feynman books has shown us Feynman the genius (he won the Nobel Prize in 1965), Feynman the iconoclast (at a hearing in Washington, he dropped a piece of rubber into ice water to demonstrate with brilliant simplicity why the space shuttle Challenger had exploded) and Feynman the nutty professor (he played bongo drums). What this latest addition shows most remarkably is Feynman's place in the popular imagination -- and how striking it is that any physicist would occupy it. It has become common to complain that we have no public intellectuals, but think how much rarer is the public scientist; it is a safe bet more people can identify Paris Hilton than Harold Varmus. Ordinary people came to regard Feynman, the boy from Far Rockaway, as theirs -- ''Thanks for the talk,'' one British man wrote after seeing him on television in 1981. He sparked excitement not just about science but also about the power of creativity, passion, curiosity. ''Work hard to find something that fascinates you,'' he wrote to one of the many students who asked him for advice. ''When you find it you will know your lifework. 
A man may be digging a ditch for someone else, or because he is forced to, or is stupid -- such a man is 'toolish' -- but another working even harder may not be recognized as different by the bystanders -- but he may be digging for treasure. So dig for treasure and when you find it you will know what to do.'' This selection of letters, edited by Feynman's daughter, Michelle, is billed as the closest thing possible to his autobiography; several books written before his death in 1988 were collections of lectures, or spoken memoirs recorded by his frequent collaborator Ralph Leighton. And as you would expect in an autobiography, letters here touch on science and Feynman's place in its history. A letter to his mother toward the end of his work on the Manhattan Project recounts the detonation of the first atomic bomb. A letter from 1967 counsels James Watson not to pay attention to criticism of the manuscript that would become ''The Double Helix.'' Feynman acknowledges in a letter to a man who wrote after reading a newspaper interview that he had hoped ''to quietly demur'' the Nobel Prize, but did not want to create a public stink once the honor had been reported in newspapers. The chapter of congratulatory telegrams and letters sent after the prize was announced bubbles with giddy excitement. But the freshest and most interesting letters here are the ones written to regular folk -- teenagers or teachers or parents who wrote to him from all over the world in moments of academic crisis or emotional doubt. To a student in India who complained that he was teased because of his stuttering, Feynman sent a book of physics problems and a letter encouraging him to ''study calmly and quietly those things which interest you most.'' He assured a high school student from Connecticut, who worried that difficulties in math would make it hard to pursue physics, not to be afraid: ''If you have any talent, or any occupation that delights you, do it, and do it to the hilt. 
Don't ask why, or what difficulties you may get into.'' A father from Alaska asked for help in directing his 16-year-old stepson -- ''a bit overweight, a little shy'' and ''no genius you understand, but a lot smarter than I am in math and such.'' Feynman told the man to have patience -- ''Let him go, let him get all distorted studying what interests him the most as much as he wants'' -- and to take father-and-son walks in the evening ''and talk (without purpose or routes) about this and that.'' He had no good way, he wrote, to make the boy figure out what he wanted in life. ''But to fall in love with a wonderful woman and to talk to her quietly in the night will do wonders.'' WHILE his spoken memoirs burnished the popular impression of Feynman as the merry prankster, the letters here imply he grew tired of that image. To a Swedish letter writer who had apparently suggested that playing the bongo drums made a physicist ''human,'' he replied: ''Theoretical physics is a human endeavor, one of the higher developments of human beings -- and this perpetual desire to prove that people who do it are human by showing that they do other things that a few other humans do (like playing bongo drums) is insulting to me. I am human enough to tell you to go to hell.'' Some of the earliest letters, written to his mother from college, are less illuminating, recording details like how many hours he slept. But others sketch his relationship with his first wife, who died from tuberculosis at a sanitarium in Albuquerque, where she had moved to live near him while he worked on the Manhattan Project in Los Alamos. The tenderness and the agony expressed in the letters written around the time of her death make you wonder if Feynman cultivated the jokester image to mask the pain of such a tremendous loss at such an early age. ''I find it hard to understand in my mind what it means to love you after you are dead,'' he wrote in 1946, nearly a year and a half after she had died. 
''But I still want to comfort and take care of you -- and I want you to love me and care for me.'' Perhaps they could still make plans together, he ventured -- but no, he had lost his ''idea-woman,'' the ''general instigator of all our wild adventures.'' ''You can give me nothing now yet I love you so that you stand in my way of loving anyone else,'' he wrote. ''But I want to stand there. You, dead, are so much better than anyone else alive.''

Feynman's daughter writes that the letter is considerably more worn than the others, suggesting that he went back to reread it again and again. His fans -- new ones, too -- will find themselves doing the same.

Kate Zernike is a national correspondent for The Times.

From checker at panix.com Sat May 7 10:48:18 2005
From: checker at panix.com (Premise Checker)
Date: Sat, 7 May 2005 06:48:18 -0400 (EDT)
Subject: [Paleopsych] NYTBR: 'Irresistible Empire,' by Victoria de Grazia
Message-ID: 

'Irresistible Empire': McEurope
New York Times Book Review, 5.5.8
http://www.nytimes.com/2005/05/08/books/review/08HARIL.html

By JOHANN HARI

IRRESISTIBLE EMPIRE: America's Advance Through Twentieth-Century Europe. By Victoria de Grazia. Illustrated. 586 pp. The Belknap Press/Harvard University Press. $29.95.

Victoria de Grazia slaps Europeans in the face with her title: face it, mes amis, you are part of the American empire. Zut alors! As a proud European, I confess to letting out a small splutter. Didn't we just snub the emperor-president, George W. Bush, by rejecting great swaths of his foreign policy? Don't we have a social democratic model that avoids the vertiginous inequalities of the United States? Aren't we free and self-determining democracies, building up our own superpower based in Brussels? Well, maybe -- but if Charlemagne or Napoleon could see their continent today, they would be with de Grazia.
One glance at Europe's great capitals, and they would assume Europe had been conquered, occupied and settled by Americans. The men who dreamed of l'Europe profonde would curse the ubiquity of Eminem as they sat in the greasy KFC on the Falls Road in Belfast munching their Chicken Popcorn. They would stagger their way around Italy's most beautiful city, guided by a McDonald's map of McVenice. ''Irresistible Empire'' is the story of how this happened, of how an imperium came to Europe in the form of an emporium. Unlike the Middle East and Latin America, Europe has seen only the peaceful face of America's empire. De Grazia, a professor of history at Columbia University, shows how -- in just one century -- the Old Continent was subject to slow conquest by a million consumer goods. She talks us through the rise of a string of outwardly banal institutions: Rotary Clubs, supermarkets, the Hollywood star system, corporate advertising. With the careful skill of an expert defusing an explosive, she teases out the dense clusters of political ideology embodied in these seemingly everyday social institutions. The old European capitalism was epitomized by the small neighborhood store. At its core was ''a commercial ethic that still sought trust in the longevity of contacts and the solidarity of face-to-face contacts.'' Slowly, this model was eclipsed by an American capitalism epitomized by the out-of-town supermarket, big, anonymous and neon. The local, the diverse and the class-segmented were all ironed out in favor of a mass standard of consumption. Instead of the Great Man theory of history, de Grazia gives us the Great Bargain theory. It's startling just how rapidly Europe has been changed by this new model. Who knew, for example, that in the late 1950's there was not a single supermarket in Milan, and this was by no means untypical for a European city? How many of us realize that as recently as 1954, only 9 percent of French people had a refrigerator? 
Yet in a work crammed with exhaustive research, the reader pines for analysis. All of contemporary European politics spins on the question: Were the developments de Grazia details good for Europe? The answer cannot be squeezed into the simple-minded good-and-evil story craved on both sides of the Atlantic. De Grazia concentrates primarily on the areas where Americanization has had a broadly positive effect. But even the Marxist theorist Antonio Gramsci wrote lovingly in his ''Prison Notebooks'' of how Americanism and Fordism were hollowing out the old feudal snobberies. Perhaps unwittingly, de Grazia steers away from the areas where corporate Americanization has been pernicious for Europe. I would like to see her turn her remarkable skills to charting the effect of American corporate buyouts on Europe's news and entertainment media. Nor does she mention the environmental impact of this economic system. And is any analysis of European supermarkets complete without mentioning the biggest public health scare on the continent for 50 years? The supermarkets, relentless pressure for cheap food and industrial agriculture all pushed farmers to feed processed meat to cattle -- thus leading to Mad Cow disease. The other gap in de Grazia's narrative concerns a strange historical irony: Americanization gave birth to the idea of Europe. ''Henry Ford was as much a father of the European idea as anyone from Europe, given his company's pioneering effort to treat the European region as a single sales territory,'' she notes dryly. She's right, but the point is left hanging. Ford set in motion the conflict that will define the European Union over the coming decades. It can be summarized in a simple question: Does Europe exist to achieve Ford's goal -- one vast market for American goods -- or to resist this possibility and create a distinctively social democratic alternative? 
De Grazia focuses on one relatively minor European-resistance cause -- the slow food movement -- and at first it seems an eccentric choice. Founded in the late 1980's, it has a simple purpose: to give everybody the time and space to eat good, unprocessed food slowly and carefully. When America waves a Big Mac, the Slow Foodies want Europe to wave a bowl of freshly cooked pasta al dente. But if they are few in number, their approach symbolizes the wider European reaction to American neoliberalism: slow down. Europeans want long vacations, generous welfare states and flexible work hours. They -- we -- are trying to articulate a different model of consumerism that values leisure and family as much as work, work, work. Can this work? Is there a way to combine America's dazzling consumer economy with social justice and environmental sanity? Like many Europeans, I have a dystopian vision of the death of Europe as an alternative to America. Far from resolving tensions across the Atlantic, the irresistible empire's blanding out would simply produce a long European grudge, a quiet rage that we had sold our identity for a bag of Doritos. I see it now: Napoleon and Charlemagne sit grumbling in a doughnut shop after a long day in the new Euro-America. Napoleon swallows hard on a Krispy Kreme, turns to a weeping Charlemagne, and whispers, ''Dude, I so hate America.'' Johann Hari is a columnist for The Independent in London and the author of ''God Save the Queen? Monarchy and the Truth About the Windsors.'' From checker at panix.com Sun May 8 14:56:17 2005 From: checker at panix.com (Premise Checker) Date: Sun, 8 May 2005 10:56:17 -0400 (EDT) Subject: [Paleopsych] More on Chimera from Today's Wall Street Journal Message-ID: ---- Original Message ----- From: L. Stephen Coles, M.D., Ph.D. 
To: Gerontology Research Group Cc: irv at stanford.edu ; sciencejournal at wsj.com Sent: Friday, May 06, 2005 6:36 PM Subject: [GRG] More on Chimera from Today's Wall Street Journal SCIENCE JOURNAL: "Now That Chimeras Exist, What if Some Turn Out Too Human?" by Sharon Begley, Science Writer May 6, 2005; New York, NY (WSJ; p. B1) -- If you had just created a mouse with human brain cells, one thing you wouldn't want to hear the little guy say is, "Hi there, I'm Mickey." Even worse, of course, would be something like, "Get me out of this &%#ing body!" COMMENT: I believe the approved thing to start out with, is a mantra: "Not to go on all-Fours; that is the Law. Are we not Men?" After that come sad considerations on the relative shortness of animal life span. Less Stuart Little than Brian the Dog. Seriously, though, does anybody really believe that a mouse with human brain cells has any chance of achieving even Algernon-hood, let alone human-grade thought? It's less a matter of what species of brain cells you have, than it is of how many and how connected and how specialized. A mouse skull is only so big. I suspect you won't get anywhere making a smarter mouse unless you give it *bird* neurons, not human ones. Bird brains have amazing processing power for the size and weight. So forget human brain cells, and forget the scariness of human chimeras, unless they are animals of approximately human size. For ordinary lab animals you don't get scary intelligence till you scale a *bird brain* up to medium mammal size, and that gives you something with as many neurons as a human brain, yet not human. I have in mind a cartoon I saw once of a cooperative lab rabbit, which informs the two white-coated scientists: "Hi, I'm Linda. I'll be your bunny for the experiment today." 
SBH From checker at panix.com Sun May 8 14:56:35 2005 From: checker at panix.com (Premise Checker) Date: Sun, 8 May 2005 10:56:35 -0400 (EDT) Subject: [Paleopsych] McKinsey: The demographic deficit: How aging will reduce global wealth Message-ID: The demographic deficit: How aging will reduce global wealth http://www.mckinseyquarterly.com/article_page.aspx?ar=1588&L2=7&L3=10 To fill the coming gap in global savings and financial wealth, households and governments will need to increase their savings rates and earn higher returns on the assets they already have. Diana Farrell, Sacha Ghai, and Tim Shavers The McKinsey Quarterly, Web exclusive, March 2005 The world's population is aging, and as it gets even grayer, bank balances will stop growing and living standards, which have improved steadily since the industrial revolution, could stagnate. The reason is that the populations of Japan, the United States, and Western Europe, where the vast majority of the world's wealth is created and held, are aging rapidly (Exhibit 1). During the next two decades, the median age in Italy will rise to 51, from 42, and in Japan to 50, from 43. Since people save less after they retire and younger generations in their prime earning years are less frugal than their elders were, savings rates are set to fall dramatically. In just 20 years, household financial wealth in the world's major economies will be roughly $31 trillion [1] less than it would have been if historical trends had persisted, according to new research by the McKinsey Global Institute (Exhibit 2). [2] If left unchecked, the slowdown in global-savings rates will reduce the amount of capital available for investment and impede economic growth. No country will be immune. For the United States, with its relatively young population, higher birthrates, and steady influx of immigrants, the aging trend will be relatively less severe. Still, its savings rate is already dismally low, even before the baby boomers have started to retire. 
To finance its massive current-account deficit, the United States relies on capital flows from Europe and Japan, but they too face rapidly aging populations. Even fast-growing developing countries such as China will not be able to generate enough savings to make up the difference. Finding solutions won't be easy. Raising the retirement age, easing restrictions on immigration, or encouraging families to have more children will have little impact. Boosting economic growth alone is not a solution, nor is the next productivity revolution or technological breakthrough. To fill the coming gap in global savings and financial wealth, households and governments will need to increase their savings rates and to earn higher returns on the assets they already have. These changes involve hard choices but can offer a brighter future.

Growing older, saving less

In just two decades, the proportion of people aged 80 and above will be more than 2.5 times higher than it is today, because women are having fewer children and people are living longer. In about a third of the world's countries, and in the vast majority of developed nations, the fertility rate is at, or below, the level needed to maintain the population. Women in Italy now average just 1.2 children. In the United Kingdom, the figure is 1.6; in Germany, 1.4; and in Japan, 1.3. Meanwhile, thanks to improvements in health care and living conditions, [3] average life expectancy has increased from 46 years in 1950 to 66 years today. As the elderly come to make up a larger share of the population, the total amount of savings available for investment and wealth accumulation will dwindle. The prime earning years for the average worker are roughly from age 30 to 50; thereafter, the savings rate falls. With the onset of retirement, households save even less and, in some cases, begin to spend accumulated assets. 
The result is a decline in the prime savers ratio: the number of households in their prime saving years divided by the number of elderly households. This ratio has been falling in Japan and Italy for many years. In Japan, it dropped below one in the mid-1980s, meaning that elderly households now outnumber those in their highest earning and saving years. Japan is often thought to be a frugal nation of supersavers, but its savings rate actually has already fallen from nearly 25 percent in 1975 to less than 5 percent today. That figure is projected to hit 0.2 percent in 2024. In 2000, the prime savers ratios of Germany, the United Kingdom, and the United States either joined the declining trend or stabilized at very low levels. This unprecedented confluence of demographic patterns will have significant ramifications for global savings and wealth accumulation. How the decline in prime savers will affect total savings depends on how these people's savings behavior changes over the course of a household's life. Germany, Japan, and the United States have traditional hump-shaped life cycle savings patterns (Exhibit 3). In these countries, aging populations will cause a dramatic slowdown in household savings and wealth. In contrast, Italy has a flatter savings curve, resulting in part from historical borrowing constraints that forced households led by people in their 20s and 30s to save more. Thus an increase in the share of elderly households will have less impact on the country's financial wealth. In some countries, the relatively lower savings rates of younger generations in their peak earning years will exacerbate the slowdown in savings and wealth. 
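The prime savers ratio described above is simple to compute; a minimal sketch (the household counts below are hypothetical illustrations, not the study's data):

```python
def prime_savers_ratio(prime_households: float, elderly_households: float) -> float:
    """Households headed by people in their prime saving years (roughly
    ages 30-50) divided by elderly households. A value below 1 means
    elderly households outnumber prime savers, as in Japan since the
    mid-1980s."""
    return prime_households / elderly_households

# Hypothetical counts, in millions of households
ratio = prime_savers_ratio(14.0, 17.5)  # -> 0.8: elderly outnumber prime savers
```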
In the United States and Japan, where we analyzed generation-specific savings data, several factors contribute to this pattern: a tendency to rely more on inheritance than past generations did, the good fortune to avoid the economic hardships that prompted earlier generations to be more frugal, and the availability of consumer credit and mortgages (which, in the case of Japan, have become more socially acceptable).

The coming shortfall in household wealth

Most of the public discussion on aging populations has focused on the rapidly escalating cost of pensions and health care. Little attention has been paid to the potentially far more damaging effect that this demographic phenomenon will have on savings, wealth, and economic well-being. As more households retire, the decline in savings will slow the growth in household financial wealth in the five countries we studied by more than two-thirds, to 1.3 percent, from the historical level of 4.5 percent. By 2024, total household financial wealth will be 36 percent lower, a drop of $31 trillion, than it would have been if the higher historical growth rates had persisted. Of course, changes in savings behavior by households and governments or increases in the average rate of return earned on those savings could alter this outcome. Without such changes, however, our analysis indicates that the aging populations in the world's richest nations will exert severe downward pressure on global savings and financial wealth during the next two decades. The United States will experience the largest shortfall in household financial wealth in absolute terms, $19 trillion by 2024, because of the size of its economy. The growth rate of the country's household financial wealth will decline to 1.6 percent, from 3.8 percent. Since the aging trend is less severe in the United States, reduced savings rates among younger generations are responsible for a large part of the decline. In Japan, the situation is much more serious. 
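The arithmetic behind a shortfall figure like this is compound growth applied at two different rates. A minimal sketch (the 20-year horizon and unit base wealth are illustrative assumptions; the study's 36 percent figure also reflects modeled savings flows, so this simple version will not reproduce it exactly):

```python
def projected_shortfall(base_wealth: float, hist_rate: float,
                        new_rate: float, years: int) -> float:
    """Fraction by which wealth compounded at new_rate falls short of
    the historical-trend projection after `years` of compounding."""
    historical = base_wealth * (1 + hist_rate) ** years
    projected = base_wealth * (1 + new_rate) ** years
    return 1 - projected / historical

# Illustrative only: growth slowing to 1.3% from 4.5% over 20 years
gap = projected_shortfall(1.0, 0.045, 0.013, 20)  # roughly 0.46
```

Even this crude version shows how a seemingly modest drop in the annual growth rate compounds into a large gap over two decades.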
Household financial wealth will actually start declining during the next 20 years: by 2024 it will be $9 trillion, or 47 percent, lower than it would have been if historical growth rates had persisted. Japan's demographic trends are severe: the median age will increase to 50, from 43 (for the US population, it will rise to 38, from 37), and the savings of elderly households fall off at a faster rate in Japan than in the United States. Even more important, household financial wealth in Japan is almost exclusively the result of new savings from income rather than of asset appreciation; therefore, the falloff in savings causes a bigger decline in wealth. The outlook for Europe varies by country. Italy will experience a large decline in the growth rate of its financial wealth, to just 0.9 percent, from 3.4 percent, because of the rapid aging of its population. Its relatively flat life cycle savings curve will mitigate the impact, however, resulting in an absolute shortfall of about $1 trillion, or 39 percent. The projected decline in the growth rate of financial wealth in other countries will be less dramatic: to 2.4 percent, from 3.8 percent, in Germany (because of its higher savings rates) and to a still-healthy 3.2 percent, from 5.1 percent, in the United Kingdom (because of its stronger demographics).

Global ripple effects

This slowdown in household savings will have major implications for all countries. In recent years the United States has absorbed more than half of the world's capital flows while running a current-account deficit approaching 6 percent of GDP. Japan has historically enjoyed a huge current-account surplus, which has allowed it to be a major exporter of capital to other countries, notably the United States. The expected drop in Japan's household savings will make this arrangement increasingly untenable. In all likelihood, the United States also won't be able to rely on European nations, with their aging populations, to increase capital flows. 
Nor can it expect rapidly industrializing nations, such as China, to fill the gap. Even if China's economy continued to grow at its current breakneck pace, it would need approximately 15 years to reach Japan's current GDP. In any case, if China is to sustain this growth, the United States must continue consuming at its current level, something it cannot do if capital flows from abroad decrease. Even if China did have savings to export, it would have to confront the obstacles posed by its current exchange rate and capital controls regime. Although an increase in global interest rates and the cost of capital may seem inevitable, it is not. On the one hand, as global savings fall, markets can adjust through changes in asset prices and demand; which of these will predominate is unclear. Some economists forecast less demand for capital: fewer households will be taking out mortgages and borrowing for college, governments will invest less in infrastructure to keep pace with population growth, and businesses won't have to add as much capital equipment to accommodate a labor force that will no longer be growing. On the other hand, the demand for capital is likely to remain strong if emerging markets and rich countries seek to boost their GDP and productivity growth by increasing the amount of capital per worker. Likewise, while a drop in global savings could drive up asset prices, opposing forces will also be at work, as retirees begin selling their financial assets. [4] One thing is certain: as household savings rates decline and the pool of available capital dwindles, persistent government budget deficits will likely push interest rates higher and crowd out private investment. The rising cost of caring for an aging population in the years to come will force national governments to exercise better fiscal discipline. 
No easy solutions

Many policy changes suggested today, such as increasing immigration, raising the retirement age, encouraging households to have more children, and boosting economic growth, will do little to mitigate the coming shortfall in global financial wealth (Exhibit 4). Our analysis shows that an aggressive effort to increase immigration won't solve the problem, simply because new arrivals represent only a tiny proportion of any country's population. In Germany, for instance, a 50 percent increase in net immigration (to 100,000 people a year) would raise total financial assets just 0.7 percent by 2024. In Japan, doubling official projections of net immigration would have almost no impact on the number of households or on the country's aggregate savings. The same is true even in the United States, which had the highest historical levels of immigration in our sample. Since households don't reach their prime saving years until middle age, promoting higher birthrates through policies such as child tax credits, generous maternity-leave policies, and child care subsidies will also have only a negligible effect by 2024. This approach could actually make the situation worse by adding child dependents to a workforce already supporting a larger number of elderly. Similarly, raising the retirement age won't be particularly effective in most countries. In Japan, efforts to expand the peak earning and saving period by five years (a proxy for raising the retirement age) would close 25 percent of the projected wealth shortfall in that country. In Italy, however, this approach would have little impact because households do not greatly reduce their savings in retirement. After the IT revolution and the jump in US productivity growth during the late 1990s, it may be tempting to think that countries might grow themselves out of the problem. 
Without changes in the relationship between income and spending, however, an increase in economic growth won't generate enough new savings to close the gap. The simple reason is that as incomes and standards of living rise, so does consumption. For instance, raising average income growth in the United States by one percentage point, a huge increase, would narrow the projected wealth shortfall by only 10 percent.

Navigating the demographic transition

The only meaningful way to counteract the impending demographic pressure on global financial wealth is for governments and households to increase their savings rates and for economies to allocate capital more efficiently, thereby boosting returns.

Boosting asset appreciation

The underlying performance of domestic capital markets varies widely across countries, resulting in significantly different rates of return. [5] Since 1975, the average rate of financial-asset appreciation in the United Kingdom and the United States has been nearly 1 percent a year, after adjusting for inflation. In contrast, financial assets in Japan have depreciated by a real 1.8 percent annually over the same period (although the ten-year moving average is now near zero). Real rates of asset appreciation have been negative in Germany and Italy as well. UK and US households compensate for their low savings rates by building wealth through high rates of asset appreciation. Their counterparts in Continental Europe and Japan save at much higher rates but ultimately accumulate less wealth, since these savings generate low or negative returns. From 1975 to 2003, unrealized capital gains increased the value of the financial assets of US households by almost 30 percent. But in Japan the value of such assets declined. European countries fell somewhere in between. Raising the rates of return on the $56 trillion of household savings in the five countries we studied could avert much of the impending wealth shortfall. 
In Germany, increasing the appreciation of financial assets to 0 percent, from the historical average of -1.1 percent, would completely eliminate the projected wealth shortfall. The opportunity is also large for Italy, since its real rate of asset appreciation has averaged -1.6 percent since 1992; raising returns to the levels in the United Kingdom and the United States would fully close the gap. For the latter two countries, the challenge could be more difficult because their rates of asset appreciation are already high. Achieving the required rates of return will call for improved financial intermediation so that savings are funneled to the most productive investments. To achieve this goal, policy makers must increase competition and encourage innovation in the financial sector and in the economy as a whole, [6] enhance legal protections for investors and creditors, and end preferential lending by banks to companies with political ties or shareholder relationships. For some countries, such as Japan, where households keep more than half of their financial assets in cash equivalents, diversifying the range of assets that individuals hold is an important means of increasing the efficiency of capital allocation. [7] To promote a better allocation of assets, policy makers should remove investment restrictions for households, improve investor education, and create tax incentives for well-diversified portfolios. New research in behavioral economics has shown that offering a balanced, prudent allocation as the default option for investors can improve returns because they overwhelmingly stick with this option. [8]

Increasing savings rates

In many countries, today's younger generations earn more and save less than their elders do. This discrepancy is an important driver of the wealth shortfall in the United States and, more surprisingly, in Japan. 
If younger generations saved as much as their parents did while continuing to earn higher incomes, one-quarter of Japan's wealth shortfall and nearly a third of the US gap would be closed by 2024. Persuading young people to save more is difficult, however, and tax incentives aimed at increasing household savings have yielded mixed results. [9] Contrary to conventional wisdom, too much borrowing is not the culprit in most countries. Although household liabilities have grown significantly faster than assets have across our sample since 1982, keeping consumer borrowing in line with asset growth would close $2.3 trillion, or just 7.5 percent, of the projected wealth shortfall. The key to boosting household savings is overcoming inertia. When companies automatically enroll their employees in voluntary savings plans (letting them opt out if they choose) rather than requiring people to sign up actively, participation rates rise dramatically. [10] A study at one US Fortune 500 company that instituted such a program found that enrollment in its 401(k) retirement plan jumped to 80 percent, from 36 percent; the increase among low-income workers was even greater. [11] In addition, a substantial fraction of the participants in the automatic-enrollment program accepted the default for both the contribution rate and the investment allocation, a combination chosen by few employees outside the program. Of course, governments can also increase the savings rates of their countries through the one mechanism directly under their control: reducing fiscal budget deficits. Maintaining fiscal discipline now is vital if governments are to cope with the escalating pension and health care costs that aging populations will accrue. If policy makers take no action, the coming slowdown in global savings and the projected decline in financial wealth could depress investment, economic growth, and living standards in the world's largest and wealthiest countries. 
The future development of poor nations could also be in jeopardy. A concerted, sustained effort to increase the efficiency of capital allocation, boost savings rates, and close government budget deficits can avert this outcome.

About the Authors

Diana Farrell is director of the McKinsey Global Institute, where Tim Shavers is a consultant; Sacha Ghai is a consultant in McKinsey's Toronto office. The authors wish to thank Ezra Greenberg, Piotr Kulczakowicz, Susan Lund, Carlos Ocampo, and Yoav Zeif for their contributions to this article.

Notes

[1] All figures given in this article are valued in 2000 US dollars, and all growth rates indicate real terms.
[2] This study examined the impact of demographic trends on household savings and wealth in Germany, Italy, Japan, the United Kingdom, and the United States. The full report, The Coming Demographic Deficit: How Aging Populations Will Reduce Global Saving, is available for free online.
[3] The State of World Population, 1999 and 2004, United Nations Population Fund.
[4] Empirical analyses on the impact of demographic changes on financial-asset prices and returns are inconclusive. See Barry P. Bosworth, Ralph C. Bryant, and Gary Burtless, The Impact of Aging on Financial Markets and the Economy: A Survey, Brookings Institution, July 2004; and James Poterba, "The impact of population aging on financial markets," National Bureau of Economic Research working paper W10851, October 2004.
[5] In this article, the terms "financial-asset appreciation" and "returns" refer to the unrealized capital gains on financial assets, not to interest and dividends paid. By convention, interest and dividends are treated as household income, a portion of which may be saved.
[6] For a good synthesis of MGI's research, see William W. Lewis, The Power of Productivity, Chicago: University of Chicago Press, 2004. 
[7] Moving households closer to the efficient frontier of risk and returns serves to make asset pricing more precise and forces companies to practice greater capital market discipline.
[8] Brigitte C. Madrian and Dennis F. Shea, "The power of suggestion: Inertia in 401(k) participation and savings behavior," Quarterly Journal of Economics, November 2001, Volume 116, Number 4, pp. 1149-87.
[9] B. Douglas Bernheim, "Taxation and saving," Handbook of Public Economics, Volume 3, Alan J. Auerbach and Martin Feldstein (eds.), New York: Elsevier North-Holland, 2002.
[10] James J. Choi, David Laibson, Brigitte C. Madrian, and Andrew Metrick, "Defined contribution pensions: Plan rules, participant decisions, and the path of least resistance," National Bureau of Economic Research working paper W8655, December 2001.
[11] Brigitte C. Madrian and Dennis F. Shea, "The power of suggestion: Inertia in 401(k) participation and savings behavior," Quarterly Journal of Economics, November 2001, Volume 116, Number 4, pp. 1149-87.

From checker at panix.com Sun May 8 14:57:33 2005 From: checker at panix.com (Premise Checker) Date: Sun, 8 May 2005 10:57:33 -0400 (EDT) Subject: [Paleopsych] Kahn and Wiener on Computers (1967) Message-ID: Kahn missed the Internet, though. Also the collapse of communism and the fitness revolution. ----- Forwarded message from James Fehlinger ----- From: James Fehlinger Date: Sat, 7 May 2005 11:10:48 -0400 To: eugen at leitl.org Subject: One of the most important discoveries of the twentieth century >From _The Year 2000: A Framework for Speculation on the Next Thirty-Three Years_ by Herman Kahn and Anthony J. Wiener, The Hudson Institute, Inc., 1967 pp. 89 - 91: "If computer capacities were to continue to increase by a factor of ten every two or three years until the end of the century (a factor between a hundred billion and ten quadrillion), then all current concepts about computer limitations will have to be reconsidered. 
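[Kahn's parenthetical range is just exponentiation of the stated cadence over the book's 33-year horizon; a quick check, with Python used purely as a calculator:]

```python
# A factor of ten every 3 years, compounded over 33 years:
slow = 10 ** (33 / 3)   # 1e11, "a hundred billion"

# A factor of ten every 2 years over the same 33 years:
fast = 10 ** (33 / 2)   # about 3.2e16, on the order of "ten quadrillion"
```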
Even if the trend continues for only the next decade or two, the improvements over current computers would be factors of thousands to millions. If we add the likely enormous improvements in input-output devices, programming and problem formulation, and better understanding of the basic phenomena being studied, manipulated, or simulated, these estimates of improvement may be wildly conservative. And even if the rate of change slows down by several factors, there would still be room in the next thirty-three years for an overall improvement of some five to ten orders of magnitude. Therefore, it is necessary to be skeptical of any sweeping but often meaningless or nonrigorous statement such as 'a computer is limited by the designer -- it cannot create anything he does not put in,' or that 'a computer cannot be truly creative or original.' By the year 2000, computers are likely to match, simulate, or surpass some of man's most 'human-like' intellectual abilities, including perhaps some of his aesthetic and creative capacities, in addition to having some new kinds of capabilities that human beings do not have. These computer capacities are not certain; however, it is an open question what inherent limitations computers have. If it turns out that they cannot duplicate or exceed certain characteristically human capabilities, that will be one of the most important discoveries of the twentieth [!] century. . . This idea of computer 'intelligence' is a sensitive point with many people. The claim is not that computers will resemble the structure of the human brain, but that their functional output will equal or exceed that of the human brain in many functions that we have been used to thinking of as aspects of intelligence, and even as uniquely human. 
Still, a computer will presumably not become 'humanoid' and probably will not use similar processes, but it may have properties which are analogous to or operationally indistinguishable from self-generated purposes, ideas, and emotional responses to new inputs or its own productions. In particular, as computers become more self-programming they will increasingly tend to perform activities that amount to 'learning' from experience and training. Thus they will eventually evolve subtle methods and processes that may defy the understanding of the human designer. In addition to this possibility of independent intelligent activities, computers are being used increasingly as a helpful flexible tool for more or less individual needs -- at times in such close cooperation that one can speak in terms of a man-machine symbiosis. Eventually there will probably be computer consoles in every home, perhaps linked to public utility computers and permitting each user his private file space in a central computer, for uses such as consulting the Library of Congress, keeping individual records, preparing income tax returns from these records, obtaining consumer information, and so on. Computers will also presumably be used as teaching aids, with one computer giving simultaneous individual instruction to hundreds of students, each at his own console and topic, at any level from elementary to graduate school; eventually the system will probably be designed to maximize the individuality of the learning process. Presumably there will also be such things as: 1. A single national information file containing all tax, legal, security, credit, educational, medical, employment, and other information about each citizen. (One problem here is the creation of acceptable rules concerning access to such a file, and then. . . the later problem of how to prevent erosion of these rules after one or two decades of increased operation have made the concept generally acceptable. . .) 2. 
Time-sharing of large computers by research centers in every field, providing national and international pools of knowledge and skill. 3. Use of computers to test trial configurations in scientific work, allowing the experimenter to concentrate on his creativity, judgment, and intuition, while the computer carries out the detailed computation and 'horse work.' A similar symbiotic relationship will prevail in engineering and other technological design. Using the synergism of newer 'problem-oriented' computer languages, time-sharing, and new input-output techniques, engineer-designers linked to a large computer complex will use computers as experienced pattern-makers, mathematical analysts of optimum design, sources of catalogs on engineering standards and parts data, and often substitutes for mechanical drawings. 4. Use of real-time large computers for an enormous range of business information and control activity, including most trading and financial transactions; the flow of inventories within companies and between suppliers and users; immediate analysis and display of company information about availability of products, prices, sales statistics, cash flow, credit, bank accounts and interests on funds, market analysis and consumer tastes, advanced projections, and so on. 5. Vast use of computers to reduce and punish crime, including the capacity of police to check immediately the identification and record of any person stopped for questioning. 6. Computerized processes for instantaneous exchange of money, using central-computer/bank-and-store-computer networks for debiting and crediting accounts. In addition, there will be uses of computers for worldwide communications, medical diagnostics, traffic and transportation control, automatic chemical analyses, weather prediction and control, and so on. 
The sum of all these uses suggests that the computer utility industry will become as fundamental as the power industry, and that the computer can be viewed as the most basic tool of the last third of the twentieth century. Individual computers (or at least consoles or other remote input devices) will become essential equipment for home, school, business, and profession, and the ability to use a computer skillfully and flexibly may become more widespread than the ability to play bridge or drive a car (and presumably much easier)." From checker at panix.com Sun May 8 14:57:49 2005 From: checker at panix.com (Premise Checker) Date: Sun, 8 May 2005 10:57:49 -0400 (EDT) Subject: [Paleopsych] Atlantic: Vannevar Bush, "As We May Think" (1945) Message-ID: Vannevar Bush, "As We May Think" The Atlantic Monthly, 1945.7 http://ccat.sas.upenn.edu/~jod/texts/vannevar.bush.html As Director of the Office of Scientific Research and Development, Dr. Vannevar Bush has coordinated the activities of some six thousand leading American scientists in the application of science to warfare. In this significant article he holds up an incentive for scientists when the fighting has ceased. He urges that men of science should then turn to the massive task of making more accessible our bewildering store of knowledge. For many years inventions have extended man's physical powers rather than the powers of his mind. Trip hammers that multiply the fists, microscopes that sharpen the eye, and engines of destruction and detection are new results, but not the end results, of modern science. Now, says Dr. Bush, instruments are at hand which, if properly developed, will give man access to and command over the inherited knowledge of the ages. The perfection of these pacific instruments should be the first objective of our scientists as they emerge from their war work. Like Emerson's famous address of 1837 on ``The American Scholar,'' this paper by Dr. 
Bush calls for a new relationship between thinking man and the sum of our knowledge. - The Editor _________________________________________________________________ This has not been a scientist's war; it has been a war in which all have had a part. The scientists, burying their old professional competition in the demand of a common cause, have shared greatly and learned much. It has been exhilarating to work in effective partnership. Now, for many, this appears to be approaching an end. What are the scientists to do next? For the biologists, and particularly for the medical scientists, there can be little indecision, for their war work has hardly required them to leave the old paths. Many indeed have been able to carry on their war research in their familiar peacetime laboratories. Their objectives remain much the same. It is the physicists who have been thrown most violently off stride, who have left academic pursuits for the making of strange destructive gadgets, who have had to devise new methods for their unanticipated assignments. They have done their part on the devices that made it possible to turn back the enemy. They have worked in combined effort with the physicists of our allies. They have felt within themselves the stir of achievement. They have been part of a great team. Now, as peace approaches, one asks where they will find objectives worthy of their best. I Of what lasting benefit has been man's use of science and of the new instruments which his research brought into existence? First, they have increased his control of his material environment. They have improved his food, his clothing, his shelter; they have increased his security and released him partly from the bondage of bare existence. They have given him increased knowledge of his own biological processes so that he has had a progressive freedom from disease and an increased span of life. 
They are illuminating the interactions of his physiological and psychological functions, giving the promise of an improved mental health. Science has provided the swiftest communication between individuals; it has provided a record of ideas and has enabled man to manipulate and to make extracts from that record so that knowledge evolves and endures throughout the life of a race rather than that of an individual. There is a growing mountain of research. But there is increased evidence that we are being bogged down today as specialization extends. The investigator is staggered by the findings and conclusions of thousands of other workers - conclusions which he cannot find time to grasp, much less to remember, as they appear. Yet specialization becomes increasingly necessary for progress, and the effort to bridge between disciplines is correspondingly superficial. Professionally our methods of transmitting and reviewing the results of research are generations old and by now are totally inadequate for their purpose. If the aggregate time spent in writing scholarly works and in reading them could be evaluated, the ratio between these amounts of time might well be startling. Those who conscientiously attempt to keep abreast of current thought, even in restricted fields, by close and continuous reading might well shy away from an examination calculated to show how much of the previous month's efforts could be produced on call. Mendel's concept of the laws of genetics was lost to the world for a generation because his publication did not reach the few who were capable of grasping and extending it; and this sort of catastrophe is undoubtedly being repeated all about us, as truly significant attainments become lost in the mass of the inconsequential. 
The difficulty seems to be, not so much that we publish unduly in view of the extent and variety of present-day interests, but rather that publication has been extended far beyond our present ability to make real use of the record. The summation of human experience is being expanded at a prodigious rate, and the means we use for threading through the consequent maze to the momentarily important item is the same as was used in the days of square-rigged ships. But there are signs of a change as new and powerful instrumentalities come into use. Photocells capable of seeing things in a physical sense, advanced photography which can record what is seen or even what is not, thermionic tubes capable of controlling potent forces under the guidance of less power than a mosquito uses to vibrate his wings, cathode ray tubes rendering visible an occurrence so brief that by comparison a microsecond is a long time, relay combinations which will carry out involved sequences of movements more reliably than any human operator and thousands of times as fast - there are plenty of mechanical aids with which to effect a transformation in scientific records. Two centuries ago Leibnitz invented a calculating machine which embodied most of the essential features of recent keyboard devices, but it could not then come into use. The economics of the situation were against it: the labor involved in constructing it, before the days of mass production, exceeded the labor to be saved by its use, since all it could accomplish could be duplicated by sufficient use of pencil and paper. Moreover, it would have been subject to frequent breakdown, so that it could not have been depended upon; for at that time and long after, complexity and unreliability were synonymous. Babbage, even with remarkably generous support for his time, could not produce his great arithmetical machine. His idea was sound enough, but construction and maintenance costs were then too heavy. 
Had a Pharaoh been given detailed and explicit designs of an automobile, and had he understood them completely, it would have taxed the resources of his kingdom to have fashioned the thousands of parts for a single car, and that car would have broken down on the first trip to Giza. Machines with interchangeable parts can now be constructed with great economy of effort. In spite of much complexity, they perform reliably. Witness the humble typewriter, or the movie camera, or the automobile. Electrical contacts have ceased to stick when thoroughly understood. Note the automatic telephone exchange, which has hundreds of thousands of such contacts, and yet is reliable. A spider web of metal, sealed in a thin glass container, a wire heated to brilliant glow, in short, the thermionic tube of radio sets, is made by the hundred million, tossed about in packages, plugged into sockets - and it works! Its gossamer parts, the precise location and alignment involved in its construction, would have occupied a master craftsman of the guild for months; now it is built for thirty cents. The world has arrived at an age of cheap complex devices of great reliability; and something is bound to come of it. II A record, if it is to be useful to science, must be continuously extended, it must be stored, and above all it must be consulted. Today we make the record conventionally by writing and photography, followed by printing; but we also record on film, on wax disks, and on magnetic wires. Even if utterly new recording procedures do not appear, these present ones are certainly in the process of modification and extension. Certainly progress in photography is not going to stop. Faster material and lenses, more automatic cameras, finer-grained sensitive compounds to allow an extension of the minicamera idea, are all imminent. Let us project this trend ahead to a logical, if not inevitable, outcome. The camera hound of the future wears on his forehead a lump a little larger than a walnut. 
It takes pictures 3 millimeters square, later to be projected or enlarged, which after all involves only a factor of 10 beyond present practice. The lens is of universal focus, down to any distance accommodated by the unaided eye, simply because it is of short focal length. There is a built-in photocell on the walnut such as we now have on at least one camera, which automatically adjusts exposure for a wide range of illumination. There is film in the walnut for a hundred exposures, and the spring for operating its shutter and shifting its film is wound once for all when the film clip is inserted. It produces its result in full color. It may well be stereoscopic, and record with spaced glass eyes, for striking improvements in stereoscopic technique are just around the corner. The cord which trips its shutter may reach down a man's sleeve within easy reach of his fingers. A quick squeeze, and the picture is taken. On a pair of ordinary glasses is a square of fine lines near the top of one lens, where it is out of the way of ordinary vision. When an object appears in that square, it is lined up for its picture. As the scientist of the future moves about the laboratory or the field, every time he looks at something worthy of the record, he trips the shutter and in it goes, without even an audible click. Is this all fantastic? The only fantastic thing about it is the idea of making as many pictures as would result from its use. Will there be dry photography? It is already here in two forms. When Brady made his Civil War pictures, the plate had to be wet at the time of exposure. Now it has to be wet during development instead. In the future perhaps it need not be wetted at all. There have long been films impregnated with diazo dyes which form a picture without development, so that it is already there as soon as the camera has been operated. An exposure to ammonia gas destroys the unexposed dye, and the picture can then be taken out into the light and examined. 
The process is now slow, but someone may speed it up, and it has no grain difficulties such as now keep photographic researchers busy. Often it would be advantageous to be able to snap the camera and to look at the picture immediately. Another process now in use is also slow, and more or less clumsy. For fifty years impregnated papers have been used which turn dark at every point where an electrical contact touches them, by reason of the chemical change thus produced in an iodine compound included in the paper. They have been used to make records, for a pointer moving across them can leave a trail behind. If the electrical potential on the pointer is varied as it moves, the line becomes light or dark in accordance with the potential. This scheme is now used in facsimile transmission. The pointer draws a set of closely spaced lines across the paper one after another. As it moves, its potential is varied in accordance with a varying current received over wires from a distant station, where these variations are produced by a photocell which is similarly scanning a picture. At every instant the darkness of the line being drawn is made equal to the darkness of the point on the picture being observed by the photocell. Thus, when the whole picture has been covered, a replica appears at the receiving end. A scene itself can be just as well looked over line by line by the photocell in this way as can a photograph of the scene. This whole apparatus constitutes a camera, with the added feature, which can be dispensed with if desired, of making its picture at a distance. It is slow, and the picture is poor in detail. Still, it does give another process of dry photography, in which the picture is finished as soon as it is taken. It would be a brave man who could predict that such a process will always remain clumsy, slow, and faulty in detail. 
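The facsimile scheme described above is, in modern terms, a raster scan: darkness values read point by point, line by line, sent as a stream, and redrawn in order at the far end. A minimal sketch in Python (function names are illustrative; Bush of course describes hardware, not software):

```python
# Toy sketch of facsimile transmission: a photocell scans the picture
# line by line, producing a stream of darkness values; the receiving
# pointer redraws each line at the matching darkness.

def transmit(picture):
    """Scan the picture row by row into a flat stream of darkness values."""
    stream = []
    for row in picture:
        for darkness in row:
            stream.append(darkness)
    return stream

def receive(stream, width):
    """Redraw the picture from the stream, one line of `width` points at a time."""
    return [stream[i:i + width] for i in range(0, len(stream), width)]

original = [
    [0, 3, 0],
    [7, 7, 7],
    [0, 3, 0],
]
replica = receive(transmit(original), width=3)
assert replica == original  # when the whole picture has been covered, a replica appears
```

The same loop serves whether the photocell looks at a photograph or at the scene itself, which is Bush's point: the apparatus is already a camera.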
Television equipment today transmits sixteen reasonably good images a second, and it involves only two essential differences from the process described above. For one, the record is made by a moving beam of electrons rather than a moving pointer, for the reason that an electron beam can sweep across the picture very rapidly indeed. The other difference involves merely the use of a screen which glows momentarily when the electrons hit, rather than a chemically treated paper or film which is permanently altered. This speed is necessary in television, for motion pictures rather than stills are the object. Use chemically treated film in place of the glowing screen, allow the apparatus to transmit one picture rather than a succession, and a rapid camera for dry photography results. The treated film needs to be far faster in action than present examples, but it probably could be. More serious is the objection that this scheme would involve putting the film inside a vacuum chamber, for electron beams behave normally only in such a rarefied environment. This difficulty could be avoided by allowing the electron beam to play on one side of a partition, and by pressing the film against the other side, if this partition were such as to allow the electrons to go through perpendicular to its surface, and to prevent them from spreading out sideways. Such partitions, in crude form, could certainly be constructed, and they will hardly hold up the general development. Like dry photography, microphotography still has a long way to go. The basic scheme of reducing the size of the record, and examining it by projection rather than directly, has possibilities too great to be ignored. The combination of optical projection and photographic reduction is already producing some results in microfilm for scholarly purposes, and the potentialities are highly suggestive. 
Today, with microfilm, reductions by a linear factor of 20 can be employed and still produce full clarity when the material is re-enlarged for examination. The limits are set by the graininess of the film, the excellence of the optical system, and the efficiency of the light sources employed. All of these are rapidly improving. Assume a linear ratio of 100 for future use. Consider film of the same thickness as paper, although thinner film will certainly be usable. Even under these conditions there would be a total factor of 10,000 between the bulk of the ordinary record on books, and its microfilm replica. The Encyclopoedia Britannica could be reduced to the volume of a matchbox. A library of a million volumes could be compressed into one end of a desk. If the human race has produced since the invention of movable type a total record, in the form of magazines, newspapers, books, tracts, advertising blurbs, correspondence, having a volume corresponding to a billion books, the whole affair, assembled and compressed, could be lugged off in a moving van. Mere compression, of course, is not enough; one needs not only to make and store a record but also to be able to consult it, and this aspect of the matter comes later. Even the modern great library is not generally consulted; it is nibbled by a few. Compression is important, however, when it comes to costs. The material for the microfilm Britannica would cost a nickel, and it could be mailed anywhere for a cent. What would it cost to print a million copies? To print a sheet of newspaper, in a large edition, costs a small fraction of a cent. The entire material of the Britannica in reduced microfilm form would go on a sheet eight and one-half by eleven inches. Once it is available, with the photographic reproduction methods of the future, duplicates in large quantities could probably be turned out for a cent apiece beyond the cost of materials. The preparation of the original copy? 
That introduces the next aspect of the subject. III To make the record, we now push a pencil or tap a typewriter. Then comes the process of digestion and correction, followed by an intricate process of typesetting, printing, and distribution. To consider the first stage of the procedure, will the author of the future cease writing by hand or typewriter and talk directly to the record? He does so indirectly, by talking to a stenographer or a wax cylinder; but the elements are all present if he wishes to have his talk directly produce a typed record. All he needs to do is to take advantage of existing mechanisms and to alter his language. At a recent World Fair a machine called a Voder was shown. A girl stroked its keys and it emitted recognizable speech. No human vocal cords entered in the procedure at any point; the keys simply combined some electrically produced vibrations and passed these on to a loud-speaker. In the Bell Laboratories there is the converse of this machine, called a Vocoder. The loudspeaker is replaced by a microphone, which picks up sound. Speak to it, and the corresponding keys move. This may be one element of the postulated system. The other element is found in the stenotype, that somewhat disconcerting device encountered usually at public meetings. A girl strokes its keys languidly and looks about the room and sometimes at the speaker with a disquieting gaze. From it emerges a typed strip which records in a phonetically simplified language a record of what the speaker is supposed to have said. Later this strip is retyped into ordinary language, for in its nascent form it is intelligible only to the initiated. Combine these two elements, let the Vocoder run the stenotype, and the result is a machine which types when talked to. Our present languages are not especially adapted to this sort of mechanization, it is true. 
It is strange that the inventors of universal languages have not seized upon the idea of producing one which better fitted the technique for transmitting and recording speech. Mechanization may yet force the issue, especially in the scientific field; whereupon scientific jargon would become still less intelligible to the layman. One can now picture a future investigator in his laboratory. His hands are free, and he is not anchored. As he moves about and observes, he photographs and comments. Time is automatically recorded to tie the two records together. If he goes into the field, he may be connected by radio to his recorder. As he ponders over his notes in the evening, he again talks his comments into the record. His typed record, as well as his photographs, may both be in miniature, so that he projects them for examination. Much needs to occur, however, between the collection of data and observations, the extraction of parallel material from the existing record, and the final insertion of new material into the general body of the common record. For mature thought there is no mechanical substitute. But creative thought and essentially repetitive thought are very different things. For the latter there are, and may be, powerful mechanical aids. Adding a column of figures is a repetitive thought process, and it was long ago properly relegated to the machine. True, the machine is sometimes controlled by the keyboard, and thought of a sort enters in reading the figures and poking the corresponding keys, but even this is avoidable. Machines have been made which will read typed figures by photocells and then depress the corresponding keys; these are combinations of photocells for scanning the type, electric circuits for sorting the consequent variations, and relay circuits for interpreting the result into the action of solenoids to pull the keys down. All this complication is needed because of the clumsy way in which we have learned to write figures. 
If we recorded them positionally, simply by the configuration of a set of dots on a card, the automatic reading mechanism would become comparatively simple. In fact, if the dots are holes, we have the punched-card machine long ago produced by Hollerith for the purposes of the census, and now used throughout business. Some types of complex businesses could hardly operate without these machines. Adding is only one operation. To perform arithmetical computation involves also subtraction, multiplication, and division, and in addition some method for temporary storage of results, removal from storage for further manipulation, and recording of final results by printing. Machines for these purposes are now of two types: keyboard machines for accounting and the like, manually controlled for the insertion of data, and usually automatically controlled as far as the sequence of operations is concerned; and punched-card machines in which separate operations are usually delegated to a series of machines, and the cards then transferred bodily from one to another. Both forms are very useful; but as far as complex computations are concerned, both are still in embryo. Rapid electrical counting appeared soon after the physicists found it desirable to count cosmic rays. For their own purposes the physicists promptly constructed thermionic-tube equipment capable of counting electrical impulses at the rate of 100,000 a second. The advanced arithmetical machines of the future will be electrical in nature, and they will perform at 100 times present speeds, or more. Moreover, they will be far more versatile than present commercial machines, so that they may readily be adapted for a wide variety of operations. 
They will be controlled by a control card or film, they will select their own data and manipulate it in accordance with the instructions thus inserted, they will perform complex arithmetical computations at exceedingly high speeds, and they will record results in such form as to be readily available for distribution or for later further manipulation. Such machines will have enormous appetites. One of them will take instructions and data from a roomful of girls armed with simple keyboard punches, and will deliver sheets of computed results every few minutes. There will always be plenty of things to compute in the detailed affairs of millions of people doing complicated things. IV The repetitive processes of thought are not confined, however, to matters of arithmetic and statistics. In fact, every time one combines and records facts in accordance with established logical processes, the creative aspect of thinking is concerned only with the selection of the data and the process to be employed, and the manipulation thereafter is repetitive in nature and hence a fit matter to be relegated to the machines. Not so much has been done along these lines, beyond the bounds of arithmetic, as might be done, primarily because of the economics of the situation. The needs of business, and the extensive market obviously waiting, assured the advent of mass-produced arithmetical machines just as soon as production methods were sufficiently advanced. With machines for advanced analysis no such situation existed; for there was and is no extensive market; the users of advanced methods of manipulating data are a very small part of the population. There are, however, machines for solving differential equations - and functional and integral equations, for that matter. There are many special machines, such as the harmonic synthesizer which predicts the tides. There will be many more, appearing certainly first in the hands of the scientist and in small numbers. 
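The positional recording described a little earlier -- a figure stored as the location of a hole rather than as a printed numeral, so that reading reduces to finding the hole's position -- can be sketched in a few lines. This is a toy encoding, not the actual Hollerith card layout:

```python
# Toy sketch of positional recording: each digit becomes one column of
# ten possible hole positions, with a single hole punched at the row
# matching the digit. "Reading" is then mere position detection, with
# no character recognition required.

def punch(number):
    """Encode a number as columns of 10 booleans, one hole per column."""
    return [[row == int(digit) for row in range(10)] for digit in str(number)]

def read(card):
    """Decode by locating the hole in each column."""
    return int("".join(str(column.index(True)) for column in card))

card = punch(1890)          # the census year Hollerith's machines tabulated
assert read(card) == 1890
```

The simplicity of `read` is the whole argument: the photocell-and-solenoid apparatus needed to read printed figures disappears once the configuration of dots carries the value directly.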
If scientific reasoning were limited to the logical processes of arithmetic, we should not get far in our understanding of the physical world. One might as well attempt to grasp the game of poker entirely by the use of the mathematics of probability. The abacus, with its beads strung on parallel wires, led the Arabs to positional numeration and the concept of zero many centuries before the rest of the world; and it was a useful tool - so useful that it still exists. It is a far cry from the abacus to the modern keyboard accounting machine. It will be an equal step to the arithmetical machine of the future. But even this new machine will not take the scientist where he needs to go. Relief must be secured from laborious detailed manipulation of higher mathematics as well, if the users of it are to free their brains for something more than repetitive detailed transformations in accordance with established rules. A mathematician is not a man who can readily manipulate figures; often he cannot. He is not even a man who can readily perform the transformation of equations by the use of calculus. He is primarily an individual who is skilled in the use of symbolic logic on a high plane, and especially he is a man of intuitive judgment in the choice of the manipulative processes he employs. All else he should be able to turn over to his mechanism, just as confidently as he turns over the propelling of his car to the intricate mechanism under the hood. Only then will mathematics be practically effective in bringing the growing knowledge of atomistics to the useful solution of the advanced problems of chemistry, metallurgy, and biology. For this reason there will come more machines to handle advanced mathematics for the scientist. Some of them will be sufficiently bizarre to suit the most fastidious connoisseur of the present artifacts of civilization. 
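The division of labor proposed above -- the mathematician chooses the manipulative process, the mechanism carries out the repetitive transformation -- is what later symbolic-algebra systems delivered. A toy sketch of one such "transformation of equations by the use of calculus," differentiation over tiny expression trees (the tuple representation is an illustrative choice, not any historical machine's):

```python
# Minimal symbolic differentiation: expressions are nested tuples
# ('+', a, b) or ('*', a, b), the variable 'x', or a constant number.
# The machine applies the sum and product rules mechanically.

def d(expr):
    """Differentiate expr with respect to x, returning an unsimplified tree."""
    if expr == 'x':
        return 1
    if isinstance(expr, (int, float)):
        return 0
    op, a, b = expr
    if op == '+':                       # sum rule: (a + b)' = a' + b'
        return ('+', d(a), d(b))
    if op == '*':                       # product rule: (a*b)' = a'b + ab'
        return ('+', ('*', d(a), b), ('*', a, d(b)))
    raise ValueError(f"unknown operator: {op}")

# d/dx (x*x + 3), returned unsimplified as (1*x + x*1) + 0:
assert d(('+', ('*', 'x', 'x'), 3)) == ('+', ('+', ('*', 1, 'x'), ('*', 'x', 1)), 0)
```

The intuitive judgment stays with the user -- deciding to differentiate at all, and what to do with the result -- while the rule application, being purely repetitive, goes to the machine.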
V The scientist, however, is not the only person who manipulates data and examines the world about him by the use of logical processes, although he sometimes preserves this appearance by adopting into the fold anyone who becomes logical, much in the manner in which a British labor leader is elevated to knighthood. Whenever logical processes of thought are employed - that is, whenever thought for a time runs along an accepted groove - there is an opportunity for the machine. Formal logic used to be a keen instrument in the hands of the teacher in his trying of students' souls. It is readily possible to construct a machine which will manipulate premises in accordance with formal logic, simply by the clever use of relay circuits. Put a set of premises into such a device and turn the crank, and it will readily pass out conclusion after conclusion, all in accordance with logical law, and with no more slips than would be expected of a keyboard adding machine. Logic can become enormously difficult, and it would undoubtedly be well to produce more assurance in its use. The machines for higher analysis have usually been equation solvers. Ideas are beginning to appear for equation transformers, which will rearrange the relationship expressed by an equation in accordance with strict and rather advanced logic. Progress is inhibited by the exceedingly crude way in which mathematicians express their relationships. They employ a symbolism which grew like Topsy and has little consistency; a strange fact in that most logical field. A new symbolism, probably positional, must apparently precede the reduction of mathematical transformations to machine processes. Then, on beyond the strict logic of the mathematician, lies the application of logic in everyday affairs. We may some day click off arguments on a machine with the same assurance that we now enter sales on a cash register. But the machine of logic will not look like a cash register, even a streamlined model. 
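The premise-cranking machine described above is, in modern terms, forward chaining by modus ponens: conclusions are passed out one after another until no rule yields anything new. A minimal sketch, with premises as atoms and rules as (antecedents, consequent) pairs -- an illustrative encoding, not a description of any relay circuit:

```python
# Toy "machine of logic": put in a set of premises, turn the crank,
# and it passes out conclusion after conclusion by repeated modus
# ponens, with no more slips than a keyboard adding machine.

def turn_the_crank(facts, rules):
    """Apply modus ponens until no new conclusions appear; return all known facts."""
    known = set(facts)
    changed = True
    while changed:
        changed = False
        for antecedents, consequent in rules:
            if consequent not in known and all(a in known for a in antecedents):
                known.add(consequent)
                changed = True
    return known

premises = {'socrates_is_a_man'}
rules = [({'socrates_is_a_man'}, 'socrates_is_mortal'),
         ({'socrates_is_mortal'}, 'socrates_will_die')]
conclusions = turn_the_crank(premises, rules)
assert 'socrates_will_die' in conclusions
```

Each pass over the rules is one turn of the crank; the loop halts exactly when the logical consequences of the premises have been exhausted.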
So much for the manipulation of ideas and their insertion into the record. Thus far we seem to be worse off than before - for we can enormously extend the record; yet even in its present bulk we can hardly consult it. This is a much larger matter than merely the extraction of data for the purposes of scientific research; it involves the entire process by which man profits by his inheritance of acquired knowledge. The prime action of use is selection, and here we are halting indeed. There may be millions of fine thoughts, and the account of the experience on which they are based, all encased within stone walls of acceptable architectural form; but if the scholar can get at only one a week by diligent search, his syntheses are not likely to keep up with the current scene. Selection, in this broad sense, is a stone adze in the hands of a cabinetmaker. Yet, in a narrow sense and in other areas, something has already been done mechanically on selection. The personnel officer of a factory drops a stack of a few thousand employee cards into a selecting machine, sets a code in accordance with an established convention, and produces in a short time a list of all employees who live in Trenton and know Spanish. Even such devices are much too slow when it comes, for example, to matching a set of fingerprints with one of five millions on file. Selection devices of this sort will soon be speeded up from their present rate of reviewing data at a few hundred a minute. By the use of photocells and microfilm they will survey items at the rate of thousands a second, and will print out duplicates of those selected. This process, however, is simple selection: it proceeds by examining in turn every one of a large set of items, and by picking out those which have certain specified characteristics. There is another form of selection best illustrated by the automatic telephone exchange. You dial a number and the machine selects and connects just one of a million possible stations. 
It does not run over them all. It pays attention only to a class given by a first digit, and so on; and thus proceeds rapidly and almost unerringly to the selected station. It requires a few seconds to make the selection, although the process could be speeded up if increased speed were economically warranted. If necessary, it could be made extremely fast by substituting thermionic-tube switching for mechanical switching, so that the full selection could be made in one-hundredth of a second. No one would wish to spend the money necessary to make this change in the telephone system, but the general idea is applicable elsewhere. Take the prosaic problem of the great department store. Every time a charge sale is made, there are a number of things to be done. The inventory needs to be revised, the salesman needs to be given credit for the sale, the general accounts need an entry, and, most important, the customer needs to be charged. A central records device has been developed in which much of this work is done conveniently. The salesman places on a stand the customer's identification card, his own card, and the card taken from the article sold - all punched cards. When he pulls a lever, contacts are made through the holes, machinery at a central point makes the necessary computations and entries, and the proper receipt is printed for the salesman to pass to the customer. But there may be ten thousand charge customers doing business with the store, and before the full operation can be completed someone has to select the right card and insert it at the central office. Now rapid selection can slide just the proper card into position in an instant or two, and return it afterward. Another difficulty occurs, however. Someone must read a total on the card, so that the machine can add its computed item to it. Conceivably the cards might be of the dry photography type I have described. 
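The exchange's digit-by-digit selection, in contrast to simple selection's item-by-item scan, can be sketched as a tree keyed on successive digits. The class structure and station names here are illustrative only.

```python
# Exchange-style selection: the machine does not run over all stations.
# Each dialed digit narrows attention to one class, so reaching one of
# a million stations costs only as many steps as there are digits.
class Exchange:
    def __init__(self):
        self.root = {}  # nested dicts keyed by digit

    def add_station(self, number, station):
        node = self.root
        for digit in number[:-1]:
            node = node.setdefault(digit, {})
        node[number[-1]] = station

    def connect(self, number):
        node = self.root
        for digit in number:
            node = node[digit]  # attend only to the class for this digit
        return node

ex = Exchange()
ex.add_station("555123", "station-A")
ex.add_station("555124", "station-B")
print(ex.connect("555124"))  # station-B
```

A simple selector would examine every station in turn; this one makes the same number of moves whether there are a thousand stations on file or a million, which is the point of Bush's contrast.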
Existing totals could then be read by photocell, and the new total entered by an electron beam. The cards may be in miniature, so that they occupy little space. They must move quickly. They need not be transferred far, but merely into position so that the photocell and recorder can operate on them. Positional dots can enter the data. At the end of the month a machine can readily be made to read these and to print an ordinary bill. With tube selection, in which no mechanical parts are involved in the switches, little time need be occupied in bringing the correct card into use - a second should suffice for the entire operation. The whole record on the card may be made by magnetic dots on a steel sheet if desired, instead of dots to be observed optically, following the scheme by which Poulsen long ago put speech on a magnetic wire. This method has the advantage of simplicity and ease of erasure. By using photography, however, one can arrange to project the record in enlarged form, and at a distance by using the process common in television equipment.

One can consider rapid selection of this form, and distant projection for other purposes. To be able to key one sheet of a million before an operator in a second or two, with the possibility of then adding notes thereto, is suggestive in many ways. It might even be of use in libraries, but that is another story. At any rate, there are now some interesting combinations possible. One might, for example, speak to a microphone, in the manner described in connection with the speech-controlled typewriter, and thus make his selections. It would certainly beat the usual file clerk.

VI

The real heart of the matter of selection, however, goes deeper than a lag in the adoption of mechanisms by libraries, or a lack of development of devices for their use. Our ineptitude in getting at the record is largely caused by the artificiality of systems of indexing. 
When data of any sort are placed in storage, they are filed alphabetically or numerically, and information is found (when it is) by tracing it down from subclass to subclass. It can be in only one place, unless duplicates are used; one has to have rules as to which path will locate it, and the rules are cumbersome. Having found one item, moreover, one has to emerge from the system and re-enter on a new path.

The human mind does not work that way. It operates by association. With one item in its grasp, it snaps instantly to the next that is suggested by the association of thoughts, in accordance with some intricate web of trails carried by the cells of the brain. It has other characteristics, of course; trails that are not frequently followed are prone to fade, items are not fully permanent, memory is transitory. Yet the speed of action, the intricacy of trails, the detail of mental pictures, is awe-inspiring beyond all else in nature.

Man cannot hope fully to duplicate this mental process artificially, but he certainly ought to be able to learn from it. In minor ways he may even improve, for his records have relative permanency. The first idea, however, to be drawn from the analogy concerns selection. Selection by association, rather than by indexing, may yet be mechanized. One cannot hope thus to equal the speed and flexibility with which the mind follows an associative trail, but it should be possible to beat the mind decisively in regard to the permanence and clarity of the items resurrected from storage.

Consider a future device for individual use, which is a sort of mechanized private file and library. It needs a name, and to coin one at random, ``memex'' will do. A memex is a device in which an individual stores all his books, records, and communications, and which is mechanized so that it may be consulted with exceeding speed and flexibility. It is an enlarged intimate supplement to his memory. 
It consists of a desk, and while it can presumably be operated from a distance, it is primarily the piece of furniture at which he works. On the top are slanting translucent screens, on which material can be projected for convenient reading. There is a keyboard, and sets of buttons and levers. Otherwise it looks like an ordinary desk.

In one end is the stored material. The matter of bulk is well taken care of by improved microfilm. Only a small part of the interior of the memex is devoted to storage, the rest to mechanism. Yet if the user inserted 5000 pages of material a day it would take him hundreds of years to fill the repository, so he can be profligate and enter material freely.

Most of the memex contents are purchased on microfilm ready for insertion. Books of all sorts, pictures, current periodicals, newspapers, are thus obtained and dropped into place. Business correspondence takes the same path. And there is provision for direct entry. On the top of the memex is a transparent platen. On this are placed longhand notes, photographs, memoranda, all sorts of things. When one is in place, the depression of a lever causes it to be photographed onto the next blank space in a section of the memex film, dry photography being employed.

There is, of course, provision for consultation of the record by the usual scheme of indexing. If the user wishes to consult a certain book, he taps its code on the keyboard, and the title page of the book promptly appears before him, projected onto one of his viewing positions. Frequently-used codes are mnemonic, so that he seldom consults his code book; but when he does, a single tap of a key projects it for his use. Moreover, he has supplemental levers. On deflecting one of these levers to the right he runs through the book before him, each page in turn being projected at a speed which just allows a recognizing glance at each. 
If he deflects it further to the right, he steps through the book 10 pages at a time; still further at 100 pages at a time. Deflection to the left gives him the same control backwards. A special button transfers him immediately to the first page of the index. Any given book of his library can thus be called up and consulted with far greater facility than if it were taken from a shelf. As he has several projection positions, he can leave one item in position while he calls up another. He can add marginal notes and comments, taking advantage of one possible type of dry photography, and it could even be arranged so that he can do this by a stylus scheme, such as is now employed in the telautograph seen in railroad waiting rooms, just as though he had the physical page before him.

VII

All this is conventional, except for the projection forward of present-day mechanisms and gadgetry. It affords an immediate step, however, to associative indexing, the basic idea of which is a provision whereby any item may be caused at will to select immediately and automatically another. This is the essential feature of the memex. The process of tying two items together is the important thing.

When the user is building a trail, he names it, inserts the name in his code book, and taps it out on his keyboard. Before him are the two items to be joined, projected onto adjacent viewing positions. At the bottom of each there are a number of blank code spaces, and a pointer is set to indicate one of these on each item. The user taps a single key, and the items are permanently joined. In each code space appears the code word. Out of view, but also in the code space, is inserted a set of dots for photocell viewing; and on each item these dots by their positions designate the index number of the other item. Thereafter, at any time, when one of these items is in view, the other can be instantly recalled merely by tapping a button below the corresponding code space. 
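The joining mechanism just described — each item's code space carrying the index number of its mate, under a trail name kept in a code book — can be sketched as a data structure. The class and method names here are illustrative, not Bush's.

```python
# Associative indexing: joining two items under a named trail gives
# each a code space pointing to the other, so either can recall its
# mate; the code book lists the trail's items in order.
class Memex:
    def __init__(self):
        self.items = {}      # index number -> stored content
        self.links = {}      # (index, trail name) -> index of the mate
        self.code_book = {}  # trail name -> item indices, in trail order

    def insert(self, index, content):
        self.items[index] = content

    def join(self, trail, a, b):
        """Tap a single key: items a and b are permanently joined."""
        self.links[(a, trail)] = b
        self.links[(b, trail)] = a  # the tie works in both directions
        self.code_book.setdefault(trail, [])
        for idx in (a, b):
            if idx not in self.code_book[trail]:
                self.code_book[trail].append(idx)

    def recall(self, trail, index):
        """With one item in view, tap the code space to see the other."""
        return self.items[self.links[(index, trail)]]

m = Memex()
m.insert(1, "encyclopedia article on the bow")
m.insert(2, "history of the Crusades")
m.join("turkish bow", 1, 2)
print(m.recall("turkish bow", 1))  # history of the Crusades
```

The essential point survives translation: an item may belong to many trails at once, since each link is filed under its own trail name rather than in a single fixed index position.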
Moreover, when numerous items have been thus joined together to form a trail, they can be reviewed in turn, rapidly or slowly, by deflecting a lever like that used for turning the pages of a book. It is exactly as though the physical items had been gathered together to form a new book. It is more than this, for any item can be joined into numerous trails.

The owner of the memex, let us say, is interested in the origin and properties of the bow and arrow. Specifically he is studying why the short Turkish bow was apparently superior to the English long bow in the skirmishes of the Crusades. He has dozens of possibly pertinent books and articles in his memex. First he runs through an encyclopedia, finds an interesting but sketchy article, and leaves it projected. Next, in a history, he finds another pertinent item, and ties the two together. Thus he goes, building a trail of many items. Occasionally he inserts a comment of his own, either linking it into the main trail or joining it by a side trail to a particular item. When it becomes evident that the elastic properties of available materials had a great deal to do with the bow, he branches off on a side trail which takes him through textbooks on elasticity and tables of physical constants. He inserts a page of longhand analysis of his own. Thus he builds a trail of his interest through the maze of materials available to him.

And his trails do not fade. Several years later, his talk with a friend turns to the queer ways in which a people resist innovations, even of vital interest. He has an example, in the fact that the outranged Europeans still failed to adopt the Turkish bow. In fact he has a trail on it. A touch brings up the code book. Tapping a few keys projects the head of the trail. A lever runs through it at will, stopping at interesting items, going off on side excursions. It is an interesting trail, pertinent to the discussion. 
So he sets a reproducer in action, photographs the whole trail out, and passes it to his friend for insertion in his own memex, there to be linked into the more general trail.

VIII

Wholly new forms of encyclopedias will appear, ready-made with a mesh of associative trails running through them, ready to be dropped into the memex and there amplified. The lawyer has at his touch the associated opinions and decisions of his whole experience, and of the experience of friends and authorities. The patent attorney has on call the millions of issued patents, with familiar trails to every point of his client's interest. The physician, puzzled by a patient's reactions, strikes the trail established in studying an earlier similar case, and runs rapidly through analogous case histories, with side references to the classics for the pertinent anatomy and histology. The chemist, struggling with the synthesis of an organic compound, has all the chemical literature before him in his laboratory, with trails following the analogies of compounds, and side trails to their physical and chemical behavior. The historian, with a vast chronological account of a people, parallels it with a skip trail which stops only at the salient items, and can follow at any time contemporary trails which lead him all over civilization at a particular epoch. There is a new profession of trail blazers, those who find delight in the task of establishing useful trails through the enormous mass of the common record. The inheritance from the master becomes, not only his additions to the world's record, but for his disciples the entire scaffolding by which they were erected.

Thus science may implement the ways in which man produces, stores, and consults the record of the race. It might be striking to outline the instrumentalities of the future more spectacularly, rather than to stick closely to the methods and elements now known and undergoing rapid development, as has been done here. 
Technical difficulties of all sorts have been ignored, certainly, but also ignored are means as yet unknown which may come any day to accelerate technical progress as violently as did the advent of the thermionic tube. In order that the picture may not be too commonplace, by reason of sticking to present-day patterns, it may be well to mention one such possibility, not to prophesy but merely to suggest, for prophecy based on extension of the known has substance, while prophecy founded on the unknown is only a doubly involved guess.

All our steps in creating or absorbing material of the record proceed through one of the senses - the tactile when we touch keys, the oral when we speak or listen, the visual when we read. Is it not possible that some day the path may be established more directly? We know that when the eye sees, all the consequent information is transmitted to the brain by means of electrical vibrations in the channel of the optic nerve. This is an exact analogy with the electrical vibrations which occur in the cable of a television set: they convey the picture from the photocells which see it to the radio transmitter from which it is broadcast. We know further that if we can approach that cable with the proper instruments, we do not need to touch it; we can pick up those vibrations by electrical induction and thus discover and reproduce the scene which is being transmitted, just as a telephone wire may be tapped for its message.

The impulses which flow in the arm nerves of a typist convey to her fingers the translated information which reaches her eye or ear, in order that the fingers may be caused to strike the proper keys. Might not these currents be intercepted, either in the original form in which information is conveyed to the brain, or in the marvelously metamorphosed form in which they then proceed to the hand? By bone conduction we already introduce sounds into the nerve channels of the deaf in order that they may hear. 
Is it not possible that we may learn to introduce them without the present cumbersomeness of first transforming electrical vibrations to mechanical ones, which the human mechanism promptly transforms back to the electrical form? With a couple of electrodes on the skull the encephalograph now produces pen-and-ink traces which bear some relation to the electrical phenomena going on in the brain itself. True, the record is unintelligible, except as it points out certain gross misfunctioning of the cerebral mechanism; but who would now place bounds on where such a thing may lead?

In the outside world, all forms of intelligence, whether of sound or sight, have been reduced to the form of varying currents in an electric circuit in order that they may be transmitted. Inside the human frame exactly the same sort of process occurs. Must we always transform to mechanical movements in order to proceed from one electrical phenomenon to another? It is a suggestive thought, but it hardly warrants prediction without losing touch with reality and immediateness.

Presumably man's spirit should be elevated if he can better review his shady past and analyze more completely and objectively his present problems. He has built a civilization so complex that he needs to mechanize his record more fully if he is to push his experiment to its logical conclusion and not merely become bogged down part way there by overtaxing his limited memory. His excursion may be more enjoyable if he can reacquire the privilege of forgetting the manifold things he does not need to have immediately at hand, with some assurance that he can find them again if they prove important.

The applications of science have built man a well-supplied house, and are teaching him to live healthily therein. They have enabled him to throw masses of people against one another with cruel weapons. They may yet allow him truly to encompass the great record and to grow in the wisdom of race experience. 
He may perish in conflict before he learns to wield that record for his true good. Yet, in the application of science to the needs and desires of man, it would seem to be a singularly unfortunate stage at which to terminate the process, or to lose hope as to the outcome.

[Thanks to John for this.]

From checker at panix.com Sun May 8 14:58:06 2005
From: checker at panix.com (Premise Checker)
Date: Sun, 8 May 2005 10:58:06 -0400 (EDT)
Subject: [Paleopsych] NYT: (The Omni-incompetent State, Part 114): U.S. to Spend Billions More to Alter Security Systems
Message-ID: 

U.S. to Spend Billions More to Alter Security Systems
New York Times, 5.5.8
http://www.nytimes.com/2005/05/08/national/08screen.html

By ERIC LIPTON

WASHINGTON, May 7 - After spending more than $4.5 billion on screening devices to monitor the nation's ports, borders, airports, mail and air, the federal government is moving to replace or alter much of the antiterrorism equipment, concluding that it is ineffective, unreliable or too expensive to operate. Many of the monitoring tools - intended to detect guns, explosives, and nuclear and biological weapons - were bought during the blitz in security spending after the attacks of Sept. 11, 2001. In its effort to create a virtual shield around America, the Department of Homeland Security now plans to spend billions of dollars more. Although some changes are being made because of technology that has emerged in the last couple of years, many of them are planned because devices currently in use have done little to improve the nation's security, according to a review of agency documents and interviews with federal officials and outside experts. "Everyone was standing in line with their silver bullets to make us more secure after Sept. 11," said Randall J. Larsen, a retired Air Force colonel and former government adviser on scientific issues. "We bought a lot of stuff off the shelf that wasn't effective." 
Among the problems:

- Radiation monitors at ports and borders that cannot differentiate between radiation emitted by a nuclear bomb and naturally occurring radiation from everyday material like cat litter or ceramic tile.

- Air-monitoring equipment in major cities that is only marginally effective because not enough detectors were deployed and were sometimes not properly calibrated or installed. They also do not produce results for up to 36 hours - long after a biological attack would potentially infect thousands of people.

- Passenger-screening equipment at airports that auditors have found is no more likely than before federal screeners took over to detect whether someone is trying to carry a weapon or a bomb aboard a plane.

- Postal Service machines that test only a small percentage of mail and look for anthrax but no other biological agents.

Federal officials say they bought the best available equipment. They acknowledge that it might not have been cutting-edge technology but said that to speed installation they bought only devices that were readily available instead of trying to buy promising technology that was not yet in production. The department says it has created a layered defense that would not be compromised by the failure of a single device. Even if the monitoring is less than ideal, officials say, it is still a deterrent. "The nation is more secure in the deployment and use of these technologies versus having no technologies in place at all," said Brian Roehrkasse, a spokesman for the Department of Homeland Security. Every piece of equipment provides some level of additional security, said Christopher Y. Milowic, a customs official whose office oversees screening at ports and borders. "It is not the ultimate capacity," he said. "But it reduces risk." Some critics say that even though federal agencies were pressed to move quickly by Congress and the administration, they made some poor choices. 
In some cases, agencies did not seek competitive bids or consider cheaper, better alternatives. And not all the devices were tested to see how well they worked in the environments where they would be used. "After 9/11, we had to show how committed we were by spending hugely greater amounts of money than ever before, as rapidly as possible," said Representative Christopher Cox, a California Republican who is the chairman of the Homeland Security Committee. "That brought us what we might expect, which is some expensive mistakes. This has been the difficult learning curve of the new discipline known as homeland security." Radiation at Seaports One after another, trucks stuffed with cargo like olives from Spain, birdseed from Ethiopia, olive oil from France and carpets from India line up at the Port Newark Container Terminal, approaching what looks like an E-ZPass toll gate. In minutes, they will fan out across the nation. But first, they pass through the gate, called a radiation portal monitor, which sounds an alarm if it detects a nuclear weapon or radioactive material that could be used to make a "dirty bomb," a crude nuclear device that causes damage by widely spreading low levels of radiation. Heralded as "highly sophisticated" when they were introduced, the devices have proven to be hardly that. The portal-monitor technology has been used for decades by the scrap metal industry. Customs officials at Newark have nicknamed the devices "dumb sensors," because they cannot discern the source of the radiation. That means benign items that naturally emit radioactivity - including cat litter, ceramic tile, granite, porcelain toilets, even bananas - can set off the monitors. Alarms occurred so frequently when the monitors were first installed that customs officials turned down their sensitivity. 
But that increased the risk that a real threat, like the highly enriched uranium used in nuclear bombs, could go undetected because it emits only a small amount of radiation or perhaps none if it is intentionally shielded. "It was certainly a compromise in terms of absolute capacity to detect threats," said Mr. Milowic, the customs official. The port's follow-up system, handheld devices that are supposed to determine what set off an alarm, is also seriously flawed. Tests conducted in 2003 by Los Alamos National Laboratory found that the handheld machines, designed to be used in labs, produced a false positive or a false negative more than half the time. The machines were the least reliable in identifying the most dangerous materials, the tests showed. The weaknesses of the devices were apparent in Newark one recent morning. A truck, whose records said it was carrying brakes from Germany, triggered the portal alarm, but the backup device could not identify the radiation source. Without being inspected, the truck was sent on its way to Ohio. "We agree it is not perfect," said Rich O'Brien, a customs supervisor in Newark. But he said his agency needed to move urgently to improve security after the 2001 attacks. "The politics stare you in the face, and you got to put something out there." At airports, similar shortcomings in technology have caused problems. The Transportation Security Administration bought 1,344 machines costing more than $1 million each to search for explosives in checked bags by examining the density of objects inside. But innocuous items as varied as Yorkshire pudding and shampoo bottles, which happen to have a density similar to certain explosives, can set off the machines, causing false alarms for 15 percent to 30 percent of all luggage, an agency official said. The frequent alarms require airports across the country to have extra screeners to examine these bags. 
Quick Action After 9/11 Because the machines were installed under tight timetables imposed by Congress, they were squeezed into airport lobbies instead of integrated into baggage conveyor systems. That slowed the screening process - the machines could handle far fewer bags per hour - and pushed up labor costs by hundreds of millions of dollars a year. At busy times, bags are sometimes loaded onto planes without being properly examined, according to several current and former screeners. "It is very discouraging," said a screener who worked at Portland International Airport until last year, but who asked not to be named because he still is a federal employee. "People are just taking your bags and putting them on the airplane." Equipment to screen passengers and carry-on baggage - including nearly 5,000 new metal detectors, X-ray machines and devices that can detect traces of explosives - can be unreliable. A handgun might slip through because screeners rely on two-dimensional X-ray machines, rather than newer, three-dimensional models, for example. The National Academy of Sciences recently described the trace detection devices as having "limited effectiveness and significant vulnerabilities." As a result, the likelihood of detecting a hidden weapon or bomb has not significantly changed since the government took over airport screening operations in 2002, according to the inspector general at the Department of Homeland Security. Transportation security officials acknowledge that they cannot improve performance without new technology, but they dispute suggestions that no progress has been made. "We have created a much more formidable deterrent," said Mark O. Hatfield Jr., a spokesman for the Transportation Security Administration. "Do we have an absolute barrier? No." Counting machinery and personnel, aviation screening has cost more than $15 billion since 2001, a price that Representative John L. Mica, Republican of Florida, says has hardly been worthwhile. 
"Congress is the one that mandated this," Mr. Mica said. "But we should have done more research and development on the technology and put this in gradually." Concerns Despite Reliability Some screening equipment has performed reliably. Machines that test mail at the United States Postal Service's major processing centers have not had a single false alarm after more than a year, officials said. But the monitors detect only anthrax, which sickened postal workers in 2001. And only about 20 percent of mail is tested - mostly letters dropped into blue post boxes, because they are considered the most likely route for a biological attack. In about 30 major cities, equipment used to test air is also very precise: there have been more than 1.5 million tests without a single false positive. But only about 10 monitors were placed in most cities, and they were often miles apart, according to the inspector general of the Environmental Protection Agency. Detecting a biological attack, particularly one aimed at a specific building or area, would require perhaps thousands of monitors in a big city. In addition, as contractors hurried to install the devices before the start of the war with Iraq - the Bush administration feared that Saddam Hussein might use biological weapons on American cities - they were often placed too low or too high to collect satisfactory samples, the inspector general noted. The monitors use filters that must be collected manually every day before they can be analyzed hours later at a lab. "It was an expedient attempt to solve a problem," said Philip J. Wyatt, a physicist and expert on biological weapons monitoring equipment. "What they got is ineffective, wasteful and expensive to maintain." 
Homeland security officials say that they have already moved to address some of the initial problems, and that they are convinced that the monitoring is valuable because it could allow them to recognize an attack about a day sooner than if they learned about it through victims' falling ill. At the Nevada Test Site, an outdoor laboratory that is larger than Rhode Island, the next generation of monitoring devices is being tested. In preparing to spend billions of dollars more on equipment, the Department of Homeland Security is moving carefully. In Nevada, contractors are being paid to build prototypes of radiation detection devices that are more sensitive and selective. Only those getting passing grades will move on to a second competition in the New York port. Similar competitions are under way elsewhere to evaluate new air-monitoring equipment and airport screening devices. That approach contrasts with how the federal government typically went about trying to shore up the nation's defenses after the 2001 attacks. Government agencies often turned to their most familiar contractors, including Northrop Grumman, Boeing and SAIC, a technology giant based in San Diego. The agencies bought devices from those companies, at times without competitive bidding or comprehensive testing. Documents prepared by customs officials in an effort to purchase container inspection equipment show that they were so intent on buying an SAIC product, even though a competitor had introduced a virtually identical version that was less expensive, that they placed the manufacturer's brand name in the requests. The agency has bought more than 100 of the machines at $1 million each. But the machines often cannot identify the contents of ship containers, because many everyday items, including frozen foods, are too dense for the gamma ray technology to penetrate. 
'Continually Upgrading'

The federal government will likely need to spend as much as $7 billion more on screening equipment in coming years, according to government estimates. "One department charged with coordinating efforts and setting standards will result in far better and more efficient technologies to secure the homeland," said Mr. Roehrkasse, the Department of Homeland Security spokesman. Some experts believe that this high-priced push for improvements is necessary, saying the war against terrorism may require the same sort of spending on new weapons and defenses as the cold war did. "You are in a game where you are continually upgrading and you will be forever," said Thomas S. Hartwick, a physicist who evaluates aviation-screening equipment. But given the inevitable imperfection of technology and the vast expanse the government is trying to secure, some warn of putting too much confidence in machines. "Technology does not substitute for strategy," said James Jay Carafano, senior fellow for homeland security at the Heritage Foundation, a conservative think tank. "It's always easier for terrorists to change tactics than it is for us to throw up defenses to counter them. The best strategy to deal with terrorists is to find them and get them." Matthew L. Wald contributed reporting for this article.

From checker at panix.com Sun May 8 14:58:16 2005
From: checker at panix.com (Premise Checker)
Date: Sun, 8 May 2005 10:58:16 -0400 (EDT)
Subject: [Paleopsych] NYT: This Is Your Brain on Motherhood
Message-ID: 

This Is Your Brain on Motherhood
New York Times, 5.5.8
http://www.nytimes.com/2005/05/08/opinion/08ellison.html

By KATHERINE ELLISON

San Francisco

ANYONE shopping for a Mother's Day card today might reasonably linger in the Sympathy section. We can't seem to stop mourning the state of modern motherhood. "Madness" is our new metaphor. "Desperate Housewives" are our new cultural icons. 
And a mother's brain, as commonly envisioned, is impaired by a supposed full-scale assault on sanity and smarts. So strong is this last stereotype that when a satirical Web site posted a "study" saying that parents lose an average of 20 I.Q. points on the birth of their first child, MSNBC broadcast it as if it were true. The danger of this perception is clearest for working mothers, who besides bearing children spend more time with them, or doing things for them, than fathers, according to a recent Department of Labor survey. In addition, the more visibly "encumbered" we are, the more bias we attract: When volunteer groups were shown images of a woman doing various types of work, but in some cases wearing a pillow to make her look pregnant, most judged the "pregnant" woman less competent. Even in liberal San Francisco, a hearing last month to consider a pregnant woman's bid to be named acting director of the Department of Building Inspection featured four speakers commenting on her condition, with one asking if the city truly meant to hire a "pregnancy brain." But what if just the opposite is true? What if parenting really isn't a zero-sum, children-take-all game? What if raising children is actually mentally enriching for mothers - and fathers? This is, in fact, what some leading brain scientists, like Michael Merzenich at the University of California, San Francisco, now believe. Becoming a parent, they say, can power up the mind with uniquely motivated learning. Having a baby is "a revolution for the brain," Dr. Merzenich says. The human brain, we now know, creates cells throughout life, cells more likely to survive if they're used. Emotional, challenging and novel experiences provide particularly helpful use of these new neurons, and what adjectives better describe raising a child? 
Children constantly drag their parents into challenging, novel situations, be it talking a 4-year-old out of a backseat meltdown on the Interstate or figuring out a third-grade homework assignment to make a model of a black hole in space. Often, we'd rather be doing almost anything else. Aging makes us cling ever more fiercely to our mental ruts. But for most of us, our unique bond with our children yanks us out of them. And there are other ways that being a dedicated parent strengthens our minds. Research shows that learning and memory skills can be improved by bearing and nurturing offspring. A team of neuroscientists in Virginia found that mother lab rats, just like working mothers, demonstrably excel at time-management and efficiency, racing around mazes to find rewards and get back to the pups in record time. Other research is showing how hormones elevated in parenting can help buffer mothers from anxiety and stress - a timely gift from a sometimes compassionate Mother Nature. Oxytocin, produced by mammals in labor and breast-feeding, has been linked to the ability to learn in lab animals. Rethinking the mental state of motherhood is reasonable after recent years of evolution of our notion of just what it means to be smart. With our economy newly weighted with people-to-people jobs, and with many professions, including the sciences, becoming more multidisciplinary and collaborative, the people skills we've come to think of as "emotional intelligence" are increasingly prized by many wise employers. An ability to tailor your message to your audience, for instance - a skill that engaged parents practice constantly - can mean the difference between failure and success, at home and at work, as Harvard's president, Lawrence Summers, may now realize. To be sure, sleep deprivation, overwork and too much "Teletubbies" can sap any parent's synapses. 
And to be sure, our society needs to do much more - starting with more affordable, high-quality child care and paid parental leaves - to catch up with other industrialized nations and support mothers and fathers in using their newly acquired smarts to best advantage. That's why some of the recent "mommy lit" complaints are justified, and probably needed to rouse society to action - if only because nobody will be able to stand our whining for much longer. Still, it's worth considering that the torrent of negativity about motherhood comes as part of an era in which intimacy of all sorts is on the decline in this country. Geographically close extended families have long been passé. The marriage rate has declined. And a record percentage of women of child-bearing age today are childless, many by choice. It's common these days to hear people say they don't have time to maintain friendships. Real relationships take a lot of time and work - it's much more convenient to keep in touch by e-mail. But children insist on face time. They fail to thrive unless we anticipate their needs, work our empathy muscles, adjust our schedules and endure their relentless testing. In the process, if we're lucky, we may realize that just this kind of grueling work - with our children, or even with others who could simply use some help - is precisely what makes us grow, acquire wisdom and become more fully human. Perhaps then we can start to re-imagine a mother's brain as less a handicap than a keen asset in the lifelong task of getting smart. Katherine Ellison is the author of "The Mommy Brain: How Motherhood Makes Us Smarter." 
From checker at panix.com Sun May 8 14:58:29 2005 From: checker at panix.com (Premise Checker) Date: Sun, 8 May 2005 10:58:29 -0400 (EDT) Subject: [Paleopsych] NYT Magazine: Jim Holt: Of Two Minds Message-ID: Jim Holt: Of Two Minds New York Times Magazine, 5.5.8 http://www.nytimes.com/2005/05/08/magazine/08WWLN.html The human brain is mysterious -- and, in a way, that is a good thing. The less that is known about how the brain works, the more secure the zone of privacy that surrounds the self. But that zone seems to be shrinking. A couple of weeks ago, two scientists revealed that they had found a way to peer directly into your brain and tell what you are looking at, even when you yourself are not yet aware of what you have seen. So much for the comforting notion that each of us has privileged access to his own mind. Opportunities for observing the human mental circuitry in action have, until recent times, been almost nonexistent, mainly because of a lack of live volunteers willing to sacrifice their brains to science. To get clues on how the brain works, scientists had to wait for people to suffer sometimes gruesome accidents and then see how the ensuing brain damage affected their abilities and behavior. The results could be puzzling. Damage to the right frontal lobe, for example, sometimes led to a heightened interest in high cuisine, a condition dubbed gourmand syndrome. (One European political journalist, upon recovering from a stroke affecting this part of the brain, profited from the misfortune by becoming a food columnist.) Today scientists are able to get some idea of what's going on in the mind by using brain scanners. Brain-scanning is cruder than it sounds. A technology called functional magnetic resonance imaging can reveal which part of your brain is most active when you're solving a mathematical puzzle, say, or memorizing a list of words. 
The scanner doesn't actually pick up the pattern of electrical activity in the brain; it just shows where the blood is flowing. (Active neurons demand more oxygen and hence more blood.) In the current issue of Nature Neuroscience, however, Frank Tong, a cognitive neuroscientist at Vanderbilt University, and Yukiyasu Kamitani, a researcher in Japan, announced that they had discovered a way of tweaking the brain-scanning technique to get a richer picture of the brain's activity. Now it is possible to infer what tiny groups of neurons are up to, not just larger areas of the brain. The implications are a little astonishing. Using the scanner, Tong could tell which of two visual patterns his subjects were focusing on -- in effect, reading their minds. In an experiment carried out by another research team, the scanner detected visual information in the brains of subjects even though, owing to a trick of the experiment, they themselves were not aware of what they had seen. How will our image of ourselves change as the wrinkled lump of gray meat in our skull becomes increasingly transparent to such exploratory methods? One recent discovery to confront is that the human brain can readily change its structure -- a phenomenon scientists call neuroplasticity. A few years ago, brain scans of London cabbies showed that the detailed mental maps they had built up in the course of navigating their city's complicated streets were apparent in their brains. Not only was the posterior hippocampus -- one area of the brain where spatial representations are stored -- larger in the drivers; the increase in size was proportional to the number of years they had been on the job. It may not come as a great surprise that interaction with the environment can alter our mental architecture. But there is also accumulating evidence that the brain can change autonomously, in response to its own internal signals. 
Last year, Tibetan Buddhist monks, with the encouragement of the Dalai Lama, submitted to functional magnetic resonance imaging as they practiced ''compassion meditation,'' which is aimed at achieving a mental state of pure loving kindness toward all beings. The brain scans showed only a slight effect in novice meditators. But for monks who had spent more than 10,000 hours in meditation, the differences in brain function were striking. Activity in the left prefrontal cortex, the locus of joy, overwhelmed activity in the right prefrontal cortex, the locus of anxiety. Activity was also heightened in the areas of the brain that direct planned motion, ''as if the monks' brains were itching to go to the aid of those in distress,'' Sharon Begley reported in The Wall Street Journal. All of which suggests, say the scientists who carried out the scans, that ''the resting state of the brain may be altered by long-term meditative practice.'' But there could be revelations in store that will force us to revise our self-understanding in far more radical ways. We have already had a hint of this in the so-called split-brain phenomenon. The human brain has two hemispheres, right and left. Each hemisphere has its own perceptual, memory and control systems. For the most part, the left hemisphere is associated with the right side of the body, and vice versa. The left hemisphere usually controls speech. Connecting the hemispheres is a cable of nerve fibers called the corpus callosum. Patients with severe epilepsy sometimes used to undergo an operation in which the corpus callosum was severed. (The idea was to keep a seizure from spreading from one side of the brain to the other.) After the operation, the two hemispheres of the brain could no longer directly communicate. Such patients typically resumed their normal lives without seeming to be any different. But under careful observation, they exhibited some very peculiar behavior. 
When, for example, the word ''hat'' was flashed to the left half of the visual field -- and hence to the right (speechless) side of the brain -- the left hand would pick out a hat from a group of concealed objects, even as the patient insisted that he had seen no word. If a picture of a naked woman was flashed to the left visual field of a male patient, he would smile, or maybe blush, without being able to say what he was reacting to -- although he might make a comment like, ''That's some machine you've got there.'' In another case, a female patient's right hemisphere was flashed a scene of one person throwing another into a fire. ''I don't know why, but I feel kind of scared,'' she told the researcher. ''I don't like this room, or maybe it's you getting me nervous.'' The left side of her brain, noticing the negative emotional reaction issuing from the right side, was making a guess about its cause, much the way one person might make a guess about the emotions of another. Each side of the brain seemed to have its own awareness, as if there were two selves occupying the same head. (One patient's left hand seemed somewhat hostile to the patient's wife, suggesting that the right hemisphere was not fond of her.) Ordinarily, the two selves got along admirably, falling asleep and waking up at the same time and successfully performing activities that required bilateral coordination, like swimming and playing the piano. Nevertheless, as laboratory tests showed, they lived in ever so slightly different sensory worlds. And even though both understood language, one monopolized speech, while the other was mute. That's why the patient seemed normal to family and friends. Pondering such split-brain cases, some scientists and philosophers have raised a disquieting possibility: perhaps each of us really consists of two minds running in harness. In an intact brain, of course, the corpus callosum acts as a constant two-way internal-communications channel between the two hemispheres. 
So our everyday behavior does not betray the existence of two independent streams of consciousness flowing along within our skulls. It may be, the philosopher Thomas Nagel has written, that ''the ordinary, simple idea of a single person will come to seem quaint some day, when the complexities of the human control system become clearer and we become less certain that there is anything very important that we are one of.'' It is sobering to reflect how ignorant humans have been about the workings of their own brains for most of our history. Aristotle, after all, thought the point of the brain was to cool the blood. The more that breakthroughs like the recent one in brain-scanning open up the mind to scientific scrutiny, the more we may be pressed to give up comforting metaphysical ideas like interiority, subjectivity and the soul. Let's enjoy them while we can. Jim Holt is a frequent contributor to the magazine. From checker at panix.com Sun May 8 14:58:40 2005 From: checker at panix.com (Premise Checker) Date: Sun, 8 May 2005 10:58:40 -0400 (EDT) Subject: [Paleopsych] Book World: Once Upon a Time Message-ID: Once Upon a Time Washington Post Book World, 5.5.8 http://www.washingtonpost.com/wp-dyn/content/article/2005/05/05/AR2005050501385_pf.html Reviewed by Denis Dutton Sunday, May 8, 2005; BW08 THE SEVEN BASIC PLOTS Why We Tell Stories By Christopher Booker. Continuum. 728 pp. $34.95 In the summer of 1975, moviegoers flocked to see the story of a predatory shark terrorizing a little Long Island resort. The film told of how three brave men go to sea in a small boat and, after a bloody climax in which they kill the monster, return peace and security to their town -- not unlike, Christopher Booker observes, a tale enjoyed by Saxons dressed in animal skins, huddled around a fire some 1,200 years earlier. Beowulf also features a town terrorized by a monster, Grendel, who lives in a nearby lake and tears his victims to pieces. 
Again, the hero Beowulf returns peace to his town after a bloody climax in which the monster is slain. Such echoes have impelled Booker to chart what he regards as the seven plots on which all literature is built. Beowulf and "Jaws" follow the first and most basic of his plots, "Overcoming the Monster." It is found in countless stories from The Epic of Gilgamesh and "Little Red Riding Hood" to James Bond films such as "Dr. No." This tale of conflict typically recounts the hero's ordeals and an escape from death, ending with a community or the world itself saved from evil. Booker's second plot is "Rags to Riches." He places in this category "Cinderella," "The Ugly Duckling," David Copperfield and other stories that tell of modest, downtrodden characters whose special talents or beauty are at last revealed to the world for a happy ending. Next in Booker's taxonomy is "the Quest," which features a hero, normally joined by sidekicks, traveling the world and fighting to overcome evil and secure a priceless treasure (or in the case of Odysseus, wife and hearth). The hero not only gains the treasure he seeks, but also the girl, and they end as king and queen. Related to this is Booker's fourth category, "Voyage and Return," exemplified by Robinson Crusoe, Alice in Wonderland and The Time Machine. The protagonist leaves normal experience to enter an alien world, returning after what often amounts to a thrilling escape. In "Comedy," Booker suggests, confusion reigns until at last the hero and heroine are united in love. "Tragedy" portrays human overreaching and its terrible consequences. The last of the plots of his initial list is "Rebirth," which centers on characters such as Dickens's Scrooge, Snow White and Dostoyevsky's Raskolnikov. To this useful system he unexpectedly adds two more plots: "Rebellion" to cover the likes of 1984 and "Mystery" for the recent invention of the detective novel. 
Booker, a British columnist who was founding editor of Private Eye, possesses a remarkable ability to retell stories. His prose is a model of clarity, and his lively enthusiasm for fictions of every description is infectious. He covers Greek and Roman literature, fairy tales, European novels and plays, Arabic and Japanese tales, Native American folk tales, and movies from the silent era on. He is an especially adept guide through the twists and characters of Wagner's operas. His artfully entertaining summaries jogged many warm memories of half-forgotten novels and films. I wish that an equal amount of pleasure could be derived from the psychology on which he bases his hypothesis. Booker has been working on this project for 34 years, and his quaint psychological starting point sadly shows its age. He believes that Carl Jung's theory of archetypes and self-realization can explain story patterns. Alas, Jung serves him poorly. Malevolent characters, for example, are constantly described by Booker as selfish "Dark Figures" who symbolize overweening egotism. (Booker is from a generation of critics who used to think that simply identifying a symbol in literature can explain anything you please.) In Jungian terms, the dark power of the ego is the source of all evil, along with another of Booker's favorite Jungian ideas, the denial of the villain's "inner feminine." Granted, egotism may explain the wickedness of someone like Edmund in "King Lear." But Grendel? The shark in "Jaws"? Oedipus is arguably a more egotistical character than Iago, who in his devious cruelty is still far more evil. The malevolence of dinosaurs in Jurassic Park or the Cyclops in The Odyssey lies not in their egotism. These creatures just have a perfectly natural taste for mammalian flesh. They are frightening, dramatic threats, to be sure, but not symbols of anything human. Sometimes in fiction, as Freud might have said, a monster is just a monster. 
In Booker's account, denying your "inner feminine" is bad news, and all evildoers, including Lady Macbeth, are guilty of it. Not only do such Jungian clichés wear thin, they get in the way of adequate interpretation. Having seduced so many women and killed the father of one, Don Giovanni will "never develop his inner feminine" and act with the strength of a mature man, according to Booker. This ignores a most piquant feature of Lorenzo Da Ponte's libretto: The Don stubbornly stands up to the Commendatore's ghost at the opera's end and is pulled down to hell on account of it. Booker's discussion of what he calls "the Rule of Three" reveals his obsessive, self-confirming method. From the three questions of Goldilocks and Red Riding Hood to Lear's three daughters, sets of three are ubiquitous in literature, Booker claims. "Once we become aware of the archetypal significance of three in storytelling," he explains, "we can see it everywhere, expressed in all sorts of different ways, large and small." Sure, and anyone who studies the personality types of astrology will see Virgos and Scorpios everywhere too. Relations among three, four or five characters in a narrative enable more dramatic possibilities than relations between two. This is a matter of ordinary logic, not literary criticism. The "archetype of three," as he calls it, is no archetype at all, though he contrives to find it where it is plainly absent. Scylla and Charybdis may look like two dangers to you and me, but the middle way between them actually makes, as Booker explains, three possibilities for Odysseus, thus saving his Rule of Three. That Jane Eyre spends three days running across the moors "conveys to us, by a kind of symbolic shorthand, just how tortuous and difficult" her escape is. But why three? If Jane had spent five days on the moors, or 40 days, she'd have been even more tuckered out. 
And while there are three bears, three chairs and three bowls of porridge in "Goldilocks and the Three Bears," there are actually four characters. The story would better support Booker's theory were it "Goldilocks and the Two Bears." But, like astrologers, he is not keen to consider negative evidence. The first thinker to tackle Booker's topic was Aristotle. Write a story about a character, Aristotle showed, and you face only so many logical alternatives. In tragedy, for instance, either bad things will happen to a good person (unjust and repugnant) or bad things happen to a bad person (just, but boring). Or good things happen to a bad person (unjust again). Tragedy needs bad things to happen to a basically good but flawed person: Though he may not have deserved his awful fate, Oedipus was asking for it. In the same rational spirit, Aristotle works out dramatic relations: A conflict between strangers or natural enemies is of little concern to us. What arouses interest is a hate-filled struggle between people who ought to love each other -- the mother who murders her children to punish her husband, or two brothers who fight to the death. Aristotle knew this for the drama of his age as much as soap-opera writers know it today. Booker has not discovered archetypes, hard-wired blueprints, for story plots, though he has identified the deep themes that fascinate us in fictions. Here's an analogy: Survey the architectural layout of most people's homes and you will find persistent patterns in the variety. Bedrooms are separated from kitchens. Kitchens are close to dining rooms. Front doors do not open onto children's bedrooms or bathrooms. Are these patterns Jungian room-plan archetypes? Hardly. Life calls for logical separations of rooms where families can sleep, cook, store shoes, bathe and watch TV. Room patterns follow not from mental imprints, but from the functions of the rooms themselves, which in turn follow from our ordinary living habits. So it is with stories. 
The basic situations of fiction are a product of fundamental, hard-wired interests human beings have in love, death, adventure, family, justice and adversity. These values counted as much in the Pleistocene era as today, which is why evolutionary psychologists study them intensively. Our fictions are populated with character-types relevant to these themes: beautiful young women, handsome strong men, courageous leaders, children needing protection, wise old people. Add to this threats and obstacles to the fulfillment of love and fortune, including both bad luck and villains, and you have the makings of literature. Story plots are not unconscious archetypes, but follow, as Aristotle realized, from human interests and the logic of what is possible. Booker ends his 700-page treatise with a diatribe against literature of the past two centuries. Modern fiction has "lost the plot," he argues. Moby-Dick initially may look like a heroic Overcoming the Monster tale, but in the end we do not know who is more evil, Captain Ahab or the whale who kills him. While the ambiguities of modernism trouble Booker, some of his readers will be even more disturbed to find "E.T." and Peter Jackson's "Lord of the Rings" movies extravagantly lauded in a book that disparages the complex moral pessimism of Chekhov's "Uncle Vanya" and the achievement of Marcel Proust's Remembrance of Things Past, which he dismisses as "the greatest monument to human egotism in the history of story-telling." Fail though it might in its ambition to offer a single key to literature, The Seven Basic Plots is nevertheless one of the most diverting works on storytelling I've ever encountered. Pity about the Jung, but there's no denying the charm of Booker's twice-told tales. Denis Dutton edits the journal Philosophy and Literature and the Web site Arts & Letters Daily. 
From checker at panix.com Sun May 8 17:50:14 2005 From: checker at panix.com (Premise Checker) Date: Sun, 8 May 2005 13:50:14 -0400 (EDT) Subject: [Paleopsych] Black-White-East Asian IQ differences In-Reply-To: References: Message-ID: nicht diese Toene! G BbBbBbBbAA EE FF FF Greg, ad hominem arguments like these are unworthy of you. Besides, Blacks are indeed mentioned in the initial e-mail, along with Whites and East Asians. And you haven't addressed the question of how much money it might cost to do studies that meet your criteria of scientific adequacy to resolve these issues. On 2005-04-28, Greg Bear opined [message unchanged below]: > Date: Thu, 28 Apr 2005 17:11:41 -0700 > From: Greg Bear > Reply-To: The new improved paleopsych list > To: 'The new improved paleopsych list' > Subject: RE: [Paleopsych] Black-White-East Asian IQ differences > > Frank, what ARE you talking about? The initial discussion mentioned nothing > about blacks, but seemed to be about comparisons between the IQs of Asians > and so-called Whites--what brings out all this stuff about Uplift and > Blacks? And quoting the Constitution! My. > > The shoe seems to fit so well, it pinches. > > Are you a white male, middle-aged or older, mathematically adept, and proud > of your exceptional IQ? Then DO I HAVE A SCIENTIFIC SCAM FOR YOU! > > Not only does this scam claim to prove your nagging suspicions that blacks > are inferior to whites, but it's COMPLETELY GUILT-FREE, because it's > RATIONAL, based on PROVABLE MATHEMATICS! And better than that, it's > supported by the nagging suspicions of PEOPLE JUST LIKE YOU! People who grew > up in a different time. > > You don't answer any of my scientific objections. For a so-called scientific > forum, that's rather sad. > > I do enjoy Mr. 
Mencken, but we were talking about the biology of racial > differences, not TAXPAYER DOLLARS BEING WASTED ON SPONGING LOW-LIFES WHO > HAVEN'T A HOPE IN HELL OF EVER UNDERSTANDING WHY THEY'RE SO INFERIOR TO > ANGRY WHITE MALES. So let's not CHANGE THE SUBJECT. > > Sorry about the caps. In this sort of talk-radio atmosphere, they just > seemed appropriate. > > Greg > > -----Original Message----- > From: paleopsych-bounces at paleopsych.org > [mailto:paleopsych-bounces at paleopsych.org] On Behalf Of Premise Checker > Sent: Thursday, April 28, 2005 10:31 AM > To: The new improved paleopsych list > Subject: RE: [Paleopsych] Black-White-East Asian IQ differences > > Hold on a moment, Greg. What is the issue being addressed? It's why the > trillions of dollars taken from the taxpayers and spent on uplifting > Blacks has not been very successful and whether innate racial differences > constitute a large part of the explanation. Let me ask you, how much do > you think it would cost to get a good answer? (I have worked in the > program evaluation section of the U.S. Department of Education, and this > is one question we dare not address.) And if you don't think a good answer > can be had at a reasonable cost, do you think that no more money should be > spent on this uplift and either returned to the taxpayers or spent on > something that can be reliably evaluated? I am not sure what Mr. Mencken > called the Uplift is an appropriate function of the government. Certainly > not for the Federal government, since it is not among the 18 powers > granted to Congress under the Constitution, Art. 1, Sec. 
8.: > > The United States Constitution: Article I, Section 8: > > Clause 1: The Congress shall have Power To lay and collect Taxes, > Duties, Imposts and Excises, to pay the Debts and provide for the common > Defence and general Welfare of the United States; but all Duties, > Imposts and Excises shall be uniform throughout the United States; > > Clause 2: To borrow Money on the credit of the United States; > > Clause 3: To regulate Commerce with foreign Nations, and among the > several States, and with the Indian Tribes; > > Clause 4: To establish an uniform Rule of Naturalization, and uniform > Laws on the subject of Bankruptcies throughout the United States; > > Clause 5: To coin Money, regulate the Value thereof, and of foreign > Coin, and fix the Standard of Weights and Measures; > > Clause 6: To provide for the Punishment of counterfeiting the Securities > and current Coin of the United States; > > Clause 7: To establish Post Offices and post Roads; > > Clause 8: To promote the Progress of Science and useful Arts, by > securing for limited Times to Authors and Inventors the exclusive Right > to their respective Writings and Discoveries; > > Clause 9: To constitute Tribunals inferior to the supreme Court; > > Clause 10: To define and punish Piracies and Felonies committed on the > high Seas, and Offences against the Law of Nations; > > Clause 11: To declare War, grant Letters of Marque and Reprisal, and > make Rules concerning Captures on Land and Water; > > Clause 12: To raise and support Armies, but no Appropriation of Money to > that Use shall be for a longer Term than two Years; > > Clause 13: To provide and maintain a Navy; > > Clause 14: To make Rules for the Government and Regulation of the land > and naval Forces; > > Clause 15: To provide for calling forth the Militia to execute the Laws > of the Union, suppress Insurrections and repel Invasions; > > Clause 16: To provide for organizing, arming, and disciplining, the > Militia, and for governing such 
Part of them as may be employed in the > Service of the United States, reserving to the States respectively, the > Appointment of the Officers, and the Authority of training the Militia > according to the discipline prescribed by Congress; > > Clause 17: To exercise exclusive Legislation in all Cases whatsoever, > over such District (not exceeding ten Miles square) as may, by Cession of > particular States, and the Acceptance of Congress, become the Seat of > the Government of the United States, and to exercise like Authority over > all Places purchased by the Consent of the Legislature of the State in > which the Same shall be, for the Erection of Forts, Magazines, Arsenals, > dock-Yards, and other needful Buildings;--And > > Clause 18: To make all Laws which shall be necessary and proper for > carrying into Execution the foregoing Powers, and all other Powers > vested by this Constitution in the Government of the United States, or > in any Department or Officer thereof. > > Perhaps you think the Constitution should be amended or ignored. The > amazing thing is that http://www.ed.gov contains a statement that its > activities are unauthorized! > > Frank > > On 2005-04-27, Greg Bear opined [message unchanged below]: > >> Date: Wed, 27 Apr 2005 14:27:28 -0700 >> From: Greg Bear >> Reply-To: The new improved paleopsych list >> To: 'The new improved paleopsych list' >> Subject: RE: [Paleopsych] Black-White-East Asian IQ differences >> >> Must intrude here. This sort of nonsense is so unscientific as to be >> laughable. >> >> Major undefined terms: IQ (what is it really measuring?) >> Intelligence: In what environment does your IQ give you an advantage? >> Nature, society, Mad Max country? >> >> "Genetic"--where is the gene for intelligence, or the set of genes? Do > these >> genes differ between the races? We do not know. 
>> >> This general belief system, expressed with the utmost arrogance in THE > BELL >> CURVE, is generally held by a class of mathematically adept middle-aged > and >> older white males with pretensions to an understanding of some of the > major >> biological issues of our time. Their ignorance of genetics is profound. > Most >> of their language and conceptual structure refers to outmoded genetics of >> forty to fifty years ago--and they've never heard of epigenetics, the > study >> of how genes are switched on and off, activated and deactivated. > Indeed--not >> only do we now know that "defective" genes can be corrected in > non-Mendelian >> ways, having little to do with one's parentage, but even "perfect" genes > can >> be switched off in certain environments, after development and birth. >> >> Population groups put under pressure--through war, prejudicial treatment, >> incarceration, or outright persecution--are likely to witness major >> differences in the EXPRESSION of certain (possibly) genetic traits, which >> could statistically help them adapt to a dangerous and stressful >> environment. These adaptations may result in a skewing of IQ results, even >> should such tests be culturally neutral--which they are not--by focusing >> their nervous reaction to stimuli and deemphasizing their ability to focus >> on tasks of less immediate importance--that is, a written test. Fight or >> write, so to speak. >> >> We cannot test the Irish in 1840's Ireland or New York, or the Hungarians >> pressed over centuries by various contending hordes, or poor white trash > in >> the American South before the Civil War. Supposedly honorable white men in >> their day referred to these populations as "low" and cretinous, and > believed >> that intermarriage would not be advantageous. 
>> >> IQ matters most among equal populations in a fair and civilized society, > all >> things being equal socially; any other circumstance is skewed to those >> raised with various spoons of precious metals and family tradition firmly >> thrust into their mouths. 'Twas ever thus. >> >> So my questions of these researchers would be: "Which fifty percent of the >> genome would you blame? The right half, or the left? The one in the > middle? >> Which genes are you pointing to? Are you speaking of generalities, or >> specifics? If the latter, what specifically are you trying to say? >> >> "And why does this sound so much like the same sort of ignorant, > prejudicial >> crap promulgated throughout the ages by people of influence, to maintain >> their status through any means possible, fair or unfair?" >> >> White mathematically educated males of tested high IQ, trying to prove > that >> IQ is inbred and important... hm. Sounds like a class in search of >> justification to me. >> >> I'd challenge these folks to a duel on the public commons any day of the >> week. Easy money. Facts and native charm versus their almighty IQs. >> >> Greg Bear >> >> -----Original Message----- >> From: paleopsych-bounces at paleopsych.org >> [mailto:paleopsych-bounces at paleopsych.org] On Behalf Of Michael > Christopher >> Sent: Wednesday, April 27, 2005 1:48 PM >> To: paleopsych at paleopsych.org >> Subject: [Paleopsych] Black-White-East Asian IQ differences >> >>>> Black-White-East Asian IQ differences at least >> 50% genetic, major law review journal concludes<< >> >> --If this is true, how should society change to deal >> with it? Also, what is the IQ difference for someone >> with a male or female parent of a different race, or >> for various blends? 
>> >> Michael > _______________________________________________ > paleopsych mailing list > paleopsych at paleopsych.org > http://lists.paleopsych.org/mailman/listinfo/paleopsych > From checker at panix.com Sun May 8 19:16:22 2005 From: checker at panix.com (Premise Checker) Date: Sun, 8 May 2005 15:16:22 -0400 (EDT) Subject: [Paleopsych] WP: Where Lawlessness May Roam Message-ID: Where Lawlessness May Roam The Washington Post, Sunday Outlook section, 5.5.8 Unconventional Wisdom http://www.washingtonpost.com/wp-dyn/content/article/2005/05/06/AR2005050601818_pf.html By Richard Morin Where Lawlessness May Roam We're certainly not encouraging it, but if you're thinking about going on a crime spree and you're scouting for locations, you might want to check out a 50-square-mile sliver of western Idaho. In this remote corner of Yellowstone National Park, a quirky confluence of constitutional technicalities and a goof by Congress more than a century ago may have produced a lawless oasis smack in the heart of God's Country, claims Brian C. Kalt, an associate professor of law at Michigan State University. Kalt insists that his reading of the law is correct -- at least in theory. "The courts may or may not agree that my loophole exists," he acknowledged in his essay "The Perfect Crime" in the forthcoming issue of the Georgetown Law Journal. Kalt says he's not interested in trying to help crooks, but rather in forcing Congress to tidy up the law books. "Crime is bad, after all. But so is violating the Constitution. If the loophole . . . does exist it should be closed, not ignored," he writes in an article that mixes serious scholarship with humor. At the heart of the problem is an obscure bit of legalese buried in the Sixth Amendment known as the "vicinage" requirement. 
(For non-lawyers, vicinage refers to the neighborhood where the crime took place, while venue refers to the location of the trial itself.) The amendment requires that jurors be "of the State and district wherein the crime shall have been committed, which district shall have been previously ascertained by law." From a legal perspective, the problem with Yellowstone Park is that it does not quite fit in Wyoming: Nine percent of the park spills into Montana (about 260 square miles' worth) and Idaho (about 50 square miles). The park was established in 1872, well before the three states were added to the union, and Congress put the entire park in the judicial district of Wyoming -- the only federal court district that includes land in more than one state. At the same time, Kalt said, legislators unwittingly created a potential "Zone of Death." Here's how it might work: "Say that you are in the Idaho portion of Yellowstone, and you decide to spice up your vacation by going on a crime spree. You make some moonshine, you poach some wildlife, you strangle some people and steal their picnic baskets. You are arrested, arraigned in the park and bound over for trial in Cheyenne, Wyo., before a jury drawn from the Cheyenne area. "But Article III, Section 2 [of the Constitution] plainly requires that the trial be held in Idaho, the state in which the crime was committed. Perhaps if you fuss convincingly enough about it, the case would be sent to Idaho. But the Sixth Amendment then requires that the jury be from the state (Idaho) and the district (Wyoming) in which the crime was committed. In other words, the jury would have to be drawn from the Idaho portion of Yellowstone National Park, which, according to the 2000 Census, has a population of precisely zero. . . . Assuming that you do not feel like consenting to trial in Cheyenne, you should go free." In short, Congress goofed back in 1890 when they made Wyoming the 44th state. 
"It should either have shrunk the park or made Wyoming bigger to include all of the park," Kalt said. Ah, legal hindsight is always 20/20. Kalt said the vagaries of venue and vicinage requirements have let people get away with murder before. He quotes an English legal scholar who complained in 1548 that it "often happene[d]" that a murderer would strike his victim in one county, and "by Craft and Cautele [caution]" escape punishment by making sure that the victim died in the next county. "An English jury could only take cognizance of the facts that occurred in its own county, so no jury would be able to find that the killer had committed all of the elements of murder," Kalt wrote. (Rest easy. England closed that loophole centuries ago.) But, professor, that was then. Could you really get away with murder today in your Zone of Death? Perhaps not -- at least not completely. Kalt notes that it "would be hard to limit your criminality to that small space," so you could be charged with conspiracy for things you did elsewhere to further your rampage. Prosecutors also could charge you with lesser crimes punishable by less than six months in jail, which do not require a jury trial. Or the victims' families could sue the pants off you. But the biggest deterrent may be the loophole that allowed your crime binge in the first place. If friends and families of your victims got wind of your plans, they might turn the tables before you left the crime scene, giving you -- in Kalt's words -- "a dose of your own medicine, administering vigilante justice with similar impunity." The Unusual Name Game (Cont.) We all know it's tough being a boy named Sue. Now it turns out it's also a problem to be a classmate of a boy named Sue, according to University of Florida economist David N. Figlio. Figlio found that boys with first names typically given to girls were more likely to misbehave in junior high school than students with less distinctive monikers. 
He also discovered that boys in classes with boys with feminine-sounding names were more likely to have discipline problems and lower standardized test scores. He reports his findings in a new working paper published by the National Bureau of Economic Research. Figlio made news in this column two months ago with his finding that children with unusual names don't fare as well in class. In his latest analysis, he used detailed data collected on more than 76,000 students in the late 1990s from a large school district in Florida. In exchange for access to student records, including names and disciplinary histories, Figlio promised not to reveal the school district or otherwise identify individual students. Overall Figlio found that nearly 2 percent of all boys in his sample had names that were overwhelmingly given to girls. That means the typical Florida middle-schooler will share about one out of every three classes with a boy named Sue . . . or Ashley, Courtney or Shannon. But as the father of three boys, the Wiz just has to ask: How about girls with guy-sounding names? Any effects on the other students from going to school with a girl named Tyler or Sidney? "I did not look as carefully at the girl-named-Brad angle, in part because this is much more common," Figlio wrote in an e-mail. "Indeed, Ashley, Courtney and Shannon were all once boys' names!" The Four-Hour Workday? Men and women may disagree on a lot of things, but in one area of life they're in near-perfect agreement. If they had the choice, they would work fewer hours than they do now, according to a Washington Post-ABC News national poll conducted last month. Equal majorities of working men (55 percent) and women (56 percent) said they'd spend less time on the job if they could continue to have the same standard of living. Fewer than three in 10 said they wouldn't reduce the number of hours they spend on the job. 
But few men and women want to join the leisure class entirely: Slightly fewer than one in five women and men said they would quit working if they could afford to. For the survey, 650 working women and men were interviewed April 21-24. Margin of sampling error for the overall results is plus or minus 4 percentage points. morinr at washpost.com From checker at panix.com Sun May 8 19:16:38 2005 From: checker at panix.com (Premise Checker) Date: Sun, 8 May 2005 15:16:38 -0400 (EDT) Subject: [Paleopsych] Book World: (Ogden Nash) A Gleeful Splash of Ogden Nash Message-ID: This is just for fun. A Gleeful Splash of Ogden Nash Washington Post Book World, 5.5.8 http://www.washingtonpost.com/wp-dyn/content/article/2005/05/05/AR2005050501359_pf.html By Jonathan Yardley OGDEN NASH The Life and Work of America's Laureate of Light Verse By Douglas M. Parker. Ivan R. Dee. 316 pp. $27.50 At the end of the 1920s Ogden Nash was in his late twenties, living in New York City, working as a copywriter in the advertising department of Doubleday, the prominent book publisher, and trying his hand at poetry. It didn't take long, Douglas M. Parker writes, for him to reach "the important conclusion that he simply lacked the talent to become a serious poet: 'There was a ludicrous aspect to what I was trying to do; my emotional and naked beauty stuff just didn't turn out as I had intended.' " Instead he ventured into light verse, which enjoyed a more significant readership then than it does today. This was one of his earliest efforts: The turtle lives twixt plated decks That practically conceal its sex. I think it clever of the turtle In such a fix to be so fertile. The poem "made a remarkable impression on the humorist Corey Ford" and others as well. Soon Nash came up with this: The hunter crouches in his blind Mid camouflage of every kind. He conjures up a quaking noise To lend allure to his decoys. This grownup man, with pluck and luck Is hoping to outwit a duck. 
For my money, poetry doesn't get much better than that, whether "light" or "serious," and Nash did just that for four more decades, until his death in Baltimore on May 19, 1971. It was often said during his lifetime that he and Robert Frost were the only American poets who were able to support themselves and their families on the income from their work as poets, a claim that almost certainly cannot be made for a single American poet today, with the possible exception of Billy Collins. In just about all other respects Nash and Frost could not have been more different, but we can look back on them now as the last vestiges of an age when poetry still mattered in the United States, not just to academics and other poets but to the great mass of ordinary readers. To say that Nash mattered in my own family is gross understatement. My parents -- like Nash, members of the educated but far from wealthy middle class -- awaited each new issue of the New Yorker with the eager expectation that a new Nash poem would be found therein. For a couple of summers my family vacationed on New Hampshire's tiny coastline, where my father chatted up the great man on the beach. I caught the infection as a teenager and in high-school senior English wrote my class paper on Nash. My teacher, whom I revered, declared that "your comments are delicate and restrained," that "you express your admiration for Mr. Nash tastefully and with tact," and handed me an A-, a truly rare event in my sorry academic history. That same teacher also noted, tactfully, that Nash's "poetic credo is perhaps stated in 'Very Like a Whale' and perhaps will interest you." This poem is indeed a key to Nash. 
It begins, "One thing that literature would be greatly the better for/ Would be a more restrained employment by authors of simile and metaphor," takes note of Byron's "the Assyrian came down like a wolf on the fold" and then takes exception to it -- "No, no, Lord Byron, before I'll believe that this Assyrian was actually like a wolf I must have some kind of proof;/ Did he run on all fours and did he have a hairy tail and a big red mouth and big white teeth and did he say Woof woof?" -- and closes with a flourish: That's the kind of thing that's being done all the time by poets, from Homer to Tennyson; They're always comparing ladies to lilies and veal to venison. And they always say things like that the snow is a white blanket after a winter storm. Oh it is, is it, all right then, you sleep under a six-inch blanket of snow and I'll sleep under a half-inch blanket of unpoetical blanket material and we'll see which one keeps warm, And after that maybe you'll begin to comprehend dimly What I mean by too much metaphor and simile. It's all right there: Nash's irreverence, his delayed and often improbable rhymes ("dimly" and "simile"), his long, death-defying lines of verse, his delight in tweaking the pompous and pretentious. He liked to say that since he never could be anything more than a bad good poet he would settle for being a good bad poet, but there was nothing bad about his verse, as was commonly recognized by other poets, writers and critics. W.H. Auden thought he was "one of the best poets in America," Clifton Fadiman praised his "dazzling assortment of puns, syntactical distortions and word coinages," and when Scott Fitzgerald's daughter sent her father a bad imitation of Nash, he replied: "Ogden Nash's poems are not careless, they all have an extraordinary inner rhythm. They could not possibly be written by someone who in his mind had not calculated the feet and meters to the last iambus or trochee. 
His method is simply to glide a certain number of feet and come up smack against his rhyming line. Read over a poem of his and you will see what I mean." Indeed. That astute judgment is borne out in just about everything Nash wrote, as well as in the utter failure of all those -- their numbers were (and are) uncountable -- who tried to imitate him. To say that he was the best American light poet of his or any other day is true beyond argument, but it is scarcely the whole story. He was one of the best American poets of his or any other day, period, and it is a great injustice that critics customarily pigeonhole (and dismiss) him as a mere entertainer because he committed the unpardonable sin of being funny. Nash emerges, in Parker's capable if conventional biography, as a decent man whose inner life probably was a lot more complicated than his verse suggests. He was born into comfortable circumstances in suburban New York, but those circumstances changed dramatically with his father's business failure. Nash put in only a year at college before going to New York City and the real world, but he was exceptionally well read and universally esteemed among his many friends for the brilliance of his mind. He tended to drink a bit too much and was prone to depression, especially in later life, but people loved to be with him. "We hung onto him," one friend said. "He was a great lifesaver for everybody. . . . He was lovely and amusing and fun." The great love of his life was Frances Leonard, a belle of Baltimore whom he met there in 1928, courted assiduously (sometimes desperately) and at last married three years later. She was charming, beautiful and, when the occasion called for it, difficult. He learned how to deal with her moods, and his "devotion to Frances never wavered." 
They had two daughters, whom he adored and about whom he wrote many poems, some of them agreeably sentimental, some of them funny, all of them astute: I have a funny daddy Who goes in and out with me, And everything that baby does My daddy's sure to see And everything that baby says, My daddy's sure to tell You must have read my daddy's verse I hope he fries in hell. Though Nash earned a decent income off the poems he sold to the New Yorker, the Saturday Evening Post and other magazines, he and Frances had expensive tastes, and he had ambitions beyond poetry. Like many other writers of his day, he wanted to succeed in the Broadway theater. Unlike most others, he actually did, with "One Touch of Venus," a musical by Kurt Weill for which he wrote the lyrics and collaborated with S.J. Perelman on the book. The show opened in October 1943 and ran for an impressive 567 performances. One of the songs, "Speak Low," remains a classic of cabaret and jazz and has been recorded by many of the country's best singers. Strictly for money, Nash went onto the lecture circuit in 1945. His "tours would occupy Nash for several weeks a year for nearly twenty years and have a significant impact on his life and health." The tours were exhausting, but audiences invariably were large and welcoming; Nash was gratified by this direct contact with his readers and kept on the circuit long after its effect on his health had become deleterious. Never robust, by the time he hit his sixties he suffered from numerous ailments, many of them intestinal and some of them debilitating. Toward the end of his life Nash agreed to deliver the commencement address at his daughter Linell's boarding school. Perhaps subconsciously aware of the approaching end, he made it "his own valedictory." He spoke up for humor: "It is not brash, it is not cheap, it is not heartless. Among other things I think humor is a shield, a weapon, a survival kit. . . . 
So here we are several billion of us, crowded into our global concentration camp for the duration. How are we to survive? Solemnity is not the answer, any more than witless and irresponsible frivolity is. I think our best chance lies in humor, which in this case means a wry acceptance of our predicament. We don't have to like it but we can at least recognize its ridiculous aspects, one of which is ourselves." Today, when we need to laugh perhaps more than ever before, we can only thank God for Ogden Nash. Jonathan Yardley's e-mail address is yardleyj at washpost.com. From he at psychology.su.se Mon May 9 13:36:26 2005 From: he at psychology.su.se (Hannes Eisler) Date: Mon, 9 May 2005 15:36:26 +0200 Subject: [Paleopsych] fads and atoms In-Reply-To: <1ee.3b3eba41.2fad6251@aol.com> References: <1ee.3b3eba41.2fad6251@aol.com> Message-ID: What about incorrectly folded prions? > >The following article hits the motherlode when it comes to our past >discussions of Ur patterns, iteration, and fracticality. Ur >patterns are those that show up on multiple levels of emergence, >patterns that make anthropomorphism a reasonable way of doing >science, patterns that explain why a metaphor can capture in its >word-picture the underlying structure of a whirlwind, a brain-spin, >or a culture-shift. > > > >Here's how a pattern in the molecules of magnets repeats itself in >the mass moodswings of human beings. Howard > >Retrieved May 6, 2005, from the World Wide Web >http://www.newscientist.com/article.ns?id=mg18624984.200 One law rules dedicated followers of fashion 06 >May 2005 Exclusive from New Scientist Print Edition Mark Buchanan >FADS, fashions and dramatic shifts in public opinion all appear to >follow a physical law: one of the laws of magnetism. 
Quentin >Michard of the School of Industrial Physics and Chemistry in Paris >and Jean-Philippe Bouchaud of the Atomic Energy Commission in >Saclay, France, were trying to explain three social trends: >plummeting European birth rates in the late 20th century, the rapid >adoption of cellphones in Europe in the 1990s and the way people >clapping at a concert suddenly stop doing so. In each case, they >theorised, individuals not only have their own preferences, but also >tend to imitate others. "Imitation is deeply rooted in biology as a >survival strategy," says Bouchaud. In particular, people frequently >copy others who they think know something they don't. To model the >consequences of imitation, the researchers turned to the physics of >magnets. An applied magnetic field will coerce the spins of atoms in >a magnetic material to point in a certain direction. And often an >atom's spin direction pushes the spins of neighbouring atoms to >point in a similar direction. And even if an applied field changes >direction slowly, the spins sometimes flip all together and quite >abruptly. The physicists modified the model such that the atoms >represented people and the direction of the spin indicated a >person's behaviour, and used it to predict shifts in public >opinion. In the case of cellphones, for example, it is clear that >as more people realised how useful they were, and as their price >dropped, more people would buy them. But how quickly the trend took >off depended on how strongly people influenced each other. The >magnetic model predicts that when people have a strong tendency to >imitate others, shifts in behaviour will be faster, and there may >even be discontinuous jumps, with many people adopting cellphones >virtually overnight. More specifically, the model suggests that the >rate of opinion change accelerates in a mathematically predictable >way, with ever greater numbers of people changing their minds as the >population nears the point of maximum change. 
Michard and Bouchaud >checked this prediction against their model and found that the >trends in birth rates and cellphone usage in European nations >conformed quite accurately to this pattern. The same was true of the >rate at which clapping died away in concerts. > > >---------- >Howard Bloom >Author of The Lucifer Principle: A Scientific Expedition Into the >Forces of History and Global Brain: The Evolution of Mass Mind From >The Big Bang to the 21st Century >Visiting Scholar-Graduate Psychology Department, New York >University; Core Faculty Member, The Graduate Institute >www.howardbloom.net >www.bigbangtango.net >Founder: International Paleopsychology Project; founding board >member: Epic of Evolution Society; founding board member, The Darwin >Project; founder: The Big Bang Tango Media Lab; member: New York >Academy of Sciences, American Association for the Advancement of >Science, American Psychological Society, Academy of Political >Science, Human Behavior and Evolution Society, International Society >for Human Ethology; advisory board member: Youthactivism.org; >executive editor -- New Paradigm book series. >For information on The International Paleopsychology Project, see: >www.paleopsych.org >for two chapters from >The Lucifer Principle: A Scientific Expedition Into the Forces of >History, see www.howardbloom.net/lucifer >For information on Global Brain: The Evolution of Mass Mind from the >Big Bang to the 21st Century, see www.howardbloom.net > > >_______________________________________________ >paleopsych mailing list >paleopsych at paleopsych.org >http://lists.paleopsych.org/mailman/listinfo/paleopsych -- ------------------------------------- Prof. 
Hannes Eisler Department of Psychology Stockholm University S-106 91 Stockholm Sweden e-mail: he at psychology.su.se fax : +46-8-15 93 42 phone : +46-8-163967 (university) +46-8-6409982 (home) internet: http://www.psychology.su.se/staff/he -------------- next part -------------- An HTML attachment was scrubbed... URL: From checker at panix.com Mon May 9 21:04:35 2005 From: checker at panix.com (Premise Checker) Date: Mon, 9 May 2005 17:04:35 -0400 (EDT) Subject: [Paleopsych] NYTDBR: That 'Prozac' Man Defends the Gravity of a Disease Message-ID: That 'Prozac' Man Defends the Gravity of a Disease New York Times Daily Book Review, 5.5.9 http://www.nytimes.com/2005/05/09/books/09masl.html By JANET MASLIN In his new book, Peter D. Kramer tells a story about traveling to promote the best-known of his earlier books, "Listening to Prozac," and regularly encountering the same kind of wiseguy in lecture audiences. Wherever he went, somebody would ask him whether the world would be shorter on Impressionist masterpieces if Prozac had been prescribed for Vincent van Gogh. Sunflowers and starry nights aside, this anecdote is revealing. It conveys both the facts that "Listening to Prozac" made a mental health celebrity out of Dr. Kramer (who is a clinical professor of psychiatry at Brown University) and that the book's success left him uneasy. He became a target, not only of New Yorker cartoons (one of which featured a Prozac-enhanced Edgar Allan Poe being nice to a raven) but of condescension from his professional peers. He found out that there was no intellectual advantage to be gained from pointing the way to sunnier moods. "Against Depression" is a defensive maneuver against such vulnerability. With both a title and an argument that summon Susan Sontag (in "Against Interpretation" and "Illness as Metaphor"), the author argues against the idea that depression connotes romance or creativity. 
While fully acknowledging depression's seductiveness (Marlene Dietrich is one of his prototypes of glamorous apathy), and grasping how readily the connection between gloom and spiritual depth has been made, Dr. Kramer argues for a change in priorities. He maintains that depression's physiology and pathology matter more than its cachet. Dr. Kramer makes this same point over and over in "Against Depression." It may be self-evident, but it's not an idea that easily sinks in. As this book points out, the tacit glorification of depression inspires entire art forms: "romantic poetry, religious memoir, inspirational tracts, the novel of youthful self-development, grand opera, the blues." There isn't much comparable magnetism in the realms of resilience, happiness and hope. What's more, he says, our cultural embrace of despair has a respected pedigree. Depression is the new tuberculosis: "an illness that signifies refinement," as opposed to one that signifies unpleasantness and pain. In a book that mixes medical theory, case histories and the occasional flash of autobiography, Dr. Kramer speaks of having been immersed in depression - "not my own" - when inundated with memoirs about the depressed and their pharmacological adventures. He finds there is a lot more confessional writing of this sort than there is about suffering from, say, kidney disease. But depression, in his view, is as dangerous and deserving of treatment as any other long-term affliction. When regarded in purely medical terms, evaluated as a quantifiable form of degeneration, depression loses its stylishness in a hurry. Here, matters grow touchy: the author is careful to avoid any remedial thoughts that might appear to promote the interests of drug companies. So there are no miracle cures here; there is just the hope that an embrace of strength and regeneration can supplant the temptation to equate despair with depth. "Against Depression" returns repeatedly to this central, overriding premise. Perhaps Dr. 
Kramer's talk-show-ready scare tactics are essential to his objectives. "The time to interrupt the illness is yesterday," he writes, building the case for why even seemingly brief interludes of depression can signal a relentless pattern of deterioration in a patient's future. For anyone who has spent even two straight weeks feeling, for instance, sad, lethargic, guilty, alienated and obsessed with trifles, "Against Depression" has unhappy news. The author does not stop short of declaring that "depression is the most devastating disease known to humankind." But this claim, like much of the medical data discussed here, is open to interpretation and heavily dependent on the ways in which individual factors are defined. How far do the incapacitating properties of depression extend? Do they lead only to sadness and paralysis, or also to self-destructive behavior, addictions, failures, job losses and patterns passed down to subsequent generations? Whatever the case, Dr. Kramer is clearly well armed for the debate he will incite. While its medical information, particularly about depression-related damage to the brain, is comparatively clear-cut, it is in the realm of culture that "Against Depression" makes its strongest case. In these matters, Dr. Kramer is angry and defensive: he finds it outrageous that William Styron's "Darkness Visible" endows depression with such vague witchcraft ("a toxic and unnamable tide," "this curious alteration of consciousness") or that Cynthia Ozick can complain that John Updike's "fictive world is poor in the sorrows of history." He himself finds Updike's world rich in life-affirming attributes that tend to be underrated. He wonders how much of the uniformly acknowledged greatness of Picasso's blue period has to do with its connection with the suicide of one of Picasso's friends. By the same token, he is amazed by a museum curator's emphasis on the bleakest work of Bonnard, though this painter strikes Dr. 
Kramer as "a man for whom fruit is always ripe." Similar material, with the potential to illustrate the high status of low moods, is endless. There is a whole chapter on Sylvia Plath that the author didn't even bother to write. There is more breadth of evidence than innovative thinking in "Against Depression." Nonetheless, this book successfully advances the cartography of a (quite literally) gray area between physical and mental illness. And in the process it settles a few scores for the author, whose last book was a novel about a radical blowing up trophy houses on Cape Cod. Here is his chance to assert that he wrote his senior thesis on death in Dickens's writing; that he listened to a lot of Mozart and Schubert in college; that he, too, has succumbed to the erotic power of bored, affectless, emotionally unavailable women in candlelit rooms. But he wrote this book in a state of reasonable contentment. He finds life well worth living. He's tired - in ways that have potent ramifications for all of us - of being treated as a lightweight for that. From checker at panix.com Mon May 9 21:04:56 2005 From: checker at panix.com (Premise Checker) Date: Mon, 9 May 2005 17:04:56 -0400 (EDT) Subject: [Paleopsych] CHE: Understanding 'The Sociopath Next Door' Message-ID: Understanding 'The Sociopath Next Door' The Chronicle of Higher Education, 5.5.13 http://chronicle.com/weekly/v51/i36/36a01202.htm VERBATIM By PETER MONAGHAN Martha Stout, a former clinical instructor in psychiatry at Harvard University. Recent studies suggest that one in 25 Americans is a sociopath, without conscience and ready to prey on others. But not all of them are the cunning killers of television crime dramas, says Ms. Stout, who dissects the phenomenon in The Sociopath Next Door: The Ruthless Versus the Rest of Us (Broadway Books). Q. Aren't many sociopaths likely to be in prison, and not among us? A. 
It turns out very few sociopaths apparently are in jail, and, as a matter of fact, people who are in jail are not all sociopaths. And most sociopaths are not violent. Q. But 4 percent seems high, no? A. Statistical studies are difficult to interpret... but my colleagues tend to tell me that they think it's an even larger number than that. ... When you realize that the absence of conscience can motivate lesser behaviors than going out and being a serial killer, the statistic starts to make more sense. ... We're talking about the boss who ridicules people just to make them jump, or the spouse who abuses the other spouse just to make him or her jump. Q. And that's where sociopathy enters everyday life? A. Exactly. Most sociopaths are just like everybody else. They're average people with average intelligence or sometimes even less-than-average intelligence. And the games they play are much lesser, more personal, and private games. Q. Does culture drive sociopathy? A. It appears to be about 50 percent inheritable; as for the other 50 percent that seems to be caused by the environment, nobody has really explained that. Sociopaths are not abused more as children than other groups. So probably the cultural explanation is a good one. In certain Far Eastern countries, notably Japan and Taiwan, the observed rates of sociopathy are far less -- and in those cultures there's more an emphasis on contributing to the group, and on respect for life, while our culture has its capitalistic emphasis on winning at all costs. Q. If not violent, sociopaths are generally getting their own way, right? A. Exactly. I hear comments such as, "This was the most charming person I ever met, the sexiest, the most intense. ..." I call this a predatory charisma that is difficult to explain but definitely exists. They're also very good at faking the emotions that the rest of us actually feel, such that they look normal. Q. Is it treatable or curable? A. Unfortunately not.
We don't know how to instill conscience where there is none. ... Sociopaths seldom come into treatment unless they've been court-referred, and they don't seem to be in any kind of psychological pain. From checker at panix.com Mon May 9 21:05:06 2005 From: checker at panix.com (Premise Checker) Date: Mon, 9 May 2005 17:05:06 -0400 (EDT) Subject: [Paleopsych] CHE: 2 Books Explore the Sins of Anthropologists Past and Present Message-ID: 2 Books Explore the Sins of Anthropologists Past and Present The Chronicle of Higher Education, 5.5.13 http://chronicle.com/weekly/v51/i36/36a01701.htm HOT TYPE By DAVID GLENN INHUMAN ANTHROPOLOGY: One day in 1997, Gretchen E. Schafft, an applied anthropologist in residence at George Washington University, paid a visit to the Smithsonian Institution's National Anthropological Archives. Her goal that day was relatively prosaic. She wanted to read some World War II-era correspondence among American anthropologists. She wondered how much they had known at that time about the crimes committed by some of their German counterparts who had lent their services to the Nazi regime. What Ms. Schafft found instead were 75 boxes full of material produced in Poland by the Nazi anthropologists themselves. The material had been seized by U.S. soldiers in 1945 and given to the Smithsonian by the Pentagon two years later. No Smithsonian staff member had ever cataloged the boxes, which had apparently gone unnoticed for 50 years. The collection was difficult to stomach. It included human hair samples, fingerprints, photographs, drawings of head circumferences, and other artifacts of the Nazi regime's mania for categorizing human bodies. The Nazis were obsessed with salvaging, as they saw it, the German and other allegedly Nordic elements of the Polish population. If, in 1940, a Polish child's hair was sufficiently blond, and the shape of the head sufficiently "Aryan," he or she was likely to be forcibly sent west for "Germanization." 
Young people deemed purely Polish by the Nazis were shipped to work camps. Jews and Roma, of course, faced worse. In her new book, From Racism to Genocide: Anthropology in the Third Reich (University of Illinois Press), Ms. Schafft explores how the principles of early-20th-century physical anthropology, both scientific and pseudoscientific, were put to work by the Nazis. Several months after the invasion of Poland, Hitler's aides established the Institute for German Work in the East, which employed scholarly anthropologists to complete such tasks as "racial-biological investigation of groups whose value cannot immediately be determined" and "racial-biological investigation of Polish resistance members." Why were these anthropologists -- many of whom had received serious training at Germany's best universities -- willing to enlist in such projects? "I think, first of all, that they really were ideologically in tune with the government," Ms. Schafft says. "And, secondly, I think they believed that measurement data was somehow sacrosanct. To some extent, I think we still believe that. I think that's a very dangerous belief. Measurement data without context has to be viewed very suspiciously." A few years after her discovery at the Smithsonian, Ms. Schafft was contacted by a physical anthropologist who wanted to use the Nazis' data to shed light on "patterns of migration and population settlement." She resisted, arguing that the information had been collected through cruel means and for evil purposes, and is in any case highly suspect. The Nazi anthropologists often seem to have been absurdly insensitive to context. For example, they drew sweeping conclusions about alleged Russian physical and social traits on the basis of studies of half-starved Soviet soldiers in prisoner-of-war camps. "The data in and of themselves were useless," she says. "We shouldn't give the Nazis a second opportunity by rehashing these old data." 
The data will, however, be preserved for other purposes. The Nazi materials will soon be returned to Jagiellonian University, in Poland. (The Smithsonian will retain a digitized copy.) "What will be of most use to the people of Poland," Ms. Schafft says, "are, first, the records of Jews and others interviewed at the Tarnów ghetto. Those will give some families the last indication of where their relatives were. And, second, the amazing photographs of people in villages throughout Poland. There are portraits of hundreds, if not more than a thousand, identifiable individuals." In a small way, she hopes, maintaining the collection in Poland will preserve the memory of a few of the victims of science -- and politics -- gone mad. *** Some related moral dilemmas are chewed over in Biological Anthropology and Ethics: From Repatriation to Genetic Identity (State University of New York Press), a collection edited by Trudy R. Turner, a professor of anthropology at the University of Wisconsin at Milwaukee. The book's 20 essays span a range of topics from the treatment of primates in the field (when is it acceptable to use anesthesia and radio collars?) to the sharing of data with colleagues (how quickly should scholars give their results to the major international DNA databases?). Some of the most contentious debates, however, concern the ground rules for working with human remains. Ever since 1990, when Congress passed the Native American Graves Protection and Repatriation Act, or Nagpra, anthropologists have argued about how the law's provisions should be understood and enforced. Frederika A. Kaestle, an assistant professor of anthropology at Indiana University at Bloomington and a contributor to the book, says the most difficult debates concern the study of human remains that are more than 7,000 years old. In such cases, it is often impossible to determine any direct ancestry or cultural affiliation with modern American Indian groups. In her own scholarship, Ms.
Kaestle interprets the law's provisions very strictly, she says. "I won't work with remains that were found on private land. Because that material isn't covered under Nagpra it's a little too iffy for me ethically." She is optimistic that even if the law is tightened, as some American Indian advocates have proposed, it will still be feasible for scholars to do DNA studies of ancient remains. "The climate is changing a bit," she says. "Some Native American groups are not only accepting but promoting this work as something that they're interested in." From checker at panix.com Mon May 9 21:04:46 2005 From: checker at panix.com (Premise Checker) Date: Mon, 9 May 2005 17:04:46 -0400 (EDT) Subject: [Paleopsych] CHE: Novel Perspectives on Bioethics Message-ID: Novel Perspectives on Bioethics The Chronicle of Higher Education, 5.5.13 http://chronicle.com/weekly/v51/i36/36b00601.htm By MARTHA MONTELLO On March 16, the Kansas Legislature heatedly debated a bill that would criminalize all stem-cell research in the state. Evangelical-Christian politicians and conservative lawmakers argued with molecular biologists and physicians from the University of Kansas' medical school about the morality of therapeutic cloning. Up against a substantial audience of vocal religious conservatives, William B. Neaves, CEO and president of the Stowers Institute for Medical Research, a large, privately financed biomedical-research facility in Kansas City, began his impassioned defense of the new research by giving his credentials as "a born-again Christian for 30 years." Barbara Atkinson, executive vice chancellor of the University of Kansas Medical Center, tried to articulate the difference between "a clump of cells in a petri dish" and what several hostile representatives repeatedly interrupted to insist is "early human life." Clearly, in this forum, language mattered. Each word carried wagonloads of moral resonance. I am a literature professor. 
I was at the hearing because I am also chairwoman of the pediatric-ethics committee at the University of Kansas Medical Center. I listened to the debates get more and more heated as the positions got thinner and more polarized, and I kept thinking that these scientists and lawmakers needed to read more fiction and poetry. Leon R. Kass, chairman of the President's Council on Bioethics, apparently feels the same way. He opened the council's first session by asking members to read Hawthorne's story "The Birthmark," and he has since published an anthology of literature and poetry about bioethics issues. The fight in Kansas (the bill was not put to a vote) is in some ways a microcosm of what has been happening around the country. From Kevorkian to Schiavo, cloning to antidepressants, issues of bioethics increasingly underlie controversies that dominate public and political discussion. Decisions about stem-cell research, end-of-life choices, organ transplantation, and mind- and body-enhancing drugs, among others, have become flash points for front-page news day after day. At the same time, some good literary narratives have emerged over the past few years that reveal our common yet deeply individual struggles to find an ethics commensurate with rapid advances in the new science and technologies. Kazuo Ishiguro's elegiac, disturbing new novel, Never Let Me Go, re-imagines our world in a strange, haunting tale of mystery, horror, love, and loss. Set in "England, 1990s," the story is pseudohistorical fiction with a hazy aura of scientific experimentation. A typical Ishiguro narrator, Kathy H. looks back on her first three decades, trying to puzzle out their meaning and discern the vague menace of what lies ahead. In intricate detail she sifts through her years at Hailsham, an apparently idyllic, if isolated, British boarding school, "in a smooth hollow with fields rising on all sides."
Kathy and the other students were nurtured by watchful teachers and "guardians," who gave them weekly medical checks, warned them about the dangers of smoking, and monitored their athletics triumphs and adolescent struggles. Sheltered and protected, she and her friends Ruth and Tommy always knew that they were somehow special, that their well-being was important to the society somewhere outside, although they understood that they would never belong there. From the opening pages, a disturbing abnormality permeates their enclosed world. While the events at Hailsham are almost absurdly trivial -- Tommy is taunted on the soccer field, Laura gets caught running through the rhubarb garden, Kathy loses a favorite music tape -- whispered secrets pass among guardians and teachers, and the atmosphere is ominous -- as Kathy puts it, "troubling and strange." The children have no families, no surnames, no possessions but castoffs -- other people's junk. Told with a cool dispassion through a mist of hints, intuitions, and guesses, Kathy's memories gradually lift the veil on a horrifying reality: These children were cloned, created solely to become organ donors. Once they leave Hailsham (with its Dickensian reverberations of Havisham, that ghostly abuser of children) they will become "caregivers," then "donors," and if they live to make their "fourth donation," will "complete." The coded language that Kathy has learned to describe her fate flattens the unthinkable and renders it almost ordinary, simply what is, so bloodlessly that it heightens our sense of astonishment. What makes these doomed clones so odd is that they never try to escape their fate. Almost passive, they move in a fog of self-reinforced ignorance, resigned to the deadly destiny for which they have been created. However, in a dramatic scene near the end of the novel, Kathy and Tommy do try to discover, from one of the high-minded ladies who designed Hailsham, if a temporary "deferral" is possible. 
It is too late for any of them now, the woman finally divulges. Once the clones were created, years ago during a time of rapid scientific breakthroughs, their donations became the necessary means of curing previously incurable conditions. Society has become dependent on them. Now there is no turning back. The only way people can accept the program is to believe that these children are not fully human. Although "there were arguments" when the program began, she tells them, people's primary concern now is that their own family members not die from cancer, heart disease, diabetes, or motor-neuron diseases. People outside prefer to believe that the transplanted organs come from nowhere, or at least from beings less than human. Readers of Ishiguro's fiction will recognize his mastery in creating characters psychologically maimed by an eerie atrocity. From his debut novel, A Pale View of Hills (Putnam, 1982), Ishiguro's approach to horror has been oblique, restrained, and enigmatic. The war-ravaged widow from Nagasaki in that work presages the repressed English butler of The Remains of the Day (Random House, 1990) and Kathy herself, all long-suffering victims with wasted lives whose sense of obligation robs them of happiness. Their emotions reined in, their sight obscured, they are subject to wistful landscapes, long journeys, and a feeling of being far from the possibility of home and belonging. Never Let Me Go, however, ventures onto new terrain for Ishiguro by situating itself within current controversies about scientific research. Taking on some of the moral arguments about genetic engineering, the novel inevitably calls into question whether such fiction adds to the debates or clouds them -- and whether serious fiction about bioethics is enriched by the currency of its topic or hampered by it. Here Ishiguro's novel joins company with others that are centered in contemporary bioethics issues and might be considered a genre of their own.
A decade ago, Doris Betts penetrated the intricate emotions around living donors' organ transplantation in her exquisitely rendered Souls Raised From the Dead. The novel offered a human dimension and nuanced depth to this area of medical-ethics deliberations, which were making headline news. In Betts's story, a dying young daughter needs as close a match as possible for a new kidney. Her parents face complexities and contradictions behind informed consent and true autonomy that are far more subtle, wrenching, and real than any medical document or philosophy-journal article can render. Betts does justice to the medical and moral questions surrounding decisions that physicians, patients, and families must make regarding potential organ donations. What makes the book so compelling, though, is its focus on the various and often divergent emotional strategies that parents and children use to cope with fear, sacrifice, and impending loss. The 13-year-old Mary Grace, her parents, and grandparents reveal themselves as fully rounded, noninterchangeable human beings who come to their decisions and moral understandings over time, within their own unique personal histories and relationships with each other. As the therapeutic possibilities of transplant surgery were breaking new ground in hospitals across the country, surgeons, families, and hospital ethics committees grappled with dilemmas about how to make good choices between the medical dictum to "do no harm" and the ethical responsibility to honor patients' sovereignty over their own bodies. Betts's novel captured the difficulty of doing the right thing for families enduring often inexpressible suffering: How much sacrifice can we expect of one family member to save another? The ethical complexities regarding organ donations, and particularly the dilemmas associated with decisions to conceive children as donors, are escalating. 
Four years ago The New York Times reported on two families who each conceived a child to save the life of another one. Fanconi anemia causes bone-marrow failure and eventually leukemia and other kinds of cancer. Children born with the disease rarely live past early childhood. Their best chance of survival comes from a bone-marrow transplant from a perfectly matched sibling. Many Fanconi parents have conceived another child in the hope that luck would give them an ideal genetic match. These two couples, however, became the first to use new reproductive technologies to select from embryos resulting from in vitro fertilization, so they could be certain that this second baby would be a perfect match. When the article appeared in the Times, many people wondered if it is wrong to create a child for "spare parts." News reports conjured up fears of "Frankenstein medicine." State and federal legislatures threatened laws to ban research using embryos. A fictional version of this dilemma appears in Jodi Picoult's novel My Sister's Keeper. Picoult, a novelist drawn to such charged topics as teen suicide and statutory rape, takes up this bioethics narrative of parents desperate to save a sick child through the promise of genetic engineering. Conceived in that way, Anna Fitzgerald has served since her birth as the perfectly matched donor for her sister, Kate, who has leukemia, supplying stem cells, bone marrow, and blood whenever needed. Now, though, as her sister's organs begin to fail, the feisty Anna balks when she is expected to donate a kidney. Through alternating points of view, Picoult exposes the family's moral, emotional, and legal dilemmas, asking if it can be right to use -- and perhaps sacrifice -- one child to save the life of another. The story draws the reader in with its interesting premise -- one sister's vital needs pitted against the other's -- but ultimately disintegrates within a melodramatic plot that strands its underdeveloped characters. 
Why is the girls' mother so blind and deaf to Anna's misgivings about her role as donor? How can we possibly believe the contrived ending, which circumvents the parents' need to make a difficult moral choice? Ultimately the novel trivializes what deserves to be portrayed as a profoundly painful Sophie's choice, using the contentious bioethics issue as grist for a kind of formulaic writing. While authors like Betts and Picoult have examined ethical dilemmas of the new science in a style that might be called realistic family drama, others lean toward science fiction, imagining dystopian futures that are chillingly based on the present. Often prescient, they reflect our unarticulated fears, mirroring our rising anxiety about where we are going and who we are becoming. In addressing concerns about cloning, artificial reproduction, and organ donation, these novels join an even broader, older genre, the dystopian novels of the biological revolution. In 1987 Walker Percy published The Thanatos Syndrome, a scathing fictional exploration of what the then-new psychotropic drugs might mean to our understanding of being human. In this last and darkest novel by the physician-writer, the psychiatrist Tom More stumbles on a scheme to improve human behavior by adding heavy sodium to the water supply. After all, the schemers argue, what fluoride has done for oral hygiene, we might do for crime, disease, depression, and poor memory! More is intrigued but ultimately aghast at the consequences: humans reduced to lusty apes with no discernible soul or even self-consciousness. Percy cleverly captures many of our qualms about such enhancement therapies in a fast-paced plot that reads like a thriller. 
Many readers, however, feel that this sixth and final novel is the least compelling of Percy's oeuvre, emphasizing his moral outrage over the excesses of science at the expense of a protagonist's spiritual and emotional journey that had previously been the hallmark of his highly acclaimed fiction. With less dark humor but equal verve, Margaret Atwood's Oryx and Crake chronicles the creation of a would-be paradise shaped and then obliterated by genetic manipulation. Echoes of her earlier best seller The Handmaid's Tale (Houghton Mifflin, 1986) reverberate through this postapocalyptic world set in an indeterminate future, where Snowman, the proverbial last man alive, describes how the primal landscape came to be after the evisceration of bioengineering gone awry. A modern-day Robinson Crusoe, Snowman is marooned on a parched beach, stranded between the polluted water and a chemical wasteland that has been stripped of humankind by a virulent plague. Once he melts away, even the vague memories of what was will have disappeared. As in other works of science fiction, while its plot complications drive the narrative, its powerful conceptual framework dominates the stage. For all it lacks in character complexity and realistic psychological motivations, this 17th book of Atwood's fiction has a captivating Swiftian moral energy, announced in the opening quotation, from Gulliver's Travels: "My principal design was to inform you, and not to amuse you." Readers, however, might wish that Atwood had made a stronger effort to amuse us. Her ability to sustain our interest is challenged by the story's unremitting bleakness and the lack of real moral depth to its few characters. Even with its weaknesses, Atwood's is a powerful cautionary tale, similar in some ways to Caryl Churchill's inventive play A Number (2002).
That drama is constructed of a series of dialogues in which a son confronts his father (named Salter) with the news that he is one of "a number" of clones (all named Bernard). Years ago, grieving the death of his wife, Salter was left to raise a difficult son, lost to him in some deep way, whom he finally put into "care." Sometime later, wanting a replacement for the lost son, he had the boy cloned. Without his knowledge, 19 others were created, too. Now Salter hears not only the emotional pain and anger of his original troubled son, but also the harrowing psychological struggles of several of the cloned Bernards. Salter responds with a mix of anguish and resignation as he faces the consequences of decisions he once made without much thought. This strange play winds through an ethical maze as each of the characters desperately tries to come to some livable terms with what genetic engineering has wrought. The drama is inventive in both its staccato elliptical dialogues and its sheer number of existential and ethical ideas. In the end, though, the characters never emerge as human, never engage us sufficiently to make us care about their ordeals with selfhood and love. When Salter says to one of the cloned Bernards, "What they've done they've damaged your uniqueness, weakened your identity," it is difficult to believe that they were ever capable of possessing either. Although Churchill's nightmare may seem especially odd, her tale of violence, deception, and loss resonates with those of Betts, Ishiguro, and Picoult. What if you might lose your child? If the means were available, would you take any chance, do anything, to save her? Or, if lost to you, to bring him back? All of these stories have in common their underlying questions about where bioengineering is leading us, what kinds of choices it asks us to make, and where the true costs and benefits lie. 
What makes the stories different from other forms of ethical inquiry is their narrative form, their way of knowing as literature. John Gardner reminds us that novels are a form of moral laboratory. In the pages of well-written fiction, we explore the way a unique human being in a certain set of circumstances makes moral decisions and lives out their consequences. Some of the novels being written now offer valuable cautionary tales about what is at stake in our current forays into new science and technology, asking us, as Ishiguro does in Never Let Me Go, What is immutable? What endures? What is essential about being human? Where does the essential core of identity lie? Does it derive from nature or nurture, from our environment or genetics? But the best go further. As Ishiguro's does, they take the bioethics issue as a fundamental moral challenge. Instead of using an aspect of bioethics as an engine to drive the plot, some authors succeed in using it as a prism that shines new light onto timeless questions about what it means to be fully human. At its heart, Ishiguro's tale has very little to do with the specific current controversies over cloning or genetic engineering or organ transplantation, any more than The Remains of the Day has to do with butlering or A Pale View of Hills has to do with surviving the atomic bomb. By the end of the novel, we discover that Never Let Me Go is, if cautionary, also subtler and more subversive than we suspected. Tommy and Ruth are already gone, and Kathy herself is ready to begin the "donations" that will lead to her own "completion." During one of her long road trips, she stops the car for "the only indulgent thing" she's ever done in a life defined by duty and "what we're supposed to be doing." Looking out over an empty plowed field, just this once she allows herself to feel an inkling of what she's lost and all she will never have.
At this moment, we realize ourselves in Kathy, and we see her foreshortened and stunted life as not so very different from our own. The biological revolution's greatest surprise of all may be that its dilemmas are not really new. Instead, it may simply deepen the ones we've always faced about how to find meaning in our own lives and the lives of others. Martha Montello is an associate professor in the department of history and philosophy of medicine and director of the Writing Resource Center in the School of Medicine at the University of Kansas. She also lectures on literature and ethics at the Harvard-MIT Division of Health Sciences & Technology, and co-edited Stories Matter: The Role of Narrative in Medical Ethics (Routledge, 2002).

WORKS DISCUSSED IN THIS ESSAY
My Sister's Keeper, by Jodi Picoult (Atria, 2004)
Never Let Me Go, by Kazuo Ishiguro (Knopf, 2005)
A Number, by Caryl Churchill (a 2002 play published by Theatre Communications Group in 2003)
Oryx and Crake, by Margaret Atwood (Nan A. Talese, 2003)
Souls Raised From the Dead, by Doris Betts (Knopf, 1994)
The Thanatos Syndrome, by Walker Percy (Farrar, Straus and Giroux, 1987)

From checker at panix.com Mon May 9 21:05:22 2005 From: checker at panix.com (Premise Checker) Date: Mon, 9 May 2005 17:05:22 -0400 (EDT) Subject: [Paleopsych] CHE: 'The Internet and the Madonna: Religious Visionary Experience on the Web' Message-ID: 'The Internet and the Madonna: Religious Visionary Experience on the Web' The Chronicle of Higher Education, 5.5.13 http://chronicle.com/weekly/v51/i36/36a01501.htm By NINA C. AYOUB Minutes after the death of John Paul II, church officials sent a mass e-mail message to the news media alerting them. The high-tech missive was fitting for a pope who had embraced the Internet. But while the Vatican is now virtual, the Catholic devout are even more "wired," especially in regard to Marian apparitions.
As Paolo Apolito notes in The Internet and the Madonna: Religious Visionary Experience on the Web (University of Chicago Press), there are just a handful of church-approved sightings of the Virgin Mary. However, beyond Lourdes, Fatima, and other recognized visionary sites, there has been a string of claimed appearances that have gained renown. Perhaps the most famous is Medjugorje, a village in Bosnia, where six children said they saw Mary on a hillside. Though it predated the World Wide Web, the 1981 Medjugorje sighting was the first to take rapid advantage of a globalized media, says the scholar, an Italian anthropologist at the University of Salerno and the University of Rome III. Within a decade, some 10 million pilgrims had descended on the site. Medjugorje became the reference point for something of an apparition boom, Mr. Apolito says, also creating a "mechanism of reciprocal confirmation," a kind of "I'll consider your apparition if you'll consider mine." Today, he writes, the world of visionaries blends a neo-Baroque belief in miracles and wonders with elaborate use of the latest technologies. Yet the technical is moving the epiphanic person to the periphery: "Technology, by allowing and legitimizing every form of the extraordinary, winds up imposing the wonder of itself. ... " To demonstrate some of that usurping, Mr. Apolito turns first to photography and the urge to document. For example, a visionary communing with Mary on a hillside may also have to compete with the whirr of hundreds of cameras, each trying to capture an apparition or at least a dancing sun. There are also perils for the worshipful Web surfer, says Mr. Apolito. Even on Web rings of sites devoted to the Virgin, it can be a short trip from the devout to the debauched. Four clicks, he found, as he followed links via generic site banners. There are also parody sites such as the Miraculous Winking Jesus as well as sites that use apparition-talk to express violent anti-Catholicism.
But beyond porn, parody, and bigotry, the horizontal landscape of the Web risks something else: disconnect. The scholar, who did fieldwork at Oliveto Citra, the Italian site of a claimed apparition, explores what gets lost when a visionary experience loses the context of local religious culture and is fragmented in the ether. From checker at panix.com Mon May 9 21:05:51 2005 From: checker at panix.com (Premise Checker) Date: Mon, 9 May 2005 17:05:51 -0400 (EDT) Subject: [Paleopsych] Telegraph: What's 'national' about national arts organisations? Message-ID: What's 'national' about national arts organisations? http://www.telegraph.co.uk/arts/main.jhtml?xml=/arts/2005/05/07/baoh07.xml&sSheet=/arts/2005/05/07/ixartleft.html (Filed: 07/05/2005) Andrew O'Hagan investigates What is the purpose of a national theatre, a national opera or ballet company, a national orchestra, or a national gallery? What is the meaning of the word "national" in those famous organisations? Is it simply a matter of pride and funding, an indication that those particular institutions have the backing of an entire nation, its hopes and dreams of excellence? Or is it more complicated than that: do we expect these arts organisations, above all others, to embody in their work something essential about the nation? Should the Welsh National Opera, for instance, seek to capture a vision of international musical quality, or a vision of what it really means to be Welsh - or both? In 1899, WB Yeats, Augusta Gregory and Edward Martyn, the founders of the Irish National Theatre, declared that the job of the new theatre was "to bring upon the stage the deeper thoughts and emotions of Ireland".
This was almost 20 years before Ireland's war of independence, but the Abbey, the theatre that grew out of their declaration, would provide the platform and the occasion for many of the great debates about freedom, responsibility, religion and modernity, debates that shaped the new nation and are still shaping it today. In Ireland, a taxpayer-funded conversation is seen to exist between art and the state, a conversation whose difficulties are part of its richness. This was true in the Czech Republic (which ended up with a playwright for a president); it was true in Spain after the death of Franco, which invested in the arts as a way of opening up freedom of expression; it was true in parts of Australia, where national museums began to blush at the idea of excluding aboriginal art; and it is true in post-war Germany and post-glasnost Russia, where national cultural institutions have allowed not just a conversation but a means of national cleansing about the repressions and horrors of the past. In each of those places, national art institutions played a part in making life new. What about Britain? Do we have reason to believe that cultural institutions bearing the word "national" or "royal" or "British" or "English" or "Scottish" or "Welsh" are engaging us in questions about who we are or who we are becoming? To some people's minds, such an effort would be spurious in the extreme. To them, the purpose of the Royal Opera House is to furnish a version of, say, Das Rheingold which fulfils the virtues of the work and stands up well to international standards. These are the things one can rely on a national opera company to do. The task bears some comparison with football in its modern form. A club such as Celtic has many international players; it is super-funded, super-commercial, and super-branded; it can stand up to international competition on the field. Yet, one might ask, what has any of this got to do with Glasgow? Does the team have anything to do with Glasgow? 
What relationship does the corporate image bear to the traditions that made the team and the community that supports it? Like the few crown jewels or the odd Stone of Scone, national arts companies are often, I feel, adornments that nations want to have in order to seem more like nations, but which can't bear the self-questioning that should come with a truly alive national company. What is achieved, for example, by the excellent Scottish Ballet being called Scottish Ballet instead of the Ashley Page Dance Company, which is more descriptive of who they are? The company is based in Glasgow, as it has been since 1969 when Peter Darrell took his Western Ballet Theatre there from Bristol. The company is not Scottish in its bones, so why should it matter that the company hangs on to the national title? It seems to matter, though. People want to believe that their national arts organisations speak volumes about the civilised nature of the country they live in or come from, the country whose name the company bears. It is like the highest form of cultural branding: your country is a logo, the ultimate stamp of quality. You see how this has been taken to extremes in America, where the use of that word, "America", immediately seems to confer on what follows an almost unutterable level of power and prestige. Patriotism has, in other words, taken over the meaning of the word "national": we use it to denote a settled imperial excellence, not the situating of a higher conversation between the arts and the state. That conversation exists, of course, in the streets, in the newspapers, but is not advertised by national companies as part of what they do. 
Perhaps the concentration on building "partnerships" and sponsorships has represented a form of privatisation of British culture by stealth; none of the cultural boffins I spoke to this week could quite define what it was that made the British Museum "British" or the National Gallery "National": they spoke of British values, but couldn't really say how these affected the life of the institutions. (It's interesting that the same people have no hesitation when asked the same question about the BBC.) My own guess is that the National Gallery of Art in Washington, for all its Van Goghs and Matisses, tells a different story from the National Gallery in London, for all its Van Goghs and Matisses. They each tell a particular story, but we might ask for much more of the particularities. We might say, in the spirit of the Irish, what does this material and the manner of its housing have to do with us? These days, we may mean less than we think when we speak of "national" this and that. I mean, would it be problematic if English National Ballet, who are near bankruptcy but are otherwise a good and fully-functioning company with no very well-defined home, were to become, as has been suggested, the National Ballet of Wales? It's rather like the situation at the founding of Scottish Ballet, except that, this time, many people are hitting the roof at the idea that the Welsh National Ballet Company might suffer from being, well, a bit un-Welsh. But why shouldn't a newly-designated ENB be good at getting into the intellectual scrum of Wales's modern make-up? Peter Darrell, who founded Scottish Ballet, was born in Surrey, and that didn't stop him dealing with the wealth of Scotland's folk heritage in his ballets. In fact, as in most areas of national life, a bit of outsiderism can sharpen the instincts, so long as they don't assume, like most modern footballers, that one piece of ground is the same as another. 
Each piece of ground is different, and the arts should be an ongoing investigation of that difference, an attempt to beautify and enrich the native gardens without becoming too conscious of the fences that surround them. From checker at panix.com Mon May 9 21:05:42 2005 From: checker at panix.com (Premise Checker) Date: Mon, 9 May 2005 17:05:42 -0400 (EDT) Subject: [Paleopsych] Science Daily: Moderate Alcohol Consumption Enhances The Formation Of New Nerve Cells Message-ID: Moderate Alcohol Consumption Enhances The Formation Of New Nerve Cells http://www.sciencedaily.com/releases/2005/05/050508211456.htm Moderate alcohol consumption over a relatively long period of time can enhance the formation of new nerve cells in the adult brain. The new cells could prove important in the development of alcohol dependency and other long-term effects of alcohol on the brain. The findings are published by Karolinska Institutet. The study, which was carried out on mice, examined alcohol consumption corresponding to that found in normal social situations. The results show that moderate drinking enhances the formation of new cells in the adult brain. The cells survive and develop into nerve cells in the normal manner. No increase in neuronal atrophy, however, could be demonstrated. It is generally accepted these days that new nerve cells are continually being formed in the adult brain. One suggestion is that these new neurons could be important for memory and learning. The number of new cells formed is governed by a number of factors such as stress, depression, physical activity and antidepressants. "We believe that the increased production of new nerve cells during moderate alcohol consumption can be important for the development of alcohol addiction and other long-term effects of alcohol on the brain," says associate professor Stefan Brené. 
"It is also possible that it is the ataractic effect of moderate alcohol consumption that leads to the formation of new brain cells, much in the same way as with antidepressive drugs." The researchers are now following up these exciting findings to understand the role that the new nerve cells thus formed play in cerebral activity. Publication: Moderate ethanol consumption increases hippocampal cell proliferation and neurogenesis in the adult mouse Aberg E, Hofstetter C, Olson L, Brené S. Int J Neuropsychopharm Online May 21, 2005, see http://journals.cambridge.org From checker at panix.com Mon May 9 21:06:13 2005 From: checker at panix.com (Premise Checker) Date: Mon, 9 May 2005 17:06:13 -0400 (EDT) Subject: [Paleopsych] BBC: Lords back 'designer baby' choice Message-ID: Lords back 'designer baby' choice http://newsvote.bbc.co.uk/mpapps/pagetools/print/news.bbc.co.uk/1/hi/health/4492345.stm 5.4.28 The creation of "designer babies" to treat sick siblings is lawful, the Law Lords have ruled, upholding an earlier court decision. The case centred on six-year-old Zain Hashmi, whose parents wanted a baby with a specific tissue type to help treat his debilitating blood disorder. His parents had begun treatment to create a baby, but have so far failed. Campaigners had asked the Lords to overturn the appeal court's 2003 ruling that allowed the couple to proceed. The group Comment on Reproductive Ethics (Core) asked the House of Lords to examine the Human Fertilisation and Embryology Act 1990 and to decide whether tissue-typing of the sort used by the Hashmis was legal. On Thursday, five Law Lords ruled unanimously that the practice of such tissue typing could be authorised by the Human Fertilisation and Embryology Authority (HFEA). 
The ruling, saying that HFEA was acting lawfully and appropriately in considering and granting a licence for pre-implantation tissue-typing, was welcomed by the authority. "We are pleased with the clarity that this ruling brings for patients," the HFEA said. Mrs Hashmi said the family appreciated the support they had received throughout the legal process. "It's nice to know that society has now embraced the technology to cure the sick and take away the pain. "We feel this ruling marks a new era and we are happy to move forwards now. We hope and pray that we get what we need for Zain." To stay alive Zain, who suffers from beta thalassaemia major, currently has to have blood transfusions every month, and drugs fed by a drip for 12 hours a day. Technology now allows doctors to select embryos with perfect tissue for a transplant operation. The High Court had imposed a ban on the treatment in December 2002 but this was overturned in the Court of Appeal. That decision allowed parents Raj and Shahana to go ahead with treatment to produce a sibling with the same tissue type as their son. In theory, this would have allowed them to take stem cells from the new baby's umbilical cord and transplant them into Zain. Tragically, however, Mrs Hashmi has had a series of miscarriages. The ruling "saddened" anti-abortion campaigners Life. "Today's decision from the House of Lords takes us further down the slippery slope in creating human beings to provide spare parts for another. "The best of ends, namely to cure a sick child, does not justify the means." 
From shovland at mindspring.com Wed May 11 00:12:54 2005 From: shovland at mindspring.com (Steve Hovland) Date: Tue, 10 May 2005 17:12:54 -0700 Subject: [Paleopsych] New research raises questions about buckyballs and the environment Message-ID: <01C55583.81CE12D0.shovland@mindspring.com> In a challenge to conventional wisdom, scientists have found that buckyballs dissolve in water and could have a negative impact on soil bacteria. The findings raise new questions about how the nanoparticles might behave in the environment and how they should be regulated, according to a report scheduled to appear in the June 1 print issue of the American Chemical Society's peer-reviewed journal Environmental Science & Technology. ACS is the world's largest scientific society. A buckyball is a soccer ball-shaped molecule made up of 60 carbon atoms. Also known as fullerenes, buckyballs have recently been touted for their potential applications in everything from drug delivery to energy transmission. Yet even as industrial-scale production of buckyballs approaches reality, little is known about how these nano-scale particles will impact the natural environment. Recent studies have shown that buckyballs in low concentrations can affect biological systems such as human skin cells, but the new study is among the earliest to assess how buckyballs might behave when they come in contact with water in nature. Scientists have generally assumed that buckyballs will not dissolve in water, and therefore pose no imminent threat to most natural systems. "We haven't really thought of water as a vector for the movement of these types of materials," says Joseph Hughes, Ph.D., an environmental engineer at Georgia Tech and lead author of the study. 
But Hughes and his collaborators at Rice University in Texas have found that buckyballs combine into unusual nano-sized clumps - which they refer to as "nano-C60" - that are about 10 orders of magnitude more soluble in water than the individual carbon molecules. In this new experiment, they exposed nano-C60 to two types of common soil bacteria and found that the particles inhibited both the growth and respiration of the bacteria at very low concentrations - as little as 0.5 parts per million. "What we have found is that these C60 aggregates are pretty good antibacterial materials," Hughes says. "It may be possible to harness that for tremendously good applications, but it could also have impacts on ecosystem health." Scientists simply don't know enough to accurately predict what impact buckyballs will have on the environment or in living systems, which is exactly why research of this type needs to be done in the early stages of development, Hughes says. He suggests that his findings clearly illustrate the limitations of current guidelines for the handling and disposal of buckyballs, which are still based on the properties of bulk carbon black. "No one thinks that graphite and diamond are the same thing," Hughes says. They're both bulk carbon, but they are handled in completely different ways. The same should be true for buckyballs, according to Hughes. These particles are designed to have unique surface chemistries, and they exhibit unusual properties because they are at the nanometer scale - one billionth of a meter, the range where molecular interactions and quantum effects take place. It is precisely these characteristics that make them both so potentially useful and hazardous to biological systems. "I think we should expect them to behave differently than our current materials, which have been studied based on natural bulk forms," Hughes says. "Learning that C60 behaves differently than graphite should be no surprise." 
Overall, the toxicological studies that have been reported in recent years are a signal that the biological response to these materials needs to be considered. "That doesn't mean that we put a halt on nanotechnology," Hughes says. "Quite the opposite." "As information becomes available, we have to be ready to modify these regulations and best practices for safety," he continues. "If we're doing complementary studies that help to support this line of new materials and integrate those into human safety regulations, then the industry is going to be better off and the environment is going to be better off." The American Chemical Society is a nonprofit organization, chartered by the U.S. Congress, with an interdisciplinary membership of more than 158,000 chemists and chemical engineers. It publishes numerous scientific journals and databases, convenes major research conferences and provides educational, science policy and career programs in chemistry. Its main offices are in Washington, D.C., and Columbus, Ohio. From shovland at mindspring.com Wed May 11 13:51:53 2005 From: shovland at mindspring.com (Steve Hovland) Date: Wed, 11 May 2005 06:51:53 -0700 Subject: [Paleopsych] A Vision of Terror Message-ID: <01C555F5.EAC11CC0.shovland@mindspring.com> By John Gartner May 10, 2005 A new generation of software called Starlight 3.0, developed for the Department of Homeland Security by the Pacific Northwest National Laboratory (PNNL), can unravel the complex web of relationships between people, places, and events. And other new software can even provide answers to unasked questions. Anticipating terrorist activity requires continually decoding the meaning behind countless emails, Web pages, financial transactions, and other documents, according to Jim Thomas, director of the National Visualization and Analytics Center (NVAC) in Richland, Washington. 
Federal agencies participating in terrorism prevention monitor computer networks, wiretap phones, and scour public records and private financial transactions into massive data repositories. "We need technologies to deal with complex, conflicting, and sometimes deceptive information," says Thomas at NVAC, which was founded last year to detect and reduce the threats of terrorist attacks. In September 2005, NVAC, a division of the PNNL, will release its Starlight 3.0 visual analytics software, which graphically displays the relationships and interactions between documents containing text, images, audio, and video. The previous generation of software was not fully visual and contained separate modules for different functions. It has been redesigned with an enhanced graphical interface that allows intelligence personnel to analyze larger datasets interactively, discard unrelated content, and add new streams of data as they are received, according to John Risch, a chief scientist at Pacific Northwest National Laboratory. Starlight quadruples the number of documents that can be analyzed at one time -- from the previous 10,000 to 40,000 -- depending on the type of files. It also permits multiple visualizations to be opened simultaneously, which allows officers for the first time to analyze geospatial data within the program. According to Risch, a user will be able to see not only when but where and in what proximity to each other activities occurred. "For tracking terrorist networks, you can simultaneously bring in telephone intercepts, financial transactions, and other documents, all into one place, which wasn't possible before," Risch says. The Windows-based program describes and stores data in the XML (extensible markup language) format and automatically converts data from other formats, such as databases and audio transcriptions. 
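As a rough illustration of the kind of XML-backed record keeping the article describes, the sketch below converts one heterogeneous record into an XML document with explicit links to related documents. Every tag, attribute, and field name here is invented for illustration; Starlight's actual schema is not described in the article.

```python
import xml.etree.ElementTree as ET

def record_to_xml(record):
    """Convert a dict describing one collected document to an XML element.

    The tag and attribute names ("document", "links", "target", etc.)
    are hypothetical, chosen only to show the general idea of storing
    mixed-format data and its inter-document links in XML.
    """
    doc = ET.Element("document", id=record["id"], type=record["type"])
    for key in ("source", "timestamp", "location"):
        if key in record:
            ET.SubElement(doc, key).text = str(record[key])
    # Record links to other documents so a viewer can draw relationships.
    links = ET.SubElement(doc, "links")
    for target in record.get("links", []):
        ET.SubElement(links, "link", target=target)
    return doc

# Example record, e.g. converted from an audio transcription:
record = {
    "id": "doc-001",
    "type": "transcript",
    "source": "phone-intercept",
    "timestamp": "2005-05-10T14:00:00",
    "links": ["doc-002", "doc-003"],
}
xml_text = ET.tostring(record_to_xml(record), encoding="unicode")
```

The point of a common XML representation, as the article suggests, is that databases, transcriptions, and web pages can all be normalized into one linkable form before visualization.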
Risch says that as the volume of data being collected increases, the software has to be more efficient in visually representing the complex relationships between documents. "Starlight can show all the links found on a Web page, summarize the topics discussed on those pages and how they are connected [to the original page]." From shovland at mindspring.com Wed May 11 16:38:10 2005 From: shovland at mindspring.com (Steve Hovland) Date: Wed, 11 May 2005 09:38:10 -0700 Subject: [Paleopsych] Monoatomic Elements Message-ID: <01C5560D.25BF1310.shovland@mindspring.com> Monoatomic elements are nothing more than elements which are chemically isolated, i.e. single atoms rather than, say, 60 atoms of carbon or 34 atoms of silicon bound together in something called a buckminsterfullerene or a knobbier version of the same. The significance lies in the fact that when a single-element metal progresses from a normal metallic state to a monoatomic state, it passes through a series of chemically different states. These include: . An alloy of numerous atoms of the same element, which exhibits all the characteristics normally associated with the metal: electrical conductivity, color, specific gravity, density, and so forth. The atom's intrinsic temperature might be room temperature. . A combination of significantly fewer atoms of the same element, which no longer exhibits all of the characteristics normally associated with the metal. For example, the electrical conductivity or color might change. The atom's intrinsic temperature drops, for example, to 50 to 100 K (roughly two hundred degrees below zero Celsius). . A Microcluster of far fewer atoms -- typically on the order of less than one hundred atoms, and as few as a dozen or so. The metal characteristics begin to fall off one by one until the so-called metal is hardly recognizable. The intrinsic temperature has now fallen to the range of 10 to 20 K, only slightly above Absolute Zero. . 
A Monoatomic form of the element -- in which each single atom is chemically inert and no longer possesses normal metallic characteristics, and in fact may exhibit extraordinary properties. The atom's intrinsic temperature is now about 1 K, close enough to Absolute Zero that Superconductivity is a virtually automatic condition. A case in point is Gold. Gold is normally a yellow metal with a precise electrical conductivity and other metallic characteristics, but its metallic nature begins to change as the individual gold atoms form chemical combinations of increasingly small numbers. At a microcluster stage, there might be 13 atoms of gold in a single combination. Then, dramatically, at the monoatomic state, gold becomes a forest green color, with a distinctly different chemistry. Its electrical conductivity goes to zero even as its potential for Superconductivity becomes maximized. Monoatomic gold can exhibit substantial variations in weight, as if it were no longer fully extant in space-time. Other elements which have many of these same properties are the Precious Metals, which include Ruthenium, Rhodium, Palladium, Silver, Osmium, Iridium, Platinum, and Gold. All of these elements have, to a greater or lesser degree, the same progression as gold in continuously reducing the number of atoms chemically connected. Many of these precious elements are found in the same ore deposits, and in their monoatomic form are often referred to as the White Powder of Gold. Monoatomic elements apparently exist in nature in abundance. Precious Metal ores are, however, not always assayed so as to identify them as such. Gold miners, for example, have found what they termed "ghost gold" -- "stuff" that has the same chemistry as gold, but which was not yellow, did not exhibit normal electrical conductivity, and was not identifiable with ordinary emission spectroscopy. Thus it was more trouble than it was worth, and generally discounted. 
However, in a technique called "fractional vaporization", the monoatomic elements can be found and clearly identified via a more advanced emission spectroscopy. This was first discussed by David Radius Hudson, who was attempting to separate gold and silver from raw ore -- but was hindered by the ghost gold, which had no apparent intrinsic value. The process involved placing a sample on a standard carbon electrode, running a second carbon electrode down to a position just above the first, and then striking a Direct Current arc across the electrodes. The electrical intensity of the arc would ionize the elements in the sample such that each of the elements would give off specific, identifying frequencies of light. By measuring those frequencies (the spectrum of the element or elements), one could then identify which elements were in the sample. Typically, such spectroscopic analysis involves striking the arc for 10 to 15 seconds, at the end of which the carbon electrodes are effectively burned away. According to the majority of American spectroscopists, any sample can be ionized and read within those 15 seconds. In the advanced technique, the carbon electrodes are sheathed with an inert gas (such as Argon). This allows the emission spectroscopy process to be continued far beyond the typical 15 seconds, in order to fully identify all of the elements in their various forms. When this was done, in the first seconds the ghost gold might be identified as iron, silicon, and aluminum. But as the process continued for as long as 300 seconds, palladium began to be read at about 90 seconds, platinum at 110 seconds, ruthenium at 130 seconds, rhodium at 145 seconds, iridium at 190 seconds, and osmium at 220 seconds. These latter readings were the monoatomic elements. Commercially available grades of these metals were found to include only about 15% of the emission spectroscopic readings. 
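The claimed burn-time sequence above can be restated as a simple lookup table. To be clear, this sketch merely encodes the assertions made in the text (which are not verified measurements); it shows only why a standard 15-second burn would, on this account, miss all six readings while an extended 300-second burn would capture them.

```python
# Seconds into the arc burn at which each element is *claimed* to appear
# in the extended, argon-sheathed run. These numbers simply restate the
# text's assertions; they are not measured or validated data.
CLAIMED_READINGS = {
    90: "palladium",
    110: "platinum",
    130: "ruthenium",
    145: "rhodium",
    190: "iridium",
    220: "osmium",
}

def elements_seen_by(elapsed_seconds):
    """Return the elements claimed to be readable once the burn has run
    for elapsed_seconds, in order of appearance."""
    return [name for t, name in sorted(CLAIMED_READINGS.items())
            if t <= elapsed_seconds]

# A conventional burn stops around 15 seconds, before any claimed reading:
standard = elements_seen_by(15)
# The extended 300-second burn would span all six claimed readings:
extended = elements_seen_by(300)
```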
The mining activity of what is considered the best deposit in the world for six of these elements (Pd, Pt, Os, Ru, Ir, and Rh) yields one-third of one ounce of all these precious metals per ton of ore. But this is based on the standard spectroscopic analysis. When the burn is continued for up to 300 seconds, the same ores might easily yield emission lines suggesting: 6 to 8 ounces of palladium, 12 to 13 ounces of platinum, 150 ounces of osmium, 250 ounces of ruthenium, 600 ounces of iridium, and 1200 ounces of rhodium! Over 2200 ounces per ton, instead of 1/3 of an ounce per ton! [Keep in mind that rhodium typically sells for $3,000/ounce, while gold sells for $300/ounce!] The distinguishing characteristic between the first and second readings of the emission spectroscopy for the precious metals is that all of them come in two basic forms. The first is the traditional form of the metal: yellow Gold, for example. The second is the very non-traditional form of the metal: the monoatomic state. The chemistries and physics of these two different states of these metals are radically different. More importantly, when the atoms are in the monoatomic state, things really begin to get interesting! A key to understanding monoatomic elements is to recognize that the monoatomic state results in a rearrangement of the electronic and nuclear orbits within the atom itself. This is the derivation of the term: Orbitally-Rearranged Monoatomic Element (ORME). A monoatomic state implies a situation where an atom is "free from the influence of other atoms." Is this, perhaps, a violation of some very basic, absolutely fundamental law of the universe -- which says that nothing is separate? If such a law constituted reality, then a necessary condition for monoatomic elements to even exist would require them to be superconductive, just in order to link them through all distance and time to other superconducting monoatomic elements. This would be necessary in order to prevent separation. 
The question is whether separation is but the Ultimate Illusion? From waluk at earthlink.net Wed May 11 19:13:56 2005 From: waluk at earthlink.net (G. Reinhart-Waller) Date: Wed, 11 May 2005 12:13:56 -0700 Subject: [Paleopsych] Monoatomic Elements In-Reply-To: <01C5560D.25BF1310.shovland@mindspring.com> References: <01C5560D.25BF1310.shovland@mindspring.com> Message-ID: <42825974.1010401@earthlink.net> Steve Hovland wrote: >>A key to understanding monoatomic elements is to recognize that the monoatomic state results in a rearrangement of the electronic and nuclear orbits within the atom itself. This is the derivation of the term: Orbitally-Rearranged Monoatomic Element (ORME ). A monoatomic state implies a situation where an atom is "free from the influence of other atoms." Is this, perhaps, a violation of some very basic, absolutely fundamental law of the universe -- which says that nothing is separate? If such a law constituted reality, then a necessary condition for monoatomic elements to even exist would require them to be superconductive, just in order to link them through all distance and time to other superconducting monoatomic elements. This would be necessary in order to prevent separation. The question is whether separation is but the Ultimate Illusion?>> Hi Steve, this is absolutely fascinating. Could you possibly elaborate on your last sentence....IOW, are you saying that separation doesn't actually occur and is only virtual? Regards, Gerry Reinhart-Waller -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From checker at panix.com Thu May 12 00:38:26 2005 From: checker at panix.com (Premise Checker) Date: Wed, 11 May 2005 20:38:26 -0400 (EDT) Subject: [Paleopsych] NYT Op-Ed: The Tipping Point Message-ID: The Tipping Point http://www.nytimes.com/2005/05/11/opinion/11board.html By BELINDA BOARD London JOHN BOLTON, President Bush's nominee to be ambassador to the United Nations, has been described as dogmatic, abusive to his subordinates and a bully. Yet Mr. Bush has said that John Bolton is the right man at the right time. Can these seemingly contradictory statements both be accurate? Yes. The reality is that sometimes the characteristics that make someone successful in business or government can render them unpleasant personally. What's more astonishing is that those characteristics when exaggerated are the same ones often found in criminals. There has been anecdotal and case-study evidence suggesting that successful business executives share personality characteristics with psychopaths. The question is, are the characteristics that make up personality disorders fundamentally different from the characteristics of extreme personalities we see in everyday life, or do they differ only in degree? In 2001, I compared the personality traits of 39 high-ranking business executives in Britain with psychiatric patients and criminals with a history of mental health problems. The business managers completed a standard clinical personality-disorder diagnostic questionnaire and then were interviewed. The information on personality disorders among criminals and psychiatric patients had been gathered by local clinics. Our sample was small, but the results were definitive. If personality and its pathology are distinct from each other, we should have found different levels of personality disorders in these diverse populations. We didn't. The character disorders of the business managers blended together with those of the criminals and mental patients. 
In fact, the business population was as likely as the prison and psychiatric populations to demonstrate the traits associated with narcissistic personality disorder: grandiosity, lack of empathy, exploitativeness and independence. They were also as likely to have traits associated with compulsive personality disorder: stubbornness, dictatorial tendencies, perfectionism and an excessive devotion to work. But there were some significant differences. The executives were significantly more likely to demonstrate characteristics associated with histrionic personality disorder, like superficial charm, insincerity, egocentricity and manipulativeness. They were also significantly less likely to demonstrate physical aggression, irresponsibility with work and finances, lack of remorse and impulsiveness. What does this tell us? It tells us that if reports of Mr. Bolton's behavior are accurate then both his supporters and critics could be right. It also tells us that characteristics of personality disorders can be found throughout society and are not just concentrated in psychiatric or prison hospitals. Each characteristic by itself isn't necessarily a bad thing. Take a basic characteristic like influence: it's an asset in business. Add to that a smattering of egocentricity, a soupçon of grandiosity, a smidgen of manipulativeness and lack of empathy, and you have someone who can climb the corporate ladder and stay on the right side of the law, but still be a horror to work with. Add a bit more of those characteristics plus lack of remorse and physical aggression, and you have someone who ends up behind bars. As we all know, public figures can exhibit extreme characteristics. Often it is these characteristics that have propelled them to prominence, yet these same behaviors can cause untold human wreckage. What's important is the degree to which a person has each ingredient or characteristic and in what configuration. Congress will try to decide whether Mr. 
Bolton has the right combination. Belinda Board is a clinical psychologist based at the University of Surrey and a consultant on organizational psychology. From checker at panix.com Thu May 12 00:38:37 2005 From: checker at panix.com (Premise Checker) Date: Wed, 11 May 2005 20:38:37 -0400 (EDT) Subject: [Paleopsych] NYT: AIDS Now Compels Africa to Challenge Widows' 'Cleansing' Message-ID: AIDS Now Compels Africa to Challenge Widows' 'Cleansing' http://www.nytimes.com/2005/05/11/international/africa/11malawi.html By SHARON LaFRANIERE MCHINJI, Malawi - In the hours after James Mbewe was laid to rest three years ago, in an unmarked grave not far from here, his 23-year-old wife, Fanny, neither mourned him nor accepted visits from sympathizers. Instead, she hid in his sister's hut, hoping that the rest of her in-laws would not find her. But they hunted her down, she said, and insisted that if she refused to exorcise her dead husband's spirit, she would be blamed every time a villager died. So she put her two small children to bed and then forced herself to have sex with James's cousin. "I cried, remembering my husband," she said. "When he was finished, I went outside and washed myself because I was very afraid. I was so worried I would contract AIDS and die and leave my children to suffer." Here and in a number of nearby nations including Zambia and Kenya, a husband's funeral has long concluded with a final ritual: sex between the widow and one of her husband's relatives, to break the bond with his spirit and, it is said, save her and the rest of the village from insanity or disease. Widows have long tolerated it, and traditional leaders have endorsed it, as an unchallenged tradition of rural African life. Now AIDS is changing that. Political and tribal leaders are starting to speak out publicly against so-called sexual cleansing, condemning it as one reason H.I.V. has spread to 25 million sub-Saharan Africans, killing 2.3 million last year alone. 
They are being prodded by leaders of the region's fledgling women's rights movement, who contend that lack of control over their sex lives is a major reason 6 in 10 of those infected in sub-Saharan Africa are women. But change is coming slowly, village by village, hut by hut. In a region where belief in witchcraft is widespread and many women are taught from childhood not to challenge tribal leaders or the prerogatives of men, the fear of flouting tradition often outweighs even the fear of AIDS. "It is very difficult to end something that was done for so long," said Monica Nsofu, a nurse and AIDS organizer in the Monze district in southern Zambia, about 200 miles south of the capital, Lusaka. "We learned this when we were born. People ask, 'Why should we change?'" In Zambia, where one out of five adults is now infected with the virus, the National AIDS Council reported in 2000 that this practice was very common. Since then, President Levy Mwanawasa has declared that forcing new widows into sex or marriage with their husband's relatives should be discouraged, and the nation's tribal chiefs have decided not to enforce either tradition, their spokesman said. Still, a recent survey by Women and Law in Southern Africa found that in at least one-third of the country's provinces, sexual "cleansing" of widows persists, said Joyce MacMillan, who heads the organization's Zambian chapter. In some areas, the practice extends to men.

Some Defy the Risk

Even some Zambian volunteers who work to curb the spread of AIDS are reluctant to disavow the tradition. Paulina Bubala, a leader of a group of H.I.V.-positive residents near Monze, counsels schoolchildren on the dangers of AIDS. But in an interview, she said she was ambivalent about whether new widows should purify themselves by having sex with male relatives. Her husband died of what appeared to be AIDS-related symptoms in 1996. Soon after the funeral, both Ms.
Bubala and her husband's second wife covered themselves in mud for three days. Then they each bathed, stripped naked with their dead husband's nephew and rubbed their bodies against his. Weeks later, she said, the village headman told them this cleansing ritual would not suffice. Even the stools they sat on would be considered unclean, he warned, unless they had sex with the nephew. "We felt humiliated," Ms. Bubala said, "but there was nothing we could do to resist, because we wanted to be clean in the land of the headman." The nephew died last year. Ms. Bubala said the cause was hunger, not AIDS. Her husband's second wife now suffers symptoms of AIDS and rarely leaves her hut. Ms. Bubala herself discovered she was infected in 2000. But even the risk of disease does not dent Ms. Bubala's belief in the ritual's protective powers. "There is no way we are going to stop this practice," she said, "because we have seen a lot of men and women who have gone mad" after spouses died. Ms. Nsofu, the nurse and AIDS organizer, argues that it is less important to convince women like Ms. Bubala than the headmen and tribal leaders who are the custodians of tradition and gatekeepers to change. "We are telling them, 'If you continue this practice, you won't have any people left in your village,'" she said. She cites people, like herself, who have refused to be cleansed and yet seem perfectly sane. Sixteen years after her husband died, she argues, "I am still me." Ms. Nsofu said she suggested to tribal leaders that sexual cleansing most likely sprang not from fears about the vengeance of spirits but from the lust of men who coveted their relatives' wives. She proposes substituting other rituals to protect against dead spirits, like chanting and jumping back and forth over the grave or over a cow.

Headman Is a Firm Believer

Like their counterparts in Zambia, Malawi's health authorities have spoken out against forcing widows into sex or marriage.
But in the village of Ndanga, about 90 minutes from the nation's largest city, Blantyre, many remain unconvinced. Evance Joseph Fundi, Ndanga's 40-year-old headman, is courteous, quiet-spoken and a firm believer in upholding the tradition. While some widows sleep with male relatives, he said, others ask him to summon one of the several appointed village cleansers. In the native language of Chewa, those men are known as fisis, or hyenas, because they are supposed to operate in stealth and at night. Mr. Fundi said one of them died recently, probably of AIDS. Still, he said with a charming smile, "We cannot abandon this because it has been for generations." Since 1953, Amos Machika Schisoni has served as the principal village cleanser. He is uncertain of his age, and it is not easily guessed at. His hair is grizzled but his arms are sinewy and his legs muscled. His hut of mud bricks, set about 50 yards from a graveyard, is even more isolated than most in a village of far-flung huts separated by towering weeds and linked by dirt paths.

What Tradition Dictates

He and the headman like to joke about the sexual demands placed upon a cleanser like Mr. Schisoni, who already has three wives. He said tradition dictates that he sleep with the widow, then with each of his own wives, and then again with the widow, all in one night. Mr. Schisoni said that the previous headman chose him for his sexual prowess after he had impregnated three wives in quick succession. Now, Mr. Schisoni said, he continues his role out of duty more than pleasure. Uncleansed widows suffer swollen limbs and are not free to remarry, he said. "If we don't do it, the widow will develop the swelling syndrome, get diarrhea and die and her children will get sick and die," he said, sitting under an awning of drying tobacco leaves. "The women who do this do not die." His wives support his work, he said, because they like the income: a chicken for each cleansing session.
He insisted that he cannot wear a condom because "this will provoke some other unknown spirit." He is equally adamant in refusing an H.I.V. test. "I have never done it and I don't intend to do it," he said. To protect himself, he said, he avoids widows who are clearly quite sick. Told that even widows who look perfectly healthy can transmit the virus, Mr. Schisoni shook his head. "I don't believe this," he said. At the traditional family council after James Mbewe was killed in a truck accident in August 2002, Fanny Mbewe's mother and brothers objected to a cleanser, saying the risk of AIDS was too great. But Ms. Mbewe's in-laws insisted, she said. If a villager so much as dreamed of her husband, they told her, the family would be blamed for allowing his spirit to haunt their community on the Malawi-Zambia border. Her husband's cousin, to whom she refers only as Loimbani, showed up at her hut at 9 o'clock at night after the burial. "I was hiding my private parts," she said in an interview in the office of Women's Voice, a Malawian human rights group. "You want to have a liking for a man to have sex, not to have someone force you. But I had no choice, knowing the whole village was against me." Loimbani, she said, was blasé. "He said: 'Why are you running away? You know this is our culture. If I want, I could even make you my second wife.'" He did not. He left her only with the fear that she will die of the virus and that her children, now 8 and 10, will become orphans. She said she is too fearful to take an H.I.V. test. "I wish such things would change," she said.
From checker at panix.com Thu May 12 00:38:50 2005 From: checker at panix.com (Premise Checker) Date: Wed, 11 May 2005 20:38:50 -0400 (EDT) Subject: [Paleopsych] CHE: NIH Continues to Place a Low Priority on Research on Gender Differences Message-ID: NIH Continues to Place a Low Priority on Research on Gender Differences, Health-Advocacy Group Says News bulletin from the Chronicle of Higher Education, 5.5.11 http://chronicle.com/prm/daily/2005/05/2005051102n.htm [53]By SILLA BRUSH

Washington -- Research on the biological and health differences between men and women remains a low priority at the National Institutes of Health, according to a report released on Tuesday by the Society for Women's Health Research, despite what the society says is increasing evidence of the importance of such research. The society, a Washington-based advocacy organization, says research on sexual differences is necessary in all types of biological studies. But from 2000 to 2003, only about 3 percent of all grants awarded by the NIH went to projects on the differences between men and women, according to the report, although there was nearly a 20-percent increase in the total number of NIH grants. "Given the growing body of literature on sex differences, external reports about NIH practices, and the NIH's internal efforts to promote this research, we had hoped to see higher and increasing levels of funding for this important area of research," Sherry A. Marts, the society's vice president for scientific affairs and an author of the report, said in a written statement. Donald M. Ralbovsky, a spokesman for the NIH, said the agency was reviewing the report. He declined to comment further. The National Institute on Alcohol Abuse and Alcoholism awarded 8 percent of its grants to such research, the highest level of any NIH center, according to the report.
The centers that have the most money and support the most grants each year, such as the National Cancer Institute and the National Heart, Lung, and Blood Institute, however, ranked low in their support for studies on sexual and gender differences. From 2000 to 2003, the centers that financed the highest percentage of studies on sexual differences also cut back on their support. To increase the level of support, the society recommends updating NIH guidelines to promote that type of research and issuing an NIH-wide public announcement inviting applications for the research. The full text of the report, "National Institutes of Health: Intramural and Extramural Support for Research on Sex Differences, 2000-2003," is available on the society's [71]Web site. _________________________________________________________________ Background articles from The Chronicle: * [73]Study Challenges View That Clinical Trials Have Focused on Men (5/11/2001) * [74]More Research Needed on Women, Study Finds (5/19/2000) * [75]Studies of Women's Health Produce a Wealth of Knowledge on the Biology of Gender Differences (6/25/1999) References 53. mailto:silla.brush at chronicle.com 71. http://www.womenshealthresearch.org/press/CRISPreport.pdf 73. http://chronicle.com/weekly/v47/i35/35a01801.htm 74. http://chronicle.com/weekly/v46/i37/37a04403.htm 75. http://chronicle.com/weekly/v45/i42/42a01901.htm E-mail me if you have problems getting the referenced articles. From checker at panix.com Thu May 12 00:39:11 2005 From: checker at panix.com (Premise Checker) Date: Wed, 11 May 2005 20:39:11 -0400 (EDT) Subject: [Paleopsych] Joel Kotkin: Cities: Places Sacred, Safe, and Busy Message-ID: Joel Kotkin: Cities: Places Sacred, Safe, and Busy http://www.americancity.org/article.php?id_article=119 Humankind's greatest creation has always been its cities. They represent the ultimate handiwork of our imagination as a species, compressing and unleashing the creative urges of humanity.
From the earliest beginnings, when only a tiny fraction of humans lived in cities, they have been the places that generated most of mankind's art, religion, culture, commerce, and technology. Although many often mistakenly see cities as largely a Western phenomenon, with one set of roots, urbanism has worn many different guises. Over the past five to seven millennia, cities have been built in virtually every part of the world, from the highlands of Peru to the tip of southern Africa and the coasts of Australia. Some cities started as little more than overgrown villages that, over time, developed momentum and mass. Others have reflected the conscious vision of a high priest, ruler, or business elite, following a general plan to fulfill some greater divine, political, or economic purpose. The oldest permanent urban footprints are believed to be in Mesopotamia, the land between the Tigris and Euphrates rivers. From those roots sprang a plethora of metropolises that represent the founding experiences of the Western urban heritage, including Ur, Agade, Babylon, Nineveh, Memphis, Knossos, and Tyre. But many other cities sprang up largely independent of these early Mesopotamian and Mediterranean settlements. Some of these, such as Mohenjo-daro and Harappa in India and Chang'an in China, achieved a scale and complexity equal to any of their Western contemporaries. All of these cities, numerous and various, are, however, reflective of some greater universal human aspiration. The key to understanding that universal aspiration lies in the words of the Greek historian Herodotus. While traveling in the 5th century B.C. to places both thriving and struggling, he wrote, "For most of those which were great once are small today; and those that used to be small were great in my own time." Cities throughout history have risen and fallen. The critical questions of Herodotus' time still remain: what makes cities great, and what leads to their gradual demise?
I argue that three critical factors have determined the overall health of cities: the sacredness of place, the ability to provide security and project power, and the animating role of commerce. Where these factors are present, urban culture flourishes. When these elements weaken, cities dissipate and eventually recede out of history.

The Sacredness of Place

Religious structures -- temples, cathedrals, mosques, and pyramids -- have long dominated the landscape and imagination of great cities. These buildings suggested that the city was also a sacred place, connected directly to divine forces controlling the world. This was true not only in Mesopotamia, but also in the great capital cities of China, in Athens, in Rome, in the city-states of the Italian Renaissance, and among the far-flung urban centers of the classical Islamic world. In our own, much more secularly oriented time, the role of religion and of sacred place is often downgraded and even ignored -- likely at our own great peril, as evidenced by the downfall of overly secular cities from ancient Greece to the centers of Soviet society. Yet even the most secular of cities still seek to recreate the sense of sacred place through towering commercial buildings and evocative cultural structures. Such sights inspire a sense of civic patriotism or awe, albeit without the comforting suggestion of divine guidance. A striking landscape, historian Kevin Lynch once suggested, is the skeleton in which city dwellers construct their socially important myths.

The Need for Security

Cities must first and foremost be safe. Many contemporary urban areas, notably in western Europe, North America, and East Asia, have taken this precept for granted, but the threat posed by general disorder in many Third World cities and by Islamic terror around the globe may once again focus urbanites on the fundamental issue of security. An increased focus on safety would be in keeping with historic norms.
Many cities, observed historian Henry Pirenne, first arose as places of refuge from marauding nomads, or from general lawlessness. When a city's ability to guarantee the safety of its citizens and institutions has declined, as at the end of the western Roman empire or in crime-infested late-20th Century American inner cities, urbanites have tended to retreat to the hinterland or to migrate to another, safer city.

The Role of Commerce

Yet sanctity and safety alone cannot create great cities. Priests, soldiers, and bureaucrats may provide the prerequisites for urban success, but they cannot themselves produce enough wealth to sustain large populations for a long period of time. Great cities can flourish as administrative, cultural, or artistic centers for only as long as they either create wealth or can extract it from other places. Over time, virtually every parasitic urban economy -- including the most effective of all, ancient Rome -- has declined as it lost the ability to siphon off the resources of its periphery. Cities that generated their own wealth have proven far more sustainable. The self-sustaining city has required an active economy of artisans, merchants, working people, and sadly, in most places and most of history, slaves. Such people, necessarily the vast majority of urbanites, have, since the advent of capitalism, emerged as the primary creators of the city itself.

The Islamic City

To understand how the three critical factors have worked throughout history -- and to understand the challenges facing cities around the world today, in the first era in history in which the majority of people live in urban areas -- we must look beyond the Western context that has been the focus of most urban historians in America. Only two of the world's twenty largest metropolitan areas, New York and Los Angeles, are fundamentally Western cities.
Most of the world's fastest-growing cities, such as those in the Islamic world, and many of the most increasingly influential ones, notably in East Asia, have developed in strikingly different historical contexts. Islam started out as a profoundly urban faith. Mohammed was a successful merchant in Mecca, a long established trading and religious center on the barren Arabian peninsula. Mecca had been influenced by first Hellenistic and then Roman rulers; its varied population included pagans, Jews, and after the 2nd Century, Christians as well. The old clan loyalties of the desert culture posed a distinct threat to this nascent urban community. Meccans lacked the common ethos and rule of law applicable to unrelated people that had held cities together since Mesopotamian times. In this respect, Mohammed's great achievement was to supplant Bedouin clan ties with a sense of universal moral values -- similar to the role played by the Catholic Church in Europe in the Middle Ages. The Muslim epoch which followed Mohammed's death in 632 represented a new beginning in urban history. Islam broke dramatically with the traditions of classical urbanism exemplified by Socrates, who saw the people of the city as a primary source of knowledge. Islam fostered a sophisticated urban culture but did not worship the city for its own sake. Religious concerns, the integration of the daily lives of men with a transcendent God, overshadowed those of municipal affairs. The primacy of faith was evident in Islamic cities. Instead of the classical emphasis on public buildings and spaces, mosques now arose at the center of urban life. Today's West sees Islam as intolerant of modernity and cosmopolitanism -- partially because of an actual threat of Islamic terrorists and partially because of more general stereotypes. Yet early Muslim civilization promoted something far different from intolerant jihads.
The early Islamic conquerors sought to incorporate newly acquired cities -- Damascus, Jerusalem, and Carthage -- into what they believed to be a spiritually superior urban civilization. Other "peoples of the book" -- as Muslims considered Jews and Christians -- were allowed to practice their faiths with considerable freedom. The Koran simply suggested that these dhimmis be made tributaries to the new regime, and thus humbled. Otherwise, their rights were assured. This relative toleration led some Jews and even Christians to welcome, and even assist in, the Muslim takeover of their cities. The cosmopolitan and orderly character of Islamic urban life also spurred the growth of trade, as well as the elevation of the arts and sciences. In the newly conquered cities, the Arab suq (market) improved on the Greco-Roman agora. Rulers developed elaborate commercial districts, with large buildings shaded from the hot desert sun, including storerooms and hostels for visiting merchants. The new rulers also built large libraries, universities, and hospitals at a pace not seen since Roman times, across a remarkable archipelago of new urban centers. From Cordoba in Spain -- which one German nun described as "the jewel of the world" -- to Cairo in Egypt, Baghdad in Iraq, Shiraz in Persia, and Delhi in India, Islamic cities provided a model of urbanity at a time when much of Europe's once great urban civilization was largely in disrepair. The subsequent decline of Islamic cities, dating to perhaps as early as the 17th Century, represents one of the great urban tragedies of the last millennium. Under assault from technologically and economically aggressive Western societies, the great Islamic cities generally fell behind, most particularly in the wake of the Industrial Revolution. Even the great windfall offered by the presence of massive reserves of energy has failed to reverse this decline.
Despite the expenditure of billions in petro-dollars, most of the world's largest Muslim cities -- from Cairo and Baghdad to Tehran, Lahore, and Jakarta -- continue to lag behind Western cities in economics, technology, and social development. These societies have generally failed to understand the importance to city life of the free flow of commerce, a decent regime of law, and a sense of moral order that tolerates the existence of the non-orthodox.

East Asia Revives the Urban Society

East Asia, the home of one of the world's other great urban traditions, presents a far more hopeful picture of urban prospects. After a precipitous decline that started in the 17th Century, cities in East Asia have in recent times enjoyed a remarkable resurgence. Today many of the world's most prosperous cities -- including the largest, Tokyo -- are located in East Asia. Japan forged the first great expression of modern Asian urbanism, consciously blending imported technology and city planning techniques with a uniquely Asian sense of civic values and order. Its model of urban development has inspired others in Asia. This is particularly evident in the sprawling metropolitan area of Seoul, a onetime Japanese colonial capital which has thrived under American military and economic protection over the past four decades. Yet arguably the most critical evolution has been the one that took place in the Chinese-dominated urban spheres of East Asia. Although old imperial cities such as Beijing continued to decline throughout much of the 20th Century, modern Chinese urbanism evolved, often dramatically, in cities such as Hong Kong, Shanghai, and Singapore under powerful Western influence. Until the Communist takeover in 1949, Shanghai was the greatest of these cities: a corrupt but powerful industrial and commercial center. Subsequently the two colonial cities, Hong Kong and Singapore, showed the way to a new model of Chinese-based urbanity.
Although Hong Kong expanded more rapidly at first, it may well be that Singapore developed the urban archetype that, over time, would dominate much of East Asia. At the time of its independence in 1965, the prospects for the tiny, 225-square-mile republic appeared dubious at best. The city suffered all the usual problems associated with developing countries: large, crowded slums, criminal gangs, and a relatively unskilled population. The country also faced hostility from neighboring, far more populous, and predominantly Muslim Malaysia, from which it had broken away. Singapore's great achievement lay in employing its new sovereign power to construct one of the stunning urban success stories of the late-20th Century. Under the authoritarian leadership of Cambridge-educated Lee Kuan Yew, tenements were replaced by planned apartment complexes; congested streets were supplanted by a modern road system under which ran an advanced subway system; and crime, once rampant, was nearly eliminated. The key to Singapore's success lay in economic growth. Lee and his government worked assiduously to exploit Singapore's natural advantage as a harbor and transit center for trans-Asian trade. Moving rapidly from low-wage industries like textiles to high-technology and service industries, Singapore by the end of the 20th Century boasted one of the world's best educated and most economically productive populations. Class divisions remained, but most residents now achieved a standard of living and wealth unimaginable for the masses in other cities of the post-colonial world. Income levels, barely $800 per person in 1964, had risen to over $23,000 in 1999. Critically, Lee was not only interested in improving the short-run economic prospects for his tiny city-state; he wanted to develop a new Asian urban culture capable of competing globally well into the 21st Century.
"Having given them a clean city, modern amenities, and a strong economy," one of his ministers declared, "we are now thinking of what culture we should give them." By the mid-1980s, Lee had decided what kind of culture he wanted for his people: one built on the bedrock of the city's Asian, and particularly Chinese, values. The self-described Anglophile now promoted an essentially Confucian ethos based on respect for the authority of a wise and powerful mandarin elite. Without this culture, he suggested, Singapore would soon degenerate into what he scathingly described as "another Third World society." By the 1980s, even China's Communist leaders, long contemptuous of their capitalist-minded overseas brethren and hostile to Western notions of urbanism, began to embrace the Singaporean model. In 1992, China's paramount leader, Deng Xiaoping, openly expressed particular admiration for Singapore's approach to social order as the best blueprint for the rapid development of China's own cities. Under Deng's Four Modernizations, Beijing gradually loosened its strict control over municipalities. Local officials now encouraged private initiative and outside investment. The creation of special economic zones, such as that in Shenzhen between Hong Kong and Canton, attracted the largest amounts of foreign capital, much of it from Hong Kong, Taipei, and Singapore. Within fifteen years, the area around the Pearl River Delta had, much like the British Midlands in the mid-19th Century, become not only the country's workshop but rapidly the workshop of the world. In less than a generation, China's predominantly rural society is being rapidly urbanized. Streets which only two decades ago were filled with bicycles are now choked with automobile traffic. New modern office buildings, hotels, and high-rise apartments dwarf the old Stalinist-style state buildings along the major boulevards.
Public markets have reappeared, offering an ever-wider variety of meats, vegetables, and fruits to an increasingly affluent public. Chinese cities, notably Shanghai, now are the stage for some of the world's most ambitious infrastructure projects and most spectacular new skyscrapers.

The Urban Future

The Need for Commerce

Today most governments, private corporations, and non-profits around the world focus both on creating a dynamic economy and on reducing the age-old scourge of poverty. In this respect, the brightest immediate prospects for the urban future lie in East Asia. In contrast, the commercial vitality of many older European cities, and at least some in the New World and Australia, seems likely to be ever more challenged -- even in the highest value-added activities -- by urban centers not only in China, but in India and other Asian countries. Far more distressing are the economic prospects of the cities of the Third World. These continue to struggle with the historically unprecedented condition of rapid demographic expansion and weak, even negative, economic growth. Until the poverty of these cities -- whether in the Middle East, Africa, South America, or parts of Southeast Asia -- is adequately addressed, there seems no way for them to develop successful urban centers.

The Continuing Importance of Security

In addition to the economic challenge, the world's cities also face the challenge of maintaining both law and order. Urbanites, to be truly productive, must feel at least somewhat secure in their persons. They also need to depend on a responsible authority capable of administering contracts and enforcing basic codes of commercial behavior. Today, fear of both crime and capricious authority slows the movement of foreign capital to many Third World cities. Even in relatively peaceful countries, kleptocratic bureaucracies deflect business investment to safer and less congenitally larcenous places. Yet the greatest threat to the urban future comes from Islamic terrorism.
In the years following the 2001 attacks on New York and Washington, D.C., both individuals and businesses have begun to rethink locating close to prime potential terrorist targets in high-profile urban locations. In addition to the already difficult challenges posed by changing economic and social trends, cities around the world now have to contend with the constant threat of physical obliteration.

The Sacred Place

Despite such threats, the urban ideal has demonstrated a remarkable resilience. Fear rarely is enough to stop the determined builders of cities. For all the cities that have been ruined permanently by war, pestilence, or natural disaster, many others -- Carthage, Rome, London, and Tokyo -- have been rebuilt, often more than once. Even amidst mounting terrorist threats, city officials and developers not only in New York but in London, Tokyo, Shanghai, and other major cities continue to plan new office towers and other superlative edifices. Today, as much as when cities originated, the value people place on the urban experience over time will prove more important than any assemblage of new buildings. Whether in the traditional urban core or in the expanding periphery, issues of identity and community still largely determine which places ultimately succeed and which do not. As progenitors of a new kind of human existence, the earliest city-dwellers found themselves confronting vastly different problems than those faced in prehistoric nomadic communities and agricultural villages. Urbanites had to learn how to co-exist and interact with strangers from outside their clan or tribe. This required new ways to codify rules and determine commonly acceptable behavior in family life, commerce, and social discourse. Today, the lack of a shared moral order could prove as dangerous to the future of cities as the most hideous terrorist threats.
Cities in the modern West, as historian Daniel Bell has suggested, have depended on a broad adherence to classical and Enlightenment ideals: due process, freedom of belief, the basic rights of property. To shatter these essential principles, whether in the name of the marketplace, multicultural separatism, or religious dogma, would render the contemporary city in the West helpless to meet the enormous challenges before it. Yet history tells us that the West represents only one road to successful urbanism. History abounds with models developed under explicitly pagan, Muslim, Buddhist, and Hindu auspices. We cannot ignore that notable success in city-building has occurred in recent years under neo-Confucian belief systems, amalgamating modernity and tradition. Over time these systems must also find ways to deal with the ill effects of unrestrained market capitalism on society and, particularly in China itself, the self-interested corruption of the ruling authoritarian elite. It is to be hoped that the Islamic world, having found Western values wanting, may discover in its own glorious past -- replete with cosmopolitan values and belief in scientific progress -- the means to salvage its troubled urban civilization. From that model it may learn that successful cities must adapt their moral order to accommodate differing populations. Cities can only thrive by occupying a sacred place that both orders and inspires the complex natures of gathered masses of people. For five thousand years or more, the human attachment to cities has served as the primary forum for political and material progress. It is in the city, this ancient confluence of the sacred, safe, and busy, where humanity's future will be shaped for centuries to come. This article is excerpted from Joel Kotkin's new book, The City: A Global History, recently published by Modern Library. -- ed.
From checker at panix.com Thu May 12 00:40:51 2005
From: checker at panix.com (Premise Checker)
Date: Wed, 11 May 2005 20:40:51 -0400 (EDT)
Subject: [Paleopsych] Edge 160: The Science of Gender and Science: Pinker vs. Spelke: A Debate
Message-ID: 

The Science of Gender and Science: Pinker vs. Spelke: A Debate
http://www.edge.org/documents/archive/edge160.html

Edge 160 -- May 10, 2005 (21,150 words)

...on the research on mind, brain, and behavior that may be relevant to gender disparities in the sciences, including the studies of bias, discrimination and innate and acquired difference between the sexes. Harvard University o Mind/Brain/Behavior Initiative

[After this is a profile of John Brockman, then Linda S. Gottfredson responding to Simon Baron-Cohen, and some Edge books.]

Harvard University o Mind/Brain/Behavior Initiative

The Mind Brain and Behavior Inter-Faculty Initiative (MBB), under the leadership of Co-Directors Marc D. Hauser and Elizabeth Spelke, is a university-wide community that studies the structure, function, evolution, development, and pathology of the nervous system, in relation to decision-making and behavior.

_________________________________________________________________

Introduction

On April 22, 2005, Harvard University's Mind/Brain/Behavior Initiative (MBB) held a defining debate on the public discussion that began on January 16th with the public comments by Lawrence Summers, president of Harvard, on sex differences between men and women and how they may relate to the careers of women in science. The debate at MBB, "The Science of Gender and Science," was "on the research on mind, brain, and behavior that may be relevant to gender disparities in the sciences, including the studies of bias, discrimination and innate and acquired difference between the sexes".
It's interesting to note that since the controversy surrounding Summers' remarks began, there has been an astonishing absence of discussion of the relevant science... you won't find it in the hundreds and hundreds of articles in major newspapers; nor will you find it in the Harvard faculty meetings where the president of the leading university in America was indicted for presenting controversial ideas.

Scientists debate continually, and reality is the check. They may have egos as large as those possessed by the iconic figures of the academic humanities, but they handle their hubris in a very different way. They can be moved by arguments, because they work in an empirical world of facts, a world based on reality. There are no fixed, unalterable positions. They are both the creators and the critics of their shared enterprise. Ideas come from them and they also criticize one another's ideas. Through the process of creativity, criticism, and debate, they decide which ideas get weeded out and which become part of the consensus that leads to the next level of discovery.

But unlike just about anything else said about Summers' remarks, the debate, "The Science of Gender and Science", between Harvard psychology professors Steven Pinker and Elizabeth Spelke, focused on the relevant scientific literature. The two drew on much of the same evidence but differed in interpretation. Both presented scientific evidence with the realization and understanding that there was nothing obvious about how the data were to be interpreted. Their sharp scientific debate informed rather than detracted. And it showed how a leading university can still fulfill its role of providing a forum for free and open discussion on controversial subjects in a fair-minded way. It also had the added benefit that the participants knew what they were talking about.

Who won the debate? Make up your own mind. Watch the video, listen to the audio, read the text and check out the slide presentations.
There's a lesson here: let's get it right, and when we do, we will adjust our attitudes. That's what science can do, and that's what Edge offers by presenting Pinker vs. Spelke to a wide public audience. [18]--John Brockman's biography

STEVEN PINKER is the Johnstone Family Professor in the Department of Psychology at Harvard University. His research has won prizes from the National Academy of Sciences and the Royal Institution of Great Britain, and he is the author of six books, including The Language Instinct, How the Mind Works, Words and Rules, and The Blank Slate. [19]Steven Pinker's Edge Bio Page

ELIZABETH S. SPELKE is Berkman Professor of Psychology at Harvard University, where she is Co-Director of the Mind, Brain, and Behavior Initiative. A member of the National Academy of Sciences and the American Academy of Arts and Sciences, she is cited by Time Magazine as one of America's Best in Science and Medicine. [20]Elizabeth Spelke's Edge Bio Page

[EDITOR'S NOTE: Pinker and Spelke each made presentations of about 40 minutes, without interruption from each other or from the audience. They then responded to each other's presentations. By mutual agreement, Pinker made the first presentation. This Edge presentation includes: the transcribed text; streaming audio of the full debate; 6-minute video clips from Pinker and Spelke's opening statements; a 20-minute video clip of their closing discussion; and online versions of the speakers' slide presentations. There are two options for viewing the slides: Clicking on the links immediately below brings up the file of either Pinker or Spelke's complete slide presentation. Or, the individual slides are also included for reference as expandable thumbnails in the margin of the transcript.]
[21]PINKER slide presentation | [22]SPELKE slide presentation

Steven Pinker [40 minutes]: [23]Streaming Audio
Elizabeth Spelke [45 minutes]: [24]Streaming Audio
Concluding Discussion [20 minutes]: [25]Streaming Audio

Steven Pinker: Opening Remarks [6-minute video]: [27]Broadband | [28]Modem
Elizabeth Spelke: Opening Remarks [6-minute video]: [30]Broadband | [31]Modem
Steven Pinker & Elizabeth Spelke: Concluding Discussion [20-minute video]: [33]Broadband | [34]Modem

The complete video is also available for download through Harvard's MBB website ([37]click here).

Steven Pinker
_________________________________________________________________

(STEVEN PINKER:) Thanks, Liz, for agreeing to this exchange. It's a privilege to be engaged in a conversation with Elizabeth Spelke. We go back a long way. We have been colleagues at MIT, where I helped attract her, and at Harvard, where she helped to attract me. With the rest of my field, I have enormous admiration for Elizabeth's brilliant contributions to our understanding of the origins of cognition. But we do find ourselves with different perspectives on a recent issue. [38][pinker_Page_01.jpg] [These are Steve's Power Point pages.]

For those of you who just arrived from Mars, there has been a certain amount of discussion here at Harvard on a particular datum, namely the under-representation of women among tenure-track faculty in elite universities in physical science, math, and engineering. Here are some recent numbers: [39][pinker_Page_02.jpg]

As with many issues in psychology, there are three broad ways to explain this phenomenon. One can imagine an extreme "nature" position: that males but not females have the talents and temperaments necessary for science.
Needless to say, only a madman could take that view. The extreme nature position has no serious proponents. [40][pinker_Page_03.jpg] There is an extreme "nurture" position: that males and females are biologically indistinguishable, and all relevant sex differences are products of socialization and bias. Then there are various intermediate positions: that the difference is explainable by some combination of biological differences in average temperaments and talents interacting with socialization and bias. [41][pinker_Page_04.jpg] Liz has embraced the extreme nurture position. There is an irony here, because in most discussions in cognitive science she and I are put in the same camp, namely the "innatists," when it comes to explaining the mind. But in this case Liz has said that there is "not a shred of evidence" for the biological factor, that "the evidence against there being an advantage for males in intrinsic aptitude is so overwhelming that it is hard for me to see how one can make a case at this point on the other side," and that "it seems to me as conclusive as any finding I know of in science." [42][pinker_Page_05.jpg] Well we certainly aren't seeing the stereotypical gender difference in confidence here! Now, I'm a controversial guy. I've taken many controversial positions over the years, and, as a member of Homo sapiens, I think I am right on all of them. But I don't think that in any of them I would say there is "not a shred of evidence" for the other side, even if I think that the evidence favors one side. I would not say that the other side "can't even make a case" for their position, even if I think that their case is not as good as the one I favor. And as for saying that a position is "as conclusive as any finding in science" -- well, we're talking about social science here! 
This statement would imply that the extreme nurture position on gender differences is more conclusive than, say, the evidence that the sun is at the center of the solar system, the evidence for the laws of thermodynamics, for the theory of evolution, for plate tectonics, and so on. These are extreme statements -- especially in light of the fact that an enormous amount of research, summarized in these and many other literature reviews, in fact points to a very different conclusion. I'll quote from one of them, a book called Sex Differences in Cognitive Abilities by Diane Halpern. She is a respected psychologist, recently elected as president of the American Psychological Association, and someone with no theoretical axe to grind. She does not subscribe to any particular theory, and has been a critic, for example, of evolutionary psychology. And here is what she wrote in the preface to her book: [43][pinker_Page_06.jpg] [44][pinker_Page_07.jpg] "At the time I started writing this book it seemed clear to me that any between-sex differences in thinking abilities were due to socialization practices, artifacts, and mistakes in the research. After reviewing a pile of journal articles that stood several feet high, and numerous books and book chapters that dwarfed the stack of journal articles, I changed my mind. The literature on sex differences in cognitive abilities is filled with inconsistent findings, contradictory theories, and emotional claims that are unsupported by the research. Yet despite all the noise in the data, clear and consistent messages could be heard. There are real and in some cases sizable sex differences with respect to some cognitive abilities. Socialization practices are undoubtedly important, but there is also good evidence that biological sex differences play a role in establishing and maintaining cognitive sex differences, a conclusion I wasn't prepared to make when I began reviewing the relevant literature."
[45][pinker_Page_08.jpg] This captures my assessment perfectly. Again for the benefit of the Martians in this room: This isn't just any old issue in empirical psychology. There are obvious political colorings to it, and I want to begin with a confession of my own politics. I am a feminist. I believe that women have been oppressed, discriminated against, and harassed for thousands of years. I believe that the two waves of the feminist movement in the 20th century are among the proudest achievements of our species, and I am proud to have lived through one of them, including the effort to increase the representation of women in the sciences. But it is crucial to distinguish between the moral proposition that people should not be discriminated against on account of their sex -- which I take to be the core of feminism -- and the empirical claim that males and females are biologically indistinguishable. They are not the same thing. Indeed, distinguishing them is essential to protecting the core of feminism. Anyone who takes an honest interest in science has to be prepared for the facts on a given issue to come out either way. And that makes it essential that we not hold the ideals of feminism hostage to the latest findings from the lab or field. Otherwise, if the findings come out as showing a sex difference, one would either have to say, "I guess sex discrimination wasn't so bad after all," or else furiously suppress or distort the findings so as to preserve the ideal. The truth cannot be sexist. Whatever the facts turn out to be, they should not be taken to compromise the core of feminism. [46][pinker_Page_09.jpg] Why study sex differences? Believe me, being the Bobby Riggs of cognitive science is not my idea of a good time. So why should I care about them, especially since they are not the focus of my own research? First, differences between the sexes are part of the human condition. We all have a mother and a father.
Most of us are attracted to members of the opposite sex, and the rest of us notice the difference from those who do. And we can't help but notice the sex of our children, friends, and our colleagues, in every aspect of life. Also, the topic of possible sex differences is of great scientific interest. Sex is a fundamental problem in biology, and sexual reproduction and sex differences go back a billion years. There's an interesting theory, which I won't have time to explain, which predicts that there should be an overall equal investment of organisms in their sons and daughters; neither sex is predicted to be superior or inferior across the board. There is also an elegant theory, namely Bob Trivers' theory of differential parental investment, which makes highly specific predictions about when you should expect sex differences and what they should look like. [47][pinker_Page_10.jpg] [48][pinker_Page_11.jpg] The nature and source of sex differences are also of practical importance. Most of us agree that there are aspects of the world, including gender disparities, that we want to change. But if we want to change the world we must first understand it, and that includes understanding the sources of sex differences. Let's get back to the datum to be explained. In many ways this is an exotic phenomenon. It involves biologically unprepared talents and temperaments: evolution certainly did not shape any part of the mind to do the work of a professor of mechanical engineering at MIT, for example. The datum has nothing to do with basic cognitive processes, or with those we use in our everyday lives, in school, or even in most college courses, where indeed there are few sex differences. Also, we are talking about extremes of achievement. Most women are not qualified to be math professors at Harvard because most men aren't qualified to be math professors at Harvard. These are extremes in the population. And we're talking about a subset of fields. 
Women are not under-represented to nearly the same extent in all academic fields, and certainly not in all prestigious professions. Finally, we are talking about a statistical effect. This is such a crucial point that I have to discuss it in some detail. [49][pinker_Page_12.jpg] Women are nowhere near absent even from the field in which they are most under-represented. The explanations for sex differences must be statistical as well. And here is a touchstone for the entire discussion: These are two Gaussian or normal distributions; two bell curves. The X axis stands for any ability you want to measure. The Y axis stands for the proportion of people having that ability. The overlapping curves are what you get whenever you compare the sexes on any measure in which they differ. In this example, if we say that this is the male curve and this is the female curve, the means may be different, but at any particular ability level there are always representatives of both genders. [50][pinker_Page_13.jpg] So right away a number of public statements that have been made in the last couple of months can be seen as red herrings, and should never have been made by anyone who understands the nature of statistical distributions. This includes the accusation that President Summers implied that "50% of the brightest minds in America do not have the right aptitude for science," that "women just can't cut it," and so on. These statements are statistically illiterate, and have nothing to do with the phenomena we are discussing. [51][pinker_Page_14.jpg] [52][pinker_Page_15.jpg] [53][pinker_Page_16.jpg] There are some important corollaries of having two overlapping normal distributions. One is that a normal distribution falls off according to the negative exponential of the square of the distance from the mean.
That means that even when there is only a small difference in the means of two distributions, the more extreme a score, the greater the disparity there will be in the two kinds of individuals having such a score. That is, the ratios get more extreme as you go farther out along the tail. If we hold a magnifying glass to the tail of the distribution, we see that even though the distributions overlap in the bulk of the curves, when you get out to the extremes the difference between the two curves gets larger and larger. [54][pinker_Page_17.jpg] For example, it's obvious that distributions of height for men and women overlap: it's not the case that all men are taller than all women. But while at five foot ten there are thirty men for every woman, at six feet there are two thousand men for every woman. Now, sex differences in cognition tend not to be so extreme, but the statistical phenomenon is the same. [55][pinker_Page_18.jpg] [56][pinker_Page_19.jpg] A second important corollary is that tail ratios are affected by differences in variance. And biologists since Darwin have noted that for many traits and many species, males are the more variable gender. So even in cases where the mean for women and the mean for men are the same, the fact that men are more variable implies that the proportion of men would be higher at one tail, and also higher at the other. As it's sometimes summarized: more prodigies, more idiots. [57][pinker_Page_20.jpg] With these statistical points in mind, let me begin the substance of my presentation by connecting the political issue with the scientific one. Economists who study patterns of discrimination have long argued (generally to no avail) that there is a crucial conceptual difference between difference and discrimination. A departure from a 50-50 sex ratio in any profession does not, by itself, imply that we are seeing discrimination, unless the interests and aptitudes of the two groups are equated. 
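Both corollaries are easy to check numerically. The sketch below uses Python's standard-library NormalDist; every number in it (the means, standard deviations, and cutoffs) is an assumed, purely illustrative parameter, not a figure from the talk:

```python
from statistics import NormalDist

def tail_ratio(a: NormalDist, b: NormalDist, cutoff: float) -> float:
    """Proportion of distribution `a` above `cutoff`, divided by the
    corresponding proportion of distribution `b`."""
    return (1 - a.cdf(cutoff)) / (1 - b.cdf(cutoff))

# Corollary 1: a modest mean difference with broadly overlapping curves.
# Heights in inches -- rough, assumed parameters for illustration only.
men = NormalDist(mu=70.0, sigma=3.0)
women = NormalDist(mu=64.5, sigma=2.5)

ratio_5ft10 = tail_ratio(men, women, 70)  # men per woman at 5'10"
ratio_6ft = tail_ratio(men, women, 72)    # men per woman at 6'0"
# The ratio grows sharply as the cutoff moves further into the tail,
# even though the bulk of the two curves overlaps.

# Corollary 2: identical means but ~10% greater male variance (assumed).
men_iq = NormalDist(mu=100.0, sigma=16.5)
women_iq = NormalDist(mu=100.0, sigma=15.0)

upper = tail_ratio(men_iq, women_iq, 130)    # male excess at the top tail
lower = men_iq.cdf(70) / women_iq.cdf(70)    # male excess at the bottom tail
middle_men = men_iq.cdf(115) - men_iq.cdf(85)
middle_women = women_iq.cdf(115) - women_iq.cdf(85)  # female majority mid-range
```

With these made-up numbers, the ratio at the higher cutoff comes out several times larger than at the lower one, and the equal-mean, unequal-variance pair shows a male excess at both tails with a female majority in the middle of the range: "more prodigies, more idiots."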
Let me illustrate the point with an example, involving myself. I work in a scientific field -- the study of language acquisition in children -- that is in fact dominated by women. Seventy-five percent of the members of the main professional association are female, as are a majority of the keynote speakers at our main conference. I'm here to tell you that it's not because men like me have been discriminated against. I decided to study language development, as opposed to, say, mechanical engineering, for many reasons. The goal of designing a better automobile transmission does not turn me on as much as the goal of figuring out how kids acquire language. And I don't think I'd be as good at designing a transmission as I am at studying child language. Now, all we need to do to explain sex differences without invoking discrimination or invidious sexist comparisons is to suppose that whatever traits I have that predispose me to choose (say) child language over (say) mechanical engineering are not exactly equally distributed statistically among men and women. For those of you out there -- of either gender -- who also are not mechanical engineers, you should understand what I'm talking about. [58][pinker_Page_21.jpg] Okay, so what are the similarities and differences between the sexes? There certainly are many similarities. Men and women show no differences in general intelligence or g -- on average, they are exactly the same, right on the money. Also, when it comes to the basic categories of cognition -- how we negotiate the world and live our lives; our concept of objects, of numbers, of people, of living things, and so on -- there are no differences. [59][pinker_Page_22.jpg] Indeed, in cases where there are differences, there are as many instances in which women do slightly better than men as ones in which men do slightly better than women. For example, men are better at throwing, but women are more dexterous.
Men are better at mentally rotating shapes; women are better at visual memory. Men are better at mathematical problem-solving; women are better at mathematical calculation. And so on. But there are at least six differences that are relevant to the datum we have been discussing. The literature on these differences is so enormous that I can only touch on a fraction of it. I'll restrict my discussion to a few examples in which there are enormous data sets, or there are meta-analyses that boil down a literature. [60][pinker_Page_23.jpg] The first difference, long noted by economists studying employment practices, is that men and women differ in what they state are their priorities in life. To sum it up: men, on average, are more likely to chase status at the expense of their families; women give a more balanced weighting. Once again: Think statistics! The finding is not that women value family and don't value status. It is not that men value status and don't value family. Nor does the finding imply that every last woman has the asymmetry that women show on average or that every last man has the asymmetry that men show on average. But in large data sets, on average, an asymmetry is what you find. [61][pinker_Page_24.jpg] Just one example. In a famous long-term study of mathematically precocious youth, 1,975 youngsters were selected in 7th grade for being in the top 1% of ability in mathematics, and then followed up for more than two decades. These men and women are certainly equally talented. And if anyone has ever been encouraged in math and science, these kids were. Both genders: they are equal in their levels of achievement, and they report being equally satisfied with the course of their lives. Nonetheless there are statistical differences in what they say is important to them.
There are some things in life that the females rated higher than males, such as the ability to have a part-time career for a limited time in one's life; living close to parents and relatives; having a meaningful spiritual life; and having strong friendships. And there are some things in life that the males rated higher than the females. They include having lots of money; inventing or creating something; having a full-time career; and being successful in one's line of work. It's worth noting that studies of highly successful people find that single-mindedness and competitiveness are recurring traits in geniuses (of both sexes). [62][pinker_Page_25.jpg] Here is one other figure from this data set. As you might expect, this sample has a lot of people who like to work Herculean hours. Many people in this group say they would like to work 50, 60, even 70 hours a week. But there are also slight differences. At each one of these high numbers of hours there are slightly more men than women who want to work that much. That is, more men than women don't care about whether they have a life. [63][pinker_Page_26.jpg] Second, interest in people versus things and abstract rule systems. There is a staggering amount of data on this trait, because there is an entire field that studies people's vocational interests. I bet most of the people in this room have taken a vocational interest test at some point in their lives. And this field has documented that there are consistent differences in the kinds of activities that appeal to men and women in their ideal jobs. I'll just discuss one of them: the desire to work with people versus things. There is an enormous average difference between women and men in this dimension, about one standard deviation. [64][pinker_Page_27.jpg] And this difference in interests will tend to cause people to gravitate in slightly different directions in their choice of career. 
The occupation that fits best with the "people" end of the continuum is "director of a community services organization." The occupations that fit best with the "things" end are physicist, chemist, mathematician, computer programmer, and biologist. We see this consequence not only in the choice of whether to go into science, but also in the choice of which branch of science the two sexes tend to go into. Needless to say, from 1970 to 2002 there was a huge increase in the percentage of university degrees awarded to women. But the percentage still differs dramatically across fields. Among the Ph.D.s awarded in 2001, for example, in education 65% of the doctorates went to women; in the social sciences, 54%; in the life sciences, 47%; in the physical sciences, 26%; in engineering, 17%. This is completely predictable from the difference in interests between people and living things, on the one hand, and inanimate objects, on the other. And the pattern is pretty much the same in 1980 and 2001, despite the change in absolute numbers. [65][pinker_Page_28.jpg] [66][pinker_Page_29.jpg] Third, risk. Men are by far the more reckless sex. In a large meta-analysis involving 150 studies and 100,000 participants, in 14 out of 16 categories of risk-taking, men were over-represented. The two sexes were equally represented in the other two categories, one of which was smoking, for obvious reasons. And two of the largest sex differences were in "intellectual risk taking" and "participation in a risky experiment." We see this sex difference in everyday life, in particular, in the following category: the Darwin Awards, "commemorating those individuals who ensure the long-term survival of our species by removing themselves from the gene pool in a sublimely idiotic fashion." Virtually all -- perhaps all -- of the winners are men.
[67][pinker_Page_30.jpg] [68][pinker_Page_31.jpg] Fourth, three-dimensional mental transformations: the ability to determine whether the drawings in each of these pairs depict the same 3-dimensional shape. Again I'll appeal to a meta-analysis, this one containing 286 data sets and 100,000 subjects. The authors conclude, "we have specified a number of tests that show highly significant sex differences that are stable across age, at least after puberty, and have not decreased in recent years." Now, as I mentioned, for some kinds of spatial ability, the advantage goes to women, but in "mental rotation," "spatial perception," and "spatial visualization" the advantage goes to men. [69][pinker_Page_32.jpg] [70][pinker_Page_33.jpg] Now, does this have any relevance to scientific achievement? We don't know for sure, but there's some reason to think that it does. In psychometric studies, three-dimensional spatial visualization is correlated with mathematical problem-solving. And mental manipulation of objects in three dimensions figures prominently in the memoirs and introspections of most creative physicists and chemists, including Faraday, Maxwell, Tesla, Kekulé, and Lawrence, all of whom claim to have hit upon their discoveries by dynamic visual imagery and only later set them down in equations. A typical introspection is the following: "The psychical entities which seem to serve as elements in my thought are certain signs and more or less clear images which can be voluntarily reproduced and combined. This combinatory play seems to be the essential feature in productive thought before there is any connection with logical construction in words or other kinds of signs." The quote comes from this fairly well-known physicist. [71][pinker_Page_34.jpg] [72][pinker_Page_35.jpg] [73][pinker_Page_36.jpg] [74][pinker_Page_37.jpg] Fifth, mathematical reasoning. Girls and women get better school grades in mathematics and pretty much everything else these days.
And women are better at mathematical calculation. But consistently, men score better on mathematical word problems and on tests of mathematical reasoning, at least statistically. Again, here is a meta-analysis, with 254 data sets and 3 million subjects. It shows no significant difference in childhood; this is a difference that emerges around puberty, like many secondary sexual characteristics. But there are sizable differences in adolescence and adulthood, especially in high-end samples. Here is an example of the average SAT mathematical scores, showing a 40-point difference in favor of men that's pretty much consistent from 1972 to 1997. In the Study of Mathematically Precocious Youth (in which 7th graders were given the SAT, which of course ordinarily is administered only to older, college-bound kids), the ratio of those scoring over 700 is 2.8 to 1 male to female. (Admittedly, and interestingly, that's down from 25 years ago, when the ratio was 13 to 1, and perhaps we can discuss some of the reasons.) At the 760 cutoff, the ratio nowadays is 7 males to 1 female. [75][pinker_Page_38.jpg] Now why is there a discrepancy with grades? Do SATs and other tests of mathematical reasoning aptitude underpredict grades, or do grades overpredict high-end aptitude? At the Radical Forum Liz was completely explicit about which side she takes, saying that "the tests are no good," unquote. But if the tests are really so useless, why does every major graduate program in science still use them -- including the very departments at Harvard and MIT in which Liz and I have selected our own graduate students? [76][pinker_Page_39.jpg] I think the reason is that school grades are affected by homework and by the ability to solve the kinds of problems that have already been presented in lecture and textbooks. Whereas the aptitude tests are designed to test the application of mathematical knowledge to unfamiliar problems.
And this, of course, is closer to the way that math is used in actually doing math and science. Indeed, contrary to Liz, and the popular opinion of many intellectuals, the tests are surprisingly good. There is an enormous amount of data on the predictive power of the SAT. For example, people in science careers overwhelmingly scored in the 90th percentile on the SAT or GRE math test. And the tests predict earnings, occupational choice, doctoral degrees, the prestige of one's degree, the probability of having a tenure-track position, and the number of patents. Moreover, this predictive power is the same for men and for women. As for why there is that underprediction of grades -- a slight under-prediction, one-tenth of a standard deviation -- the Educational Testing Service did a study on that phenomenon, and were able to explain the mystery by a combination of the choice of major, which differs between the sexes, and the greater conscientiousness of women. [77][pinker_Page_40.jpg] Finally there's a sex difference in variability. It's crucial here to look at the right samples. Estimates of variance depend highly on the tails of the distribution, which by definition contain smaller numbers of people. Since people at the tails of the distribution in many surveys are likely to be weeded out for various reasons, it's important to have large representative samples from national populations. In this regard the gold standard is the Science paper by Nowell and Hedges, which reported six large stratified probability samples. They found that in 35 out of 37 tests, including all of the tests in math, space, and science, the male variance was greater than the female variance. [78][pinker_Page_41.jpg] One other data set meeting the gold standard is displayed in this graph, showing the entire population of Scotland, who all took an intelligence test in a single year. The X axis represents IQ, where the mean is 100, and the Y axis represents the proportion of men versus women.
As you can see, these are extremely orderly data. In the middle part of the range, females predominate; at both extremes, males slightly predominate. Needless to say, there is a large percentage of women at both ends of the scale -- but there is also a large sex difference. [79][pinker_Page_42.jpg] Now the fact that these six gender differences exist does not mean that they are innate. This of course is a much more difficult issue to resolve. A necessary preamble to this discussion is that nature and nurture are not alternatives; it is possible that the explanation for a given sex difference involves some of each. The only issue is whether the contribution of biology is greater than zero. I think that there are ten kinds of evidence that the contribution of biology is greater than zero, though of course it is nowhere near 100 percent. [80][pinker_Page_43.jpg] [81][pinker_Page_44.jpg] First, there are many biological mechanisms by which a sex difference could occur. There are large differences between males and females in levels of sex hormones, especially prenatally, in the first six months of life, and in adolescence. There are receptors for hormones all over the brain, including the cerebral cortex. There are many small differences in men's and women's brains, including the overall size of the brain (even correcting for body size), the density of cortical neurons, the degree of cortical asymmetry, the size of hypothalamic nuclei, and several others. [82][pinker_Page_45.jpg] Second, many of the major sex differences -- certainly some of them, maybe all of them -- are universal. The idea that there are cultures out there somewhere in which everything is the reverse of here turns out to be an academic legend. 
In his survey of the anthropological literature called Human Universals, the anthropologist Donald Brown points out that in all cultures men and women are seen as having different natures; that there is a greater involvement of women in direct child care; more competitiveness in various measures for men than for women; and a greater spatial range traveled by men than by women. In personality, we have a cross-national survey (if not a true cross-cultural one) in Feingold's meta-analysis, which noted that gender differences in personality are consistent across ages, years of data collection, educational levels, and nations. When it comes to spatial manipulation and mathematical reasoning, we have fewer relevant data, and we honestly don't have true cross-cultural surveys, but we do have cross-national surveys. David Geary and Catherine DeSoto found the expected sex difference in mental rotation in ten European countries and in Ghana, Turkey, and China. Similarly, Diane Halpern, analyzing results from ten countries, said that "the majority of the findings show amazing cross-cultural consistency when comparing males and females on cognitive tests." [83][pinker_Page_46.jpg] Third, stability over time. Surveys of life interests and personality have shown little or no change in the two generations that have come of age since the second wave of feminism. There is also, famously, resistance to change in communities that, for various ideological reasons, were dedicated to stamping out sex differences, and found they were unable to do so. These include the Israeli kibbutz, various American utopian communes a century ago, and contemporary androgynous academic couples. [84][pinker_Page_47.jpg] In tests of mental rotation, the meta-analysis by Voyer et al. found no change over time. In mathematical reasoning there has been a decline in the size of the difference, although it has certainly not disappeared. Fourth, many sex differences can be seen in other mammals. 
It would be an amazing coincidence if these differences just happened to be replicated in the arbitrary choices made by human cultures at the dawn of time. There are large differences between males and females in many mammals in aggression, in investment in offspring, in play aggression versus play parenting, and in range size, which predicts a species' sex differences in spatial ability (such as in solving mazes), at least in polygynous species, which is how the human species is classified. Many primate species even show a sex difference in their interest in physical objects versus conspecifics, a difference seen in their patterns of juvenile play. Among baby vervet monkeys, the males even prefer to play with trucks and the females with other kinds of toys! [85][pinker_Page_48.jpg] [86][pinker_Page_49.jpg] Fifth, many of these differences emerge in early childhood. It is said that there is a technical term for people who believe that little boys and little girls are born indistinguishable and are molded into their natures by parental socialization. The term is "childless." Some sex differences seem to emerge even in the first week of life. Girls respond more to sounds of distress, and girls make more eye contact than boys. And in a study that I know Liz disputes and that I hope we'll talk about, newborn boys were shown to be more interested in looking at a physical object than a face, whereas newborn girls were shown to be more interested in looking at a face than a physical object. A bit later in development there are vast and robust differences between boys and girls, seen all over the world. Boys far more often than girls engage in rough-and-tumble play, which involves aggression, physical activity, and competition. Girls far more often engage in cooperative play. Girls engage much more often in play parenting. And yes, boys the world over turn anything into a vehicle or a weapon, and girls turn anything into a doll. 
There are sex differences in intuitive psychology, that is, how well children can read one another's minds. For instance, several large studies show that girls are better than boys in solving the "false belief task," and in interpreting the mental states of characters in stories. [87][pinker_Page_50.jpg] Sixth, genetic boys brought up as girls. In a famous 1970s incident called the John/Joan case, one member of a pair of identical twin boys lost his penis in a botched circumcision (I was relieved to learn that this was not done by a mohel, but by a bumbling surgeon). Following advice from the leading gender expert of the time, the parents agreed to have the boy castrated, given female-specific hormones, and brought up as a girl. All this was hidden from him throughout his childhood. When I was an undergraduate the case was taught to me as proof of how gender roles are socially acquired. But it turned out that the facts had been suppressed. When "Joan" and her family were interviewed years later, it turned out that from the youngest ages he exhibited boy-typical patterns of aggression and rough-and-tumble play, rejected girl-typical activities, and showed a greater interest in things than in people. At age 14, when he was suffering from depression, his father finally told him the truth. He underwent further surgery, married a woman, adopted two children, and got a job in a slaughterhouse. Nor is this a unique instance. In a condition called cloacal exstrophy, genetic boys are sometimes born without normal male genitalia. When they are castrated and brought up as girls, in 25 out of 25 documented instances they have felt that they were boys trapped in girls' bodies, and showed male-specific patterns of behavior such as rough-and-tumble play. [88][pinker_Page_51.jpg] Seventh, a lack of differential treatment by parents and teachers. These conclusions come as a shock to many people. 
One line of evidence comes from Lytton and Romney's meta-analysis of sex-specific socialization involving 172 studies and 28,000 children, in which they looked both at parents' reports and at direct observations of how parents treat their sons and daughters -- and found few or no differences among contemporary Americans. In particular, there was no difference in the categories "Encouraging Achievement" and "Encouraging Achievement in Mathematics." There is a widespread myth that teachers (who of course are disproportionately female) are dupes who perpetuate gender inequities by failing to call on girls in class, and who otherwise have low expectations of girls' performance. In fact Jussim and Eccles, in a study of 100 teachers and 1,800 students, concluded that teachers seemed to be basing their perceptions of students on those students' actual performances and motivation. [89][pinker_Page_52.jpg] Eighth, studies of prenatal sex hormones: the mechanism that makes boys boys and girls girls in the first place. There is evidence, admittedly squishy in parts, that differences in prenatal hormones make a difference in later thought and behavior even within a given sex. In the condition called congenital adrenal hyperplasia, girls in utero are subjected to an increased dose of androgens, which is neutralized postnatally. But when they grow up they have male-typical toy preferences -- trucks and guns -- compared to other girls, male-typical play patterns, more competitiveness, less cooperativeness, and male-typical occupational preferences. However, research on their spatial abilities is inconclusive, and I cannot honestly say that there are replicable demonstrations that CAH women have male-typical patterns of spatial cognition. 
[90][pinker_Page_53.jpg] Similarly, variations in fetal testosterone, studied in various ways, show that fetal testosterone has a nonmonotonic relationship to reduced eye contact and face perception at 12 months, to reduced vocabulary at 18 months, to reduced social skills and greater narrowness of interest at 48 months, and to enhanced mental rotation abilities in the school-age years. [91][pinker_Page_54.jpg] Ninth, circulating sex hormones. I'm going to go over this slide pretty quickly because the literature is a bit messy. Though it's possible that all claims of the effects of hormones on cognition will turn out to be bogus, I suspect something will be salvaged from this somewhat contradictory literature. There are, in any case, many studies showing that testosterone levels in the low-normal male range are associated with better abilities in spatial manipulation. And in a variety of studies in which estrogens are compared or manipulated, there is evidence, admittedly disputed, for statistical changes in the strengths and weaknesses in women's cognition during the menstrual cycle, possibly a counterpart to the changes in men's abilities during their daily and seasonal cycles of testosterone. [92][pinker_Page_55.jpg] My last kind of evidence: imprinted X chromosomes. In the past fifteen years an entirely separate genetic system capable of implementing sex differences has been discovered. In the phenomenon called genetic imprinting, studied by David Haig and others, a chromosome such as the X chromosome can be altered depending on whether it was passed on from one's mother or from one's father. This makes a difference in the condition called Turner syndrome, in which a child has just one X chromosome, but can get it either from her mother or her father. When she inherits an X that is specific to girls (the paternal X, which fathers pass only to daughters), on average she has a better vocabulary and better social skills, and is better at reading emotions, at reading body language, and at reading faces. 
[93][pinker_Page_56.jpg] A remark on stereotypes, and then I'll finish. Are these stereotypes? Yes, many of them are (although, I must add, not all of them -- for example, women's superiority in spatial memory and mathematical calculation). There seems to be a widespread assumption that if a sex difference conforms to a stereotype, the difference must have been caused by the stereotype, via differential expectations for boys and for girls. But of course the causal arrow could go in either direction: stereotypes might reflect differences rather than cause them. In fact there's an enormous literature in cognitive psychology which says that people can be good intuitive statisticians when forming categories and that their prototypes for conceptual categories track the statistics of the natural world pretty well. For example, there is a stereotype that basketball players are taller on average than jockeys. But that does not mean that basketball players grow tall, and jockeys shrink, because we expect them to have certain heights! Likewise, Alice Eagly and Jussim and Eccles have shown that most of people's gender stereotypes are in fact pretty accurate. Indeed the error people make is in the direction of underpredicting sex differences. [94][pinker_Page_57.jpg] To sum up: I think there is more than "a shred of evidence" for sex differences that are relevant to statistical gender disparities in elite hard science departments. There are reliable average differences in life priorities, in an interest in people versus things, in risk-seeking, in spatial transformations, in mathematical reasoning, and in variability in these traits. And there are ten kinds of evidence that these differences are not completely explained by socialization and bias, although they surely are in part. [95][pinker_Page_58.jpg] A concluding remark. 
None of this provides grounds for ignoring the biases and barriers that do keep women out of science, as long as we keep in mind the distinction between fairness on the one hand and sameness on the other. And I will give the final word to Gloria Steinem: "there are very few jobs that actually require a penis or a vagina, and all the other jobs should be open to both sexes." _________________________________________________________________ [gender_logo.jpg] Elizabeth Spelke _________________________________________________________________ [96][spelke_Page_01.jpg] [These are Liz's PowerPoint slides.] (ELIZABETH SPELKE:) Thanks, especially to Steve; I'm really glad we're able to have this debate, I've been looking forward to it. [97][spelke_Page_02.jpg] I want to start by talking about the points of agreement between Steve and me, and as he suggested, there are many. If we got away from the topic of sex and science, we'd be hard-pressed to find issues that we disagree on. Here are a few of the points of agreement that are particularly relevant to the discussions of the last few months. First, we agree that both our society in general and our university in particular will be healthiest if all opinions can be put on the table and debated on their merits. We also agree that claims concerning sex differences are empirical, that they should be evaluated by evidence, and that we'll all be happier and live longer if we can undertake that evaluation as dispassionately and rationally as possible. We agree that the mind is not a blank slate; in fact one of the deepest things that Steve and I agree on is that there is such a thing as human nature, and it is a fascinating and exhilarating experience to study it. And finally, I think we agree that the role of scientists in society is rather modest. Scientists find things out. The much more difficult questions of how to use that information, live our lives, and structure our societies are not questions that science can answer. 
Those are questions that everybody must consider. So where do we disagree? [98][spelke_Page_03.jpg] We disagree on the answer to the question, why in the world are women scarce as hens' teeth on Harvard's mathematics faculty and at other similar institutions? In the current debate, two classes of factors have been said to account for this difference. In one class are social forces, including overt and covert discrimination and social influences that lead men and women to develop different skills and different priorities. In the other class are genetic differences that predispose men and women to have different capacities and to want different things. In his book, The Blank Slate, and again today, Steve argued that social forces are over-rated as causes of gender differences. Intrinsic differences in aptitude are a larger factor, and intrinsic differences in motives are the biggest factor of all. Most of the examples that Steve gave concerned what he takes to be biologically based differences in motives. My own view is different. I think the big forces causing this gap are social factors. There are no differences in overall intrinsic aptitude for science and mathematics between women and men. Notice that I am not saying the genders are indistinguishable, that men and women are alike in every way, or even that men and women have identical cognitive profiles. I'm saying that when you add up all the things that men are good at, and all the things that women are good at, there is no overall advantage for men that would put them at the top of the fields of math and science. On the issue of motives, I think we're not in a position to know whether the different things that men and women often say they want stem only from social forces, or in part from intrinsic sex differences. I don't think we can know that now. I want to start with the issue that's clearly the biggest source of debate between Steve and me: the issue of differences in intrinsic aptitude. 
This is the only issue that my own work and professional knowledge bear on. Then I will turn to the social forces, as a lay person as it were, because I think they are exerting the biggest effects. Finally, I'll consider the question of intrinsic motives, which I hope we'll come back to in our discussion. [99][spelke_Page_04.jpg] Over the last months, we've heard three arguments that men have greater cognitive aptitude for science. The first argument is that from birth, boys are interested in objects and mechanics, and girls are interested in people and emotions. The predisposition to figure out the mechanics of the world sets boys on a path that makes them more likely to become scientists or mathematicians. The second argument assumes, as Galileo told us, that science is conducted in the language of mathematics. According to this claim, males are intrinsically better at mathematical reasoning, including spatial reasoning. The third argument is that men show greater variability than women, and as a result there are more men at the extreme upper end of the ability distribution from which scientists and mathematicians are drawn. Let me take these claims one by one. [100][spelke_Page_05.jpg] The first claim, as Steve said, is gaining new currency from the work of Simon Baron-Cohen. It's an old idea, presented with some new language. Baron-Cohen says that males are innately predisposed to learn about objects and mechanical relationships, and this sets them on a path to becoming what he calls "systematizers." Females, on the other hand, are innately predisposed to learn about people and their emotions, and this puts them on a path to becoming "empathizers." Since systematizing is at the heart of math and science, boys are more apt to develop the knowledge and skills that lead to math and science. To anyone as old as I am who has been following the literature on sex differences, this may seem like a surprising claim. 
The classic reference on the nature and development of sex differences is a book by Eleanor Maccoby and Carol Jacklin that came out in the 1970s. They reviewed evidence for all sorts of sex differences, across large numbers of studies, but they also concluded that certain ideas about differences between the genders were myths. At the top of their list of myths was the idea that males are primarily interested in objects and females are primarily interested in people. They reviewed an enormous literature, in which babies were presented with objects and people to see if they were more interested in one than the other. They concluded that there were no sex differences in these interests. That conclusion, however, was drawn in the early 70s. At that time, we didn't know much about babies' understanding of objects and people, or how their understanding grows. Since Baron-Cohen's claims concern differential predispositions to learn about different kinds of things, you could argue that the claims hadn't been tested in Maccoby and Jacklin's time. What does research now show? [101][spelke_Page_06.jpg] Let me take you on a whirlwind tour of 30 years of research in one PowerPoint slide. From birth, babies perceive objects. They know where one object ends and the next one begins. They can't see objects as well as we can, but as they grow their object perception becomes richer and more differentiated. Babies also start with rudimentary abilities to represent that an object continues to exist when it's out of view, and they hold onto those representations longer, and over more complicated kinds of changes, as they grow. Babies make basic inferences about object motion: inferences like, the force with which an object is hit determines the speed with which it moves. These inferences undergo regular developmental changes over the infancy period. In each of these cases, there is systematic developmental change, and there's variability. 
Because of this variability, we can compare the abilities of male infants to those of female infants. Do we see sex differences? The research gives a clear answer to this question: We don't. [102][spelke_Page_07.jpg] Male and female infants are equally interested in objects. Male and female infants make the same inferences about object motion, at the same time in development. They learn the same things about object mechanics at the same time. Across large numbers of studies, occasionally a study will favor one sex over the other. For example, girls learn a month earlier than boys that the force with which something is hit influences the distance it moves. But these differences are small and scattered. For the most part, we see high convergence across the sexes. Common paths of learning continue through the preschool years, as kids start manipulating objects to see if they can get a rectangular block into a circular hole. If you look at the rates at which boys and girls figure these things out, you don't find any differences. We see equal developmental paths. I think this research supports an important conclusion. In discussions of sex differences, we need to ask what's common across the two sexes. One thing that's common is that infants don't divide up the labor of understanding the world, with males focusing on mechanics and females focusing on emotions. Male and female infants are both interested in objects and in people, and they learn about both. The conclusions that Maccoby and Jacklin drew in the early 1970s are well supported by research since that time. [103][spelke_Page_08.jpg] Let me turn to the second claim. People may have equal abilities to develop intuitive understanding of the physical world, but formal math and science don't build on these intuitions. Scientists use mathematics to come up with new characterizations of the world and new principles to explain its functioning. 
Maybe males have an edge in scientific reasoning because of their greater talent for mathematics. [104][spelke_Page_09.jpg] As Steve said, formal mathematics is not something we have evolved to do; it's a recent accomplishment. Animals don't do formal math or science, and neither did humans back in the Pleistocene. If there is a biological basis for our mathematical reasoning abilities, it must depend on systems that evolved for other purposes, but that we've been able to harness for the new purpose of representing and manipulating numbers and geometry. [105][spelke_Page_10.jpg] Research from the intersecting fields of cognitive neuroscience, neuropsychology, cognitive psychology, and cognitive development provides evidence for five "core systems" at the foundations of mathematical reasoning. The first is a system for representing small exact numbers of objects -- the difference between one, two, and three. This system emerges in human infants at about five months of age, and it continues to be present in adults. The second is a system for discriminating large, approximate numerical magnitudes -- the difference between a set of about ten things and a set of about twenty things. That system also emerges early in infancy, at four or five months, and continues to be present and functional in adults. The third system is probably the first uniquely human foundation for numerical abilities: the system of natural number concepts that we construct as children when we learn verbal counting. That construction takes place between about the ages of two and a half and four years. The last two systems are first seen in children when they navigate. One system represents the geometry of the surrounding layout. The other system represents landmark objects. All five systems have been studied quite extensively in large numbers of male and female infants. We can ask, are there sex differences in the development of any of these systems at the foundations of mathematical thinking? 
Again, the answer is no. I will show you data from just two cases. [106][spelke_Page_11.jpg] The first is the development of natural number concepts, constructed by children between the ages of two and four. At any particular time in this period, you'll find a lot of variability. For example, between the ages of three and three and a half years, some children have only figured out the meaning of the word "one" and can only distinguish the symbolic concept one from all other numbers. Other kids have figured out the meanings of all the words in the count list up to "ten" or more, and they can use all of them in a meaningful way. Most kids are somewhere in between: they have figured out the first two symbols, or the first three, and so forth. When you compare children's performance by sex, you see no hint of a superiority of males in constructing natural number concepts. [107][spelke_Page_12.jpg] The other example comes from studies that I think are the closest thing in preschool children to the mental rotation tests conducted with adults. In these studies, children are brought into a room of a given shape, something is hidden in a corner, and then their eyes are closed and they're spun around. They have to remember the shape of the room, open their eyes, and figure out how to reorient themselves toward the corner where the object was hidden. If you test a group of 4-year-olds, you find they can do this task well above chance but not perfectly; there's a range of performance. When you break that performance down by gender, again there is not a hint of an advantage for boys over girls. [108][spelke_Page_13.jpg] These findings and others support two important points. First, indeed there is a biological foundation to mathematical and scientific reasoning. We are endowed with core knowledge systems that emerge prior to any formal instruction and that serve as a basis for mathematical thinking. Second, these systems develop equally in males and females. 
Ten years ago, the evolutionary psychologist and sex-difference researcher David Geary reviewed the literature that was available at that time. He concluded that there were no sex differences in "primary abilities" underlying mathematics. What we've learned in the last ten years continues to support that conclusion. [109][spelke_Page_14.jpg] Sex differences do emerge at older ages. Because they emerge later in childhood, it's hard to tease apart their biological and social sources. But before we attempt that task, let's ask what the differences are. I think the following is a fair statement, both of the cognitive differences that Steve described and of others. When people are presented with a complex task that can be solved through multiple different strategies, males and females sometimes differ in the strategy that they prefer. For example, if a task can only be solved by representing the geometry of the layout, we do not see a difference between men and women. But if the task can be accomplished either by representing geometry or by representing individual landmarks, girls tend to rely on the landmarks, and boys on the geometry. To take another example, when you compare the shapes of two objects of different orientations, there are two different strategies you can use. You can attempt a holistic rotation of one of the objects into registration with the other, or you can do point-by-point featural comparisons of the two objects. Men are more likely to do the first; women are more likely to do the second. Finally, the mathematical word problems on the SAT-M very often allow multiple solutions. Both item analyses and studies of high school students engaged in the act of solving such problems suggest that when students have the choice of solving a problem by plugging in a formula or by doing Venn-diagram-like spatial reasoning, girls tend to do the first and boys tend to do the second. 
[110][spelke_Page_15.jpg] Because of these differences, males and females sometimes show differing cognitive profiles on timed tests. When you have to solve problems fast, some strategies will be faster than others. Thus, females perform better at some verbal, mathematical, and spatial tasks, and males perform better at other verbal, mathematical, and spatial tasks. This pattern of differing profiles is not well captured by the generalization, often bandied about in the popular press, that women are "verbal" and men are "spatial." There doesn't seem to be any more evidence for that than there was for the idea that women are people-oriented and men are object-oriented. Rather, the differences are more subtle. Does one of these two profiles foster better learning of math than the other? In particular, is the male profile better suited to high-level mathematical reasoning? [111][spelke_Page_16.jpg] At this point, we face a question that's been much discussed in the literature on mathematics education and mathematical testing. The question is, by what yardstick can we decide whether men or women are better at math? Some people suggest that we look at performance on the SAT-M, the quantitative portion of the Scholastic Assessment Test. But this suggestion raises a problem of circularity. The SAT test is composed of many different types of items. Some of those items are solved better by females. Some are solved better by males. The people who make the test have to decide how many items of each type to include. Depending on how they answer that question, they can create a test that makes women look like better mathematicians, or a test that makes men look like better mathematicians. What's the right solution? 
Books are devoted to this question, with much debate, but there seems to be a consensus on one point: The only way to come up with a test that's fair is to develop an independent understanding of what mathematical aptitude is and how it's distributed between men and women. But in that case, we can't use performance on the SAT to give us that understanding. We've got to get that understanding in some other way. So how are we going to get it? A second strategy is to look at job outcomes. Maybe the people who are better at mathematics are those who pursue more mathematically intensive careers. But this strategy raises two problems. First, which mathematically intensive jobs should we choose? If we choose engineering, we will conclude that men are better at math because more men become engineers. If we choose accounting, we will think that women are better at math because more women become accountants: 57% of current accountants are women. So which job are we going to pick, to decide who has more mathematical talent? These two examples suggest a deeper problem with job outcomes as a measure of mathematical talent. Surely you've got to be good at math to land a mathematically intensive job, but talent in mathematics is only one of the factors influencing career choice. It can't be our gold standard for mathematical ability. So what can be? I suggest the following experiment. We should take a large number of male students and a large number of female students who have equal educational backgrounds, and present them with the kinds of tasks that real mathematicians face. We should give them new mathematical material that they have not yet mastered, and allow them to learn it over an extended period of time: the kind of time scale that real mathematicians work on. We should ask, how well do the students master this material? The good news is, this experiment is done all the time. It's called high school and college. Here's the outcome. 
In high school, girls and boys now take equally many math classes, including the most advanced ones, and girls get better grades. In college, women earn almost half of the bachelor's degrees in mathematics, and men and women get equal grades. Here I respectfully disagree with one thing that Steve said: men and women get equal grades, even when you only compare people within a single institution and a single math class. Equating for classes, men and women get equal grades. The outcome of this large-scale experiment gives us every reason to conclude that men and women have equal talent for mathematics. Here, I too would like to quote Diane Halpern. Halpern reviews much evidence for sex differences, but she concludes, "differences are not deficiencies." Men and women have equal aptitude for mathematics. Yes, there are sex differences, but they don't add up to an overall advantage for one sex over the other.

Let me turn to the third claim, that men show greater variability, either in general or in quantitative abilities in particular, and so there are more men at the upper end of the ability distribution. I can go quickly here, because Steve has already talked about the work of Camilla Benbow and Julian Stanley, focusing on mathematically precocious youth who are screened at the age of 13, put in intensive accelerated programs, and then followed up to see what they achieve in mathematics and other fields. As Steve said, students were screened at age 13 by the SAT, and there were many more boys than girls who scored at the highest levels on the SAT-M. In the 1980s, the disparity was almost 13 to 1. It is now substantially lower, but there still are more boys among the very small subset of people from this large, talented sample who scored at the very upper end. Based on these data, Benbow and Stanley concluded that there are more boys than girls in the pool from which future mathematicians will be drawn.
But notice the problem with this conclusion: It's based entirely on the SAT-M. This test, and the disparity it revealed, are themselves in need of explanation; we need a firmer yardstick for assessing and understanding gender differences in this talented population.

Fortunately, Benbow, Stanley and Lubinski have collected much more data on these mathematically talented boys and girls: not just the ones with top scores on one timed test, but rather the larger sample of girls and boys who were accelerated and followed over time. Let's look at some of the key things that they found. First, they looked at college performance by the talented sample. They found that the males and females took equally demanding math classes and majored in math in equal numbers. More girls majored in biology and more boys in physics and engineering, but equal numbers of girls and boys majored in math. And they got equal grades. The SAT-M not only under-predicts the performance of college women in general, it also under-predicted the college performance of women in the talented sample. These women and men have been shown to be equally talented by the most meaningful measure we have: their ability to assimilate new, challenging material in demanding mathematics classes at top-flight institutions. By that measure, the study does not find any difference between highly talented girls and boys.

So, what's causing the gender imbalance on faculties of math and science? Not differences in intrinsic aptitude. Let's turn to the social factors that I think are much more important. Because I'm venturing outside my own area of work, and because time is short, I won't review all of the social factors producing differential success of men and women. I will talk about just one effect: how gender stereotypes influence the ways in which males and females are perceived. Let me start with studies of parents' perceptions of their own children.
Steve said that parents report that they treat their children equally. They treat their boys and girls alike, and they encourage them to equal extents, for they want both their sons and their daughters to succeed. This is no doubt true. But how are parents perceiving their kids? Some studies have interviewed parents just after the birth of their child, at the point where the first question that 80% of parents ask -- is it a boy or a girl? -- has been answered. Parents of boys describe their babies as stronger, heartier, and bigger than parents of girls. The investigators also looked at the babies' medical records and asked whether there really were differences between the boys and girls in weight, strength, or coordination. The boys and girls were indistinguishable in these respects, but the parents' descriptions were different. At 12 months of age, girls and boys show equal abilities to walk, crawl, or clamber. But before one study, Karen Adolph, an investigator of infants' locomotor development, asked parents to predict how well their child would do on a set of crawling tasks: Would the child be able to crawl down a sloping ramp? Parents of sons were more confident that their child would make it down the ramp than parents of daughters. When Adolph tested the infants on the ramp, there was no difference whatever between the sons and daughters, but there was a difference in the parents' predictions. My third example, moving up in age, comes from the studies of Jackie Eccles. She asked parents of boys and girls in sixth grade, how talented do you think your child is in mathematics? Parents of sons were more likely to judge that their sons had talent than parents of daughters. A panoply of objective measures, including math grades in school, performance on standardized tests, teachers' evaluations, and children's expressed interest in math, revealed no differences between the girls and boys. 
Still, there was a difference in parents' perception of their child's intangible talent. Other studies have shown a similar effect for science. There's clearly a mismatch between what parents perceive in their kids and what objective measures reveal. But is it possible that the parents are seeing something that the objective measures are missing? Maybe the boy getting B's in his math class really is a mathematical genius, and his mom or dad has sensed that. To eliminate that possibility, we need to present observers with the very same baby, or child, or Ph.D. candidate, and manipulate their belief about the person's gender. Then we can ask whether their belief influences their perception.

It's hard to do these studies, but there are examples, and I will describe a few of them. A bunch of studies take the following form: you show a group of parents, or college undergraduates, video-clips of babies that they don't know personally. For half of them you give the baby a male name, and for the other half you give the baby a female name. (Male and female babies don't look very different.) The observers watch the baby and then are asked a series of questions: What is the baby doing? What is the baby feeling? How would you rate the baby on a dimension like strong-to-weak, or more intelligent to less intelligent? There are two important findings. First, when babies do something unambiguous, reports are not affected by the baby's gender. If the baby clearly smiles, everybody says the baby is smiling or happy. Perception of children is not pure hallucination. Second, children often do things that are ambiguous, and parents face questions whose answers aren't easily readable off their child's overt behavior. In those cases, you see some interesting gender labeling effects. For example, in one study a child on a video-clip was playing with a jack-in-the-box. It suddenly popped up, and the child was startled and jumped backward.
When people were asked, what's the child feeling, those who were given a female label said, "she's afraid." But the ones given a male label said, "he's angry." Same child, same reaction, different interpretation. In other studies, children with male names were more likely to be rated as strong, intelligent, and active; those with female names were more likely to be rated as little, soft, and so forth.

I think these perceptions matter. You, as a parent, may be completely committed to treating your male and female children equally. But no sane parents would treat a fearful child the same way they treat an angry child. If knowledge of a child's gender affects adults' perception of that child, then male and female children are going to elicit different reactions from the world, different patterns of encouragement. These perceptions matter, even in parents who are committed to treating sons and daughters alike.

I will give you one last version of a gender-labeling study. This one hits particularly close to home. The subjects in the study were people like Steve and me: professors of psychology, who were sent some vitas to evaluate as applicants for a tenure track position. Two different vitas were used in the study. One was a vita of a walk-on-water candidate, best candidate you've ever seen, you would die to have this person on your faculty. The other vita was a middling, average vita among successful candidates. For half the professors, the name on the vita was male, for the other half the name was female. People were asked a series of questions: What do you think about this candidate's research productivity? What do you think about his or her teaching experience? And finally, Would you hire this candidate at your university? For the walk-on-water candidate, there was no effect of gender labeling on these judgments.
I think this finding supports Steve's view that we're dealing with little overt discrimination at universities. It's not as if professors see a female name on a vita and think, I don't want her. When the vita's great, everybody says great, let's hire. What about the average successful vita, though: that is to say, the kind of vita that professors most often must evaluate? In that case, there were differences. The male was rated as having higher research productivity. These psychologists, Steve's and my colleagues, looked at the same number of publications and thought, "good productivity" when the name was male, and "less good productivity" when the name was female. Same thing for teaching experience. The very same list of courses was seen as good teaching experience when the name was male, and less good teaching experience when the name was female. In answer to the question would they hire the candidate, 70% said yes for the male, 45% for the female. If the decision were made by majority rule, the male would get hired and the female would not. A couple other interesting things came out of this study. The effects were every bit as strong among the female respondents as among the male respondents. Men are not the culprits here. There were effects at the tenure level as well. At the tenure level, professors evaluated a very strong candidate, and almost everyone said this looked like a good case for tenure. But people were invited to express their reservations, and they came up with some very reasonable doubts. For example, "This person looks very strong, but before I agree to give her tenure I would need to know, was this her own work or the work of her adviser?" Now that's a perfectly reasonable question to ask. But what ought to give us pause is that those kinds of reservations were expressed four times more often when the name was female than when the name was male. 
So there's a pervasive difference in perceptions, and I think the difference matters. Scientists' perception of the quality of a candidate will influence the likelihood that the candidate will get a fellowship, a job, resources, or a promotion. A pattern of biased evaluation therefore will occur even in people who are absolutely committed to gender equity. I have little doubt that all my colleagues here at Harvard are committed to the principle that a male candidate and a female candidate of equal qualifications should have equal chance at a job. But we also think that when we compare a more productive scholar to a less productive one, a more experienced teacher to a less experienced one, a more independent investigator to a less independent one, those factors matter as well. These studies say that knowledge of a person's gender will influence our assessment of those factors, and that's going to produce a pattern of discrimination, even in people with the best intentions.

From the moment of birth to the moment of tenure, throughout this great developmental progression, there are unintentional but pervasive and important differences in the ways that males and females are perceived and evaluated. I have to emphasize that perceptions are not everything. When cases are unambiguous, you don't see these effects. What's more, cognitive development is robust: boys and girls show equal capacities and achievements in educational settings, including in science and mathematics, despite the very different ways in which boys and girls are perceived and evaluated. I think it's really great news that males and females develop along common paths and gain common sets of abilities. The equal performance of males and females, despite their unequal treatment, strongly suggests that mathematical and scientific reasoning has a biological foundation, and this foundation is shared by males and females.
Finally, you do not create someone who feels like a girl or boy simply by perceiving them as male or female. That's the lesson that comes from the studies of people of one sex who are raised as the opposite sex. Biological sex differences are real and important. Sex is not a cultural construction that's imposed on people.

But the question on the table is not, Are there biological sex differences? The question is, Why are there fewer women mathematicians and scientists? The patterns of bias that I described provide four interconnected answers to that question. First, and most obviously, biased perceptions produce discrimination: When a group of equally qualified men and women are evaluated for jobs, more of the men will get those jobs if they are perceived to be more qualified. Second, if people are rational, more men than women will put themselves forward into the academic competition, because men will see that they've got a better chance for success. Academic jobs will be more attractive to men because they face better odds, will get more resources, and so forth. Third, biased perceptions earlier in life may well deter some female students from even attempting a career in science or mathematics. If your parents feel that you don't have as much natural talent as someone else whose objective abilities are no better than yours, that may discourage you, as Eccles's work shows. Finally, there's likely to be a snowball effect. All of us have an easier time imagining ourselves in careers where there are other people like us. If the first three effects perpetuate a situation where there are few female scientists and mathematicians, young girls will be less likely to see math and science as a possible life.

So by my personal scorecard, these are the major factors. Let me end, though, by asking, could Steve also be partly right?
Could biological differences in motives -- motivational patterns that evolved in the Pleistocene but that apply to us today -- propel more men than women towards careers in mathematics and science? My feeling is that where we stand now, we cannot evaluate this claim. It may be true, but as long as the forces of discrimination and biased perceptions affect people so pervasively, we'll never know. I think the only way we can find out is to do one more experiment. We should allow all of the evidence that men and women have equal cognitive capacity to permeate through society. We should allow people to evaluate children in relation to their actual capacities, rather than one's sense of what their capacities ought to be, given their gender. Then we can see, as those boys and girls grow up, whether different inner voices pull them in different directions. I don't know what the findings of that experiment will be. But I do hope that some future generation of children gets to find out.

Steven Pinker & Elizabeth Spelke: Concluding Discussion

PINKER: Thanks, Liz, for a very stimulating and apposite presentation. A number of comments. I don't dispute a lot of the points you made, but many have lost sight of the datum that we're here to explain in the first place. Basic abilities like knowing that an object is still there when you put a hankie over it, or knowing that one object can't pass through another, are not the kinds of things that distinguish someone who's capable of being a professor of physics or math from someone who isn't. And in many of the cases in which you correctly said that there is no gender difference in kids, there is no gender difference in adults either -- such as the give-a-number task and other core abilities. Also, a big concern with all of the null effects that you mentioned is statistical power.
Bob Rosenthal 20 years ago pointed out that the vast majority of studies that psychologists do are incapable of detecting the kinds of results they seek, which is why it's so important to have meta-analyses and large sample sizes. I question whether all of the null results that you mentioned can really be justified, and whether they are comparable to the studies done on older kids and adults. One place where I really do disagree with you is in the value of the SAT-M, where the "circle" has amply been broken. This is what people at the College Board are obsessed with. What you are treating as the gold standard is performance in college courses. But the datum we are disputing is not how well boys and girls do in school, or how well men and women do in college, because there we agree there is no male advantage. The phenomenon we really are discussing is performance at the upper levels: getting a tenure-track job, getting patents, and so on. And here the analyses have shown that the SAT is not biased against girls. That is, a given increment in SAT score predicts a given increment in the variable of interest to the same extent whether you're male or female. I think there may be a slight difference in which finding each of us is alluding to in talking about differences in grades. I was not suggesting that girls' better grades come about because they take easier courses; they really do get better grades holding courses constant. Rather it's the slight underprediction of grades by the SAT that can be explained in part by class choice and in part by conscientiousness.

SPELKE: Well the most recent thing that I've read about this issue is the Gallagher and Kaufman book, Gender Differences in Mathematics, which just came out about a month ago.
They report that equating for classes and institutions, and looking just at A students, there's a 21 point SAT math differential; that is to say, for two students getting the same grade of A, the average for the girls on the SAT will have been 21 points lower. That differential is there at every grade level and in all the courses. The SAT people have discussed it as a problem. One of the discussions reached the conclusion that the SAT is still useful, because although it under-predicts girls' performance in college, girls' grades over-predict their performance in college, and if you use the two together you are okay. In fact, they advised that people never take account of the SAT simply by itself, but consider it in relation to grades. When you spoke earlier about the use of GREs in admitting people to grad school, that's in fact what graduate programs do: We consider both grades and GREs. Interestingly, though, in all of the public discussion of the relative advantages of men versus women for math and science, over the last two months, people have not used the SAT in conjunction with grades. When talking about relative ability, they've used the SAT by itself. I think that has led to a distorted conversation about this issue.

PINKER: It nonetheless remains true that in the most recent study by Lubinski and Benbow, which showed a fantastic degree of predictive power of the SAT given in 7th grade, there was no difference in predictive power in boys and girls in any of these measures. But let me return to the datum that is at issue here, namely the differential representation of the sexes in physical sciences, mechanical engineering, and mathematics. The fact that men and women are equal overall in spatial abilities, and overall in mathematical abilities, is irrelevant to this. It may be that the particular subtalents in which women excel make them more likely to go into accounting. But the datum we are discussing is not a gender difference in accounting.
The datum we are discussing is a gender difference in the physical sciences, engineering, and mathematics. And I suspect that when you look at a range of professions, the size of the sex discrepancy correlates with how much spatial manipulation (not just any kind of spatial cognition) and how much mathematical reasoning (not just any kind of mathematical ability) each of those jobs requires. What about parents' expectations? In the 1970s the model for development was, "as the twig is bent, so grows the branch" -- that subtle differences in parents' perceptions early in life can have a lasting effect. You nudge the child in a particular direction and you'll see an effect on his trajectory years later. But there is now an enormous amount of research spearheaded by the behavioral genetics revolution suggesting that that is not true. There may be effects of parental expectations and parental treatment on young children while they're still in the home, but most follow-up studies show that short of outright abuse and neglect, these effects peter out by late adolescence. And studies of adoption and of twins and other sibs reared apart suggest that any effects of the kinds of parenting that are specific to a child simply reflect the preexisting genetic traits of the child, and the additional effect of parenting peters out to nothing.

SPELKE: Can I respond to that? I think one thing is different about the gender case, compared to the early socialization effects for other kinds of categories, different styles of parenting, and so forth. The gender differences that we see reflected in parents' differing perceptions are mirrored by differing perceptions that males and females experience throughout their lives. It's not the case that idiosyncratic pairs of parents treat their kids one way, but then as soon as the children leave that environment, other people treat them differently.
Rather, what we have in the case of gender is a pervasive pattern that just keeps getting perpetuated in different people. I'm rather a nativist about cognition, and I am tempted to look at that pattern and wonder, did Darwin give us some innately wrong idea about the genders? Professionals in professional contexts show the same patterns of evaluation that parents show in home contexts, and children face those patterns of evaluation, not just when they're young and at home, but continuing through high school, college, and finally with their colleagues on academic faculties. We're dealing here with a much more pervasive effect than the effects of socialization in the other studies that you've written and talked about.

PINKER: Regarding bias: as I mentioned at the outset, I don't doubt that bias exists. But the idea that the bias started out from some arbitrary coin flip at the dawn of time and that gender differences have been perpetuated ever since by the existence of that bias is extremely unlikely. In so many cases, as Eagly and the Stereotype-Accuracy people point out, the biases are accurate. Also, there's an irony in these discussions of bias. When we test people in the cognitive psychology lab, and we don't call these base rates "gender," we applaud people when they apply them. If people apply the statistics of a group to an individual case, we call it rational Bayesian reasoning, and congratulate ourselves for getting them to overcome the cognitive illusion of base rate neglect. But when people do the same thing in the case of gender, we treat Bayesian reasoning as a cognitive flaw and base-rate neglect as rational! Now I agree that applying base rates for gender in evaluating individual men and women is a moral flaw; I don't think that base rates ought to be applied in judging individuals in most cases of public decision-making.
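For readers unfamiliar with the terms, the base-rate reasoning Pinker invokes is just Bayes' rule: a group statistic (the base rate) is combined with evidence about the individual to produce a judgment about that individual. A minimal sketch, with all numbers invented purely for illustration:

```python
# Bayes' rule combining a group base rate with individual evidence.
# Hypothetical setup: a trait occurs in 20% of group A and 10% of group B,
# and a test for the trait is 80% sensitive and 90% specific.

def posterior(base_rate, sensitivity, specificity):
    """P(trait | positive test) for a member of a group with this base rate."""
    p_pos_given_trait = sensitivity
    p_pos_given_no_trait = 1.0 - specificity
    # Total probability of a positive test in this group.
    p_pos = base_rate * p_pos_given_trait + (1.0 - base_rate) * p_pos_given_no_trait
    return base_rate * p_pos_given_trait / p_pos

# Identical individual evidence, different group base rates:
post_a = posterior(0.20, 0.80, 0.90)  # about 0.667
post_b = posterior(0.10, 0.80, 0.90)  # about 0.471
print(round(post_a, 3), round(post_b, 3))
```

Using the base rate (the "rational Bayesian" move) yields different judgments of two individuals with identical evidence; ignoring it (base-rate neglect) yields the same judgment for both. That divergence is exactly the tension Pinker is pointing at.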
But the fact that the statistics of a gender are applied does not mean that their origin was arbitrary; it could be statistically sound in some cases.

SPELKE: Let me reply to that, because I agree that the origin is not arbitrary, and that the bias is there for an objective reason, but I think you're drawing the wrong conclusion about it. I think the reason there's a bias to think that men have greater natural talent for math and science is that when we look around the world and ask, who's winning the Nobel Prizes and making the great advances in science, what we see, again and again, is men. Although Linda Buck received this year's Nobel Prize in physiology or medicine, for the most part it's overwhelmingly men who are reaching the upper levels of math and science. It's natural to look at that and think, there must be some reason, some inner difference between men and women, which produces this enormous disparity. And I quite agree with you that good statistical reasoning should lead you to think, the next student who comes along, if male, is more likely to join that group of Nobel Prize winners. What I would like to suggest is that we have good reasons to resist this kind of conclusion, and the reasons aren't only moral. Let me just use an analogy, and replay this debate over the biological bases of mathematics and science talent 150 years ago. Let's consider who the 19th century mathematicians and scientists were. They were overwhelmingly male, just as they are today, but also overwhelmingly European, not Asian. You won't see a Chinese face or an Indian face in 19th century science. It would have been tempting to apply this same pattern of statistical reasoning and say, there must be something about European genes that give rise to greater mathematical talent than Asian genes do.
If we go back still further, and play this debate in the Renaissance, I think we would be tempted to conclude that Catholic genes make for better science than Jewish genes, because all those Renaissance scientists were Catholic. If you look at those cases, you see what's wrong with this argument. What's wrong with the argument is not that biology is irrelevant. If Galileo had been switched at birth with some baby from the Pisan ghetto, the baby raised by Galileo's parents would not likely have ended up teaching us that the language of physics is mathematics. I think that Galileo's genes had something to do with his achievement, but so did Galileo's cultural and social environment: his nurturing. Genius requires huge amounts of both. If, in that baby switch, Galileo had found himself growing up in the Pisan ghetto, I bet he wouldn't have ended up being the example in this discussion today either. So yes, there are reasons for this statistical bias. But I think we want to step back and ask, why is it that almost all Nobel Prize winners are men today? The answer to that question may be the same reason why all the great scientists in Florence were Christian.

PINKER: I think you could take the same phenomenon and come to the opposite conclusion! Say there really were such a self-reinforcing, self-perpetuating dynamic: a difference originates for reasons that might be arbitrary; people perceive the difference; they perpetuate it by their expectations. Just as bad, you say, is the fact that people don't go into fields in which they don't find enough people like themselves. If so, the dynamic you would expect is that the representation of different genders or ethnic groups should migrate to the extremes. That is, there is a positive feedback loop where if you're in the minority, it will discourage people like you from entering the field, which will mean that there'll be even fewer people in the field, and so on.
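The positive-feedback dynamic Pinker describes can be sketched as a toy simulation. Everything here is an assumption made for illustration only -- the update rule, the 50% threshold, and the `pull` parameter are invented, not drawn from any model in the debate:

```python
# Toy model of a self-reinforcing representation dynamic: each "generation",
# a group's share of a field is pulled toward 0 or 1, depending on which side
# of the 50% threshold it currently sits (being the minority deters entry,
# being the majority encourages it).

def step(share, pull=0.5):
    """One generation of the hypothetical feedback loop."""
    target = 1.0 if share > 0.5 else 0.0
    return share + pull * (target - share)

def run(share, generations=20):
    """Iterate the dynamic from a starting share."""
    for _ in range(generations):
        share = step(share)
    return share

# Starting just above vs. just below the threshold drives the share
# toward opposite extremes, despite a nearly identical starting point.
print(round(run(0.51), 4))  # drifts toward 1.0
print(round(run(0.49), 4))  # drifts toward 0.0
```

Under these invented assumptions, the shares do migrate to the extremes, which is the prediction Pinker then contrasts with the historical "floodgates" cases where underrepresentation did not perpetuate itself.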
On either side of this threshold you should get a drift of the percentages in opposite directions. Now, there is an alternative model. At many points in history, arbitrary barriers against the entry of genders and races and ethnic groups to various professions were removed. And as soon as the barrier was removed, far from the statistical underrepresentation perpetuating or exaggerating itself, as you predict, the floodgates open, and the formerly underrepresented group reaches some natural level. It's the Jackie Robinson effect in baseball. In the case of gender and science, remember what our datum is. It's not that women are under-represented in professions in general or in the sciences in general: in many professions women are perfectly well represented, such as veterinary medicine, in which the majority of recent graduates are women by a long shot. If you go back fifty years or a hundred years, there would have been virtually no veterinarians who were women. That underrepresentation did not perpetuate itself via the positive feedback loop that you allude to.

SPELKE: I'm glad you brought up the case of the basketball and baseball players. I think it's interesting to ask, what distinguishes these cases, where you remove the overt discrimination and within a very short period of time the differential disappears, from other cases, where you remove the overt discrimination and the covert discrimination continues? In the athletic cases where discrimination disappears quickly, there are clear, objective measures of success. Whatever people think about the capacities of a black player, if he is hitting the ball out of the park, he is going to get credit for a home run. That is not the case in science. In science, the judgments are subjective, every step of the way. Who's really talented? Who deserves bigger lab space? Who should get the next fellowship? Who should get promoted to tenure? These decisions are not based on clear and objective criteria.
These are the cases where you see discrimination persisting. You see it in academia. You see it in Claudia Goldin's studies of orchestra auditions, which also involve subtle judgments: Who's the more emotive, sensitive player? If you know that the players are male or female, you're going to pick mostly men, but if the players are behind a screen, you'll start picking more women.

PINKER: But that makes the wrong prediction: the harder the science, the greater the participation of women! We find exactly the opposite: it's the most subjective fields within academia -- the social sciences, the humanities, the helping professions -- that have the greatest representation of women. This follows exactly from the choices that women express in what gives them satisfaction in life. But it goes in the opposite direction to the prediction you made about the role of objective criteria in bringing about gender equity. Surely it's physics, and not, say, sociology, that has the more objective criteria for success.

SPELKE: Let me just say one thing, because I didn't say much in the talk at all, about this issue of motives, and biological differences in motives. That's been a less controversial issue, but I think it's an important one, and most of your examples were concerned with it. I think it's a really interesting possibility that the forces that were active in our evolutionary past have led men and women to evolve somewhat differing concerns. But to jump from that possibility into the present, and draw conclusions about what people's motives will be for pursuing one or another career, is way too big a stretch. As we both agree, the kinds of careers people pursue now, the kinds of choices they make, are radically different from anything that anybody faced back in the Pleistocene. It is anything but clear how motives that evolved then translate into a modern context. Let me just give one example of this.
You've suggested, as a hypothesis, that because of sexual selection and also parental investment issues, men are selected to be more competitive, and women are selected to be more nurturant. Suppose that hypothesis is true. If we want to use it to make predictions about desires for careers in math and science, we're going to have to answer a question that I think is wide open right now. What makes for better motives in a scientist? What kind of motives are more likely to lead to good science: Competitive motives, like the motive J. D. Watson described in The Double Helix, to get the structure of DNA before Linus Pauling did? Or nurturant motives of the kind that Doug Melton has described recently to explain why he's going into stem cell research: to find a cure for juvenile diabetes, which his children suffer from? I think it's anything but clear how motives from our past translate into modern contexts. We would need to do the experiment, getting rid of discrimination and social pressures, in order to find out.

-----------------
Guardian, Saturday April 30, 2005, pp 20-22, profile of John Brockman
______________________________________________________________________

The son of a Boston wholesale flower seller, he adapted his father's business methods in his work as a pop publicist and management consultant. He went on to become a successful literary agent, specialising in top science writers and -- with an online 'intellectual salon' -- building a reputation as a tireless promoter of influential ideas. Interview by Andrew Brown
______________________________________________________________________

The Hustler

In 1968 John Brockman was promoting a film called Head, starring the Monkees. His idea of publicity was simply to have the whole town covered in posters showing a head, with no caption. Naturally, the chosen head was his.
Grotesquely solarised, with blue-grey lips and scarlet spectacles, fashionable, suggestive of intellectual power, impossible to decipher, there he stood against a thousand walls, looking down on the city of New York. The posters have long since faded, but Brockman's position remains the same, gazing inscrutably on anything interesting in Manhattan. Now he is one of the most successful literary agents in the world, but to his friends and clients he is much more: an impresario and promoter of scientific ideas who is changing the way that all educated people think about the world. Richard Dawkins, his friend and client, says, "his Edge web site has been well described as an online salon, for scientists and for other intellectuals who care about science. John Brockman may have the most enviable address book in the English-speaking world, and he uses it to promote science and scientific literature in a way that nobody else does." Anyone today who thinks that scientists are the unacknowledged legislators of the world has been influenced by Brockman's taste. As well as Dawkins, he represents Daniel Dennett, Jared Diamond, and Sir Martin Rees, along with three Nobel prize winners and almost all the other famous popular scientists. His old friend Stewart Brand, the publisher of the Whole Earth Catalog and later the promoter of the Clock of the Long Now, which is intended to run for 10,000 years, says: "It's so easy to think the guy's just a high-class pimp that it's quite easy to ignore the impact on the intellectual culture of the west that John has enabled by getting his artist and scientist friends out to the world. There is a whole cohort of intellectuals who are interacting with each other and would not [be able to] without John." Brockman himself says, "Confusion is good. Then try awkwardness. Then you fall back on contradiction. Those are my three friends." Fortunately, they are not his only friends.
When asked for photographs of himself as a young man, he sends one where he is standing with Bob Dylan and Andy Warhol on the day Dylan visited Warhol's Factory. In the course of a couple of hours' conversation, he brings up encounters with (amongst others) John Cage; Robert Rauschenberg; Sam Shepard; Larry Page and Sergey Brin, the founders of Google, with whom he had just had lunch along with his client Craig Venter, the genome researcher; "Rupert" (Murdoch); Stewart Brand; Elaine Pagels, an influential historian of religion; Hunter S Thompson; Richard Dawkins; Daniel Dennett; Nicholas Humphrey, the psychologist; Murray Gell-Mann, the Nobel-winning physicist; the actor Dennis Hopper; and Steve Case of AOL. He even mentions Huey P Newton, the Black Panther. "Sometime around 1987 or '88, I get a call from Huey, who was a close friend of mine, who I was trying to avoid, because it had been revealed that he was actually gratuitously murdering people . . . you know, shooting them. He was flipping out. He wasn't talking about revolution or anything. Newton's message said: 'Me and my buddy Bob Trivers -- we're going to write a book on deceit and self-deception.'" Robert Trivers was one of the most important evolutionary biologists of the past 50 years, and came up with the hugely influential idea of "reciprocal altruism" as a graduate student at Harvard in the early 70s before his career was interrupted by psychological problems and he went off to live in the Jamaican jungle for some years. (He is now back at Harvard, in a chair funded by a friend of Brockman's.) Brockman continues: "Soon after that, he [Newton] died a very nasty death: just a crummy sidewalk dope deal. This was no way for a real revolutionary . . . "A couple of years ago, I made a rare visit to LA and was doing my favourite thing: watching the movie stars round the pool, and I got a message: Bob Trivers called. 'John. It's Bob Trivers. As I was saying. I've got the proposal ready.
It's for a book on deceit and self-deception.'" Such a book will be ideal Brockman fodder. It takes science out to the edges of society yet deals with subjects of eternal importance. It captures a theory at the stage when it is most vigorously fighting for its life. It is written by the man who made the discovery, which is an important point. Though Brockman has made some journalists a lot of money, his truly unique selling point is that he has made real scientists far more. In 1999, for example, at the height of the pop science boom, he sold the world rights to a book by the theoretical physicist Brian Greene for $2m. Some of his books have proved initially trickier. Gell-Mann had to return an advance of $500,000 for a book, The Quark and the Jaguar, delivered late, that Bantam rejected. Brockman subsequently sold it to WH Freeman for a reported $50,000. Many would agree that at least half his clients are truly remarkable thinkers, but there is room for disagreement about which half. For instance, he represents Sir John Maddox, the former editor of Nature, but also Rupert Sheldrake, whose heretical ideas about biology were denounced by Maddox in a Nature editorial that suggested Sheldrake's book A New Science of Life be burnt. Brockman has sold most of Richard Dawkins' books, but also The Bible Code by Michael Drosnin, which claimed that everything significant in the world up to the death of Princess Diana could have been predicted by reading every seventh letter in the Hebrew Bible, and the novel The Diary of a Manhattan Call Girl by Tracy Quan, which was the first account of a prostitute's life to be serialised on the Internet. "He likes proposals to be about two pages long, no more, and then he likes to get an auction going," one of his authors says. "You'll get a call from him, and he's walking down Fifth Avenue on his cellphone, saying that he's got Simon & Schuster to bid 100,000 and now he will see what happens.
A quarter of an hour later, Bantam has bid 125,000 and then he says he'll go back to S&S and see if he can get 150,000. But he's got an attention span of about half an hour. If the book isn't sold within a week, forget it." Tom Standage, the technology editor of the Economist, had his first book sold by Brockman on the basis of an outline one paragraph long. He sent it off in a speculative spirit and the next thing he heard was the rustle of a contract crawling towards him from the fax machine. Standage says: "He feels he's failed if a book earns out its advance and pays royalties because that means he hasn't got as much from the publishers as he could have done." This is how the young Brockman learned from his father, a broker in the wholesale flower market in Boston, to hustle sales. "He dominated the carnation industry. He would go to the Boston flower market, which was owned by the growers, who formed a cooperative. All these Swedes and Norwegians would be growing gladiolas and carnations and they'd bring them in at three in the morning and leave them like a long aisle. There'd be thousands of flowers, and you had to sell them, or they died. He said to me 'you gotta move them, they're going to die'. And one day, 40 years later, I'm on the phone, and I had a chilling feeling as I felt my father's voice coming through me, like, 'they're going to die'. So, why am I always so fixated on closing the deal, getting the next book in? It comes from that experience. That was a pure market situation. So, that's the way I run my business. It's not literary. It's not publishing. It's business. I have got properties to sell, on behalf of my clients. "My job is to do the best I can for them and I do it by making a market. The market decides. But knowing how to make a market involves . . . some capacities." The capacities are at the heart of his business, but it's hard to describe them. 
He has a keen sense for interesting ideas, but also for the ways in which they fit into society. For instance, he would never call himself an atheist, he says, in America: "I mean I don't believe: I'm sure there's no God. I'm sure there's no afterlife. But don't call me an atheist. It's like a losers' club. When I hear the word atheist, I think of some crummy motel where they're having a function and these people have nowhere else to go. That's what it means in America. In the UK it's very different." The Brockmans were the children of immigrants -- John's father's family had come from Austria -- and grew up in a largely poor and Catholic neighbourhood of Boston; he remains extremely sensitive to anti-Semitism. "There were no books in our house. My father could barely read. He was a brilliant man but he was on the streets working at eight years old. My mother read a little bit, but, you know, it was a little encyclopedia. "My parents were poor. My father started a business the day I was born which became a successful business. But we grew up in a tough neighbourhood called Dorchester, which was an Irish-Catholic bastion, where this radical right-wing priest went up and down the streets telling people to kill Jews. So that's how my brother and I grew up." He has one brother, a retired physicist, who is three years older. "We quickly found out, going to school, that . . . we were personally responsible for the death of Jesus Christ. We had a lot of fighting to do, and most of it on the losing end, because there were always 30 of them to two of us. My brother got the worst of it. My mother was a tough cookie. She would kick him out of the house if he didn't fight hard enough. Luckily in those days you didn't get killed; you just got a bloody nose. But it was tough."
______________________________________________________________________

"Confusion is good. Then try awkwardness. Then you fall back on contradiction. Those are my three friends."
______________________________________________________________________

This mixture of pugnacity and sensitivity about ethnicity can still surface. When he was upset by a profile in the Sunday Times magazine, which he thought played to an anti-Semitic stereotype, he complained straight to Rupert Murdoch (using Murdoch's banker, another of his contacts, as an intermediary). Brockman was a poor student in high school and was turned down by 17 colleges before studying business, finishing up with an MBA from Columbia University in New York. He worked selling tax shelters for a while, but in the evenings he was hanging out with all the artists he could find. He stacked chairs in the theatre with the young Sam Shepard; he went to dinner parties with John Cage; he started to put on film festivals and then multi-media extravagances at about the same time as Ken Kesey's Merry Pranksters in San Francisco and Andy Warhol in New York. This early attraction to the art world seems to have set his style. The art that he was involved with qualified as art simply because everyone involved decided it was. In this flux, it seemed the only certainty was scientific truth, and he was attracted early to the idea of science, and to computing as a metaphor for everything. Stewart Brand first met him in the early 60s: "I was in the army as an officer and spending the weekends in New York -- he was in the thick of the multimedia scene that was the cutting edge of performance pop art. He was an impresario, who could help organise events and people and media and be essential to the process, but unlike a lot of people he was actually alert to what the art was about, just as later, as an agent, he was alert to what the books were about. So far as I was concerned he was another artist in the group of artists I was running with."

Life at a glance

John Brockman

Born: February 16 1941, Boston, Massachusetts.
Educated: Babson Institute of Business Administration; Columbia University, New York.
Employment: 1965-69 Multimedia artist; '74-present, literary agent; founder Brockman, Inc; chairman Content.com.
Married: Katinka Matson (one son, Max, 1981).
Some books: 1969 By the Late John Brockman; '88 Doing Science: The Reality Club; '95 The Third Culture; '96 Digerati: Encounters with the Cyber Elite; '03 The Next Fifty Years: Science in the First Half of the Twenty-First Century; '04 Science at the Edge.

In 1967 Brockman discovered how to sell flower power while it was still fresh. A business school friend who had gone to work for a paper company asked Brockman to help motivate the sales force for their line of sanitary towels. This was at a time when the New York Times was solemnly explaining that "Total environment" discothèques, such as Cheetah and The Electric Circus in New York, were turning on their patrons with high-decibel rock'n'roll combined with pulsing lights, flashing slide images, and electronic "colour mists". Brockman asked -- and got -- a fee of $15,000 despite having no consulting experience. He put on a multimedia show for the salesmen: they lay on the floor of a shiny vinyl wigwam while four sound systems played them Beatles songs, bird calls, company advertising slogans with an executive shouting about market statistics and competitive products, and a film showed a young woman wearing a dress made of the company's paper which she ripped down to her navel. In the 60s it was cutting-edge art, an "intermedia kinetic experience", and the salesmen exposed to it reportedly sold an additional 17% of feminine hygiene products in the next quarter. Brockman took the show around nine cities for the company, energising its sales force nationwide, and was established as a consultant who could sell his services to anyone. But it was not enough. His book By the Late John Brockman was unfavourably reviewed, but he was not discouraged and continued to write and edit books -- 18 at last count.
One, Einstein, Gertrude Stein, Wittgenstein and Frankenstein, had to be hurriedly withdrawn after portions were found to have been plagiarised from an article by James Gleick, the author of Chaos, one of the first big pop science hits and not a Brockman client. Brockman blamed one of his assistants. Brockman's later books have mostly been collections of interviews with friends and clients, salted and sometimes vinegared as well with their opinions of each other. He has made a Christmas tradition of asking questions of 100 or so people and circulating their responses. "What do you believe to be true but cannot prove?" was the most recent one, in 2004, and is a fine example of Brockman's method as an editor or curator of thought. The question was supplied by Nicholas Humphrey, but it was Brockman who spotted its potential, and then knew 120 interesting people who were prepared to answer it. Humphrey's own answer is characteristically thought-provoking: "I believe that human consciousness is a conjuring trick, designed to fool us into thinking we are in the presence of an inexplicable mystery . . . so as to increase the value we each place on our own and others' lives." Philip Anderson, the Nobel Prize-winning physicist, believes that string theory is a waste of time. Randolph Nesse, an evolutionary biologist, believes, but cannot prove, that believing things without proof is evolutionarily advantageous; Ian McEwan that no part of his consciousness will survive death. Brockman has constantly reinvented himself. He has been at the leading edge of intellectual fashion for the past 30 years. In the late 90s, just before the dot.com bubble popped, he told an interviewer from Wired magazine that he wanted to be "post-interesting".
Looking back on all the ideas he has enthused about, you glimpse a mind that rushes around like a border collie -- tirelessly and gracefully pursuing anything that moves, but absolutely uninterested in things that stay still, and liable, if shut up in a car, to get bored and eat all the upholstery. Like a lot of successful salesmen, part of his secret is that he is interested in people for their own sake as well as for what they can do for him, and can study them with extraordinary concentration, solemnly placing out, beside the journalist's machine, two tape recorders of his own at the beginning of an interview. To be under his attentive, almost affectionate gaze is to know how a sheep feels in front of a collie. Twice in the course of a couple of hours' chat he says "you ought to write a book about that". He became a book agent by accident. He was talking about God to the scientist John Lilly, a friend of Brand's, whose research into dolphins and LSD was one of the first tendrils of a scientific study of consciousness, and he realised Lilly had a book there. He sold the proposal and found a new business where his talents and his interests coincided. He has been in the vanguard of the trend towards larger advances at the expense of royalties, and a model of rewards in which a few superstars make gigantic sums and almost everyone else makes next to nothing. His first enormous commercial success came in the early 80s, as personal computers started to appear. He understood that software manuals would need publishing just as normal books do. In the end, the idea of software publishers didn't work out, but not before Brockman had made a fortune from the idea. He started an annual dinner for the other players in the business, called the millionaires' dinner. Later, when this seemed unimpressive, he renamed it the billionaires' dinner; then the scientists' dinner -- whatever worked to bring lively people round him.
He works with and is married to Katinka Matson, the daughter of a New York literary agent who was AD Peters's partner in the 50s. "She actually makes the wheels turn in the office," says Tom Standage. The Brockmans have one son, Max, who works in the family business as a third-generation agent, and who was blessed in his crib by a drunken dance performed round it by Hunter S Thompson, Dennis Hopper and Gerd Stern, a multi-media artist from the avant-garde scene. After the first boom in personal computers and their software blew out, Brockman was perfectly placed for the next boom, in writing about the people who made it. The house magazine of that boom was Wired, which sold itself to Condé Nast as "The magazine which branded the digital age"; it is almost an obligation on the editor of Wired to be a Brockman client. He set out his manifesto in the early 90s for what he called the Third Culture: "Traditional American intellectuals are, in a sense, increasingly reactionary, and quite often proudly (and perversely) ignorant of many of the truly significant intellectual accomplishments of our time. Their culture, which dismisses science, is often non-empirical. By contrast, the Third Culture consists of those scientists and other thinkers in the empirical world who, through their work and expository writing, are taking the place of the traditional intellectual in rendering visible the deeper meanings of our lives, redefining who and what we are." Everything was speeding up, too. Brockman had always been quick to close a deal. Now he demanded pinball-fast reactions from the editors he sold to. One trick was to watch the front page of the New York Times and get a quick book proposal out of every science story that appeared there. This meant that if you were a Brockman client on the staff of the New York Times, a front page splash was not just professionally gratifying, but also a potential route to a large cheque. There was a danger that this constituted a temptation to hype.
When one New York Times journalist, Gina Kolata, followed a "cure for cancer" splash with a book contract the next day, there was an outcry and the book was eventually cancelled. For Brockman, America now is the intellectual seedbed for Europe and Asia. He wrote: "The emergence of the Third Culture introduces new modes of intellectual discourse and reaffirms the pre-eminence of America in the realm of important ideas. Throughout history, intellectual life has been marked by the fact that only a small number of people have done the serious thinking for everybody else. What we are witnessing is a passing of the torch from one group of thinkers, the traditional literary intellectuals, to a new group, the intellectuals of the emerging Third Culture. Intellectuals are not just people who know things but people who shape the thoughts of their generation. An intellectual is a synthesiser, a publicist, a communicator." Brockman, it hardly needs saying, is the true intellectual's agent. Of course, this was angrily resented by those outside the magic circle, especially if they were themselves intellectuals in every respect save being represented by him. But any anger or ridicule stays off the record. Who knows when they will need to deal with him? Who knows when he could bless them with a million dollars, and whisk them into the magic circle? Yet it is a tribute to Brockman's personality that people who have known him a long time like him a great deal. Stewart Brand says: "The salon-keeper has an interesting balancing act between highlighting the people they're attracted to and also having a strong enough personality so that they are taken seriously as a peer. People do not feel threatened by him or competitive with him. They either admire him or profess to be amused by him. But you look behind that, and you realise that they don't look down on him at all." The magic circle has gone by different names and operated with different degrees of formality.
In the 90s it was manifested in a physical gathering, run with Heinz Pagels, called the Reality Club. The elite would come together and talk about the work that interested them. They didn't have to be his clients, and many of them weren't. But all invested their time in ideas he was promoting. Pagels died in an accident and Brockman says he didn't have the heart to go on by himself. Instead, he set up his Edge website, where he puts up new interviews every month, which can be read as transcripts or watched as videos, with commentaries. It all reinforces his idea that reality is essentially social. Even the name, the Reality Club, goes right back to his earliest big idea: that reality is what the smart people, who should be friends of John Brockman, decide to make of the world: "It's an argument that I have with all my scientist friends, and I lose it every time. They don't buy it at all. It's very primitivistic, I'm told, or even solipsism, but it works for me."
_________________________________________________________________

Linda S. Gottfredson responds to Simon Baron-Cohen
_________________________________________________________________

[132]LINDA S. GOTTFREDSON
Sociologist, School of Education, University of Delaware

Simon Baron-Cohen's work joins the search for causal mechanisms linking genes, brain, and behavior. The patterned variation by sex at all three levels of analysis provides clues to what those mechanisms might be (e.g., testosterone). Baron-Cohen employs those patterns to better understand, in particular, the etiology of autism and its much higher prevalence among males. Variation is the raw material for much scientific analysis and for evolution itself, but public discussion of human variation seems mostly off-limits today. We are called upon to celebrate diversity but not notice difference; to observe a new etiquette that forbids utterance of supposedly tactless knowledge. Good feeling compels public ignorance.
But bewildered or bemused, outraged or apprehensive, most scientists soldier on. Baron-Cohen continues to investigate the nature of sex differences. His research on babies only 24 hours old, while needing replication, fits the larger pattern of sex differences in interests, personality, and abilities across the lifespan. For instance, at all ages and worldwide, females tend to be more interested in people and males in inanimate objects. As noted in earlier commentaries, humans are not the only primates showing this pattern. It would have been an exception to the rule had Baron-Cohen's team not found boys gazing more at the mechanical object and girls more at the human face. Like the habituation research now used to assay differences in cognitive ability among infants, his results provide prima facie evidence that socialization cannot be the sole cause of variation in social behavior. The interesting question is not whether meaningful innate sex differences exist, but how anyone could construe the preponderance of evidence otherwise. Baron-Cohen argues that the distinction between "systematizers" (disproportionately male) and "empathizers" (disproportionately female) is especially important in the etiology of autism. He theorizes that genetic risk of autism rises when both parents are systematizers. While he is onto something important, his work might advance faster and persuade more effectively if, rather than proposing a new distinction, it exploited existing evidence on the dimensionality and relatedness of human psychological traits (all of them heritable), particularly interests ("Holland's hexagon" of six modal types), personality (the "big five"--or three or seven), and abilities (the "3-stratum hierarchical model"). Researchers in vocational interest measurement, personality assessment, personnel testing, and differential psychology have spent a century parsing, cataloguing, and correlating these differences among individuals.
They find a regular pattern of sex differences regardless of age, time, or place. It is not clear where Baron-Cohen's systematizer-empathizer distinction fits in this much-explored territory, but it would seem to map best onto dimensions in the non-cognitive realm: sympathetic vs. cold ("agreeableness" personality dimension), "realistic" vs. "social" vocational interests, or valuing "ideas" vs. "feeling." Prevalence of autism has increased so much recently that some label it epidemic. Baron-Cohen must explain how this increase is consistent with evidence that autism has strong genetic roots. The two facts are not inconsistent, but many people assume they are. They fallaciously reason that if prevalence jumps within only decades (e.g., more violent crime, more women getting BAs in math), then the behavior in question must not be genetically influenced because the gene pool could not have changed during that time. But it need not have. First, non-random mating can change the distribution of phenotypes in the next generation of the same gene pool. Baron-Cohen's theory would predict that more assortative mating for "systematizing" will lead to more autism in the offspring generation. It would operate like inbreeding, which increases the odds of offspring inheriting the same deleterious recessive allele from both parents. This explanation doesn't work well, however, for rates of socially important behaviors that fluctuate within a generation (e.g., criminal behavior, which has a heritable component), and perhaps not even for autism. Second, environmental change matters, but not equally for all genotypes. When environments become more deleterious, the more susceptible genotypes are the first casualties. Environmental toxins may be one such factor in autism. Conversely, as environments become more favorable, some genotypes are better able to exploit the new opportunities. 
So, as barriers to women in education and work have fallen, the most talented and ambitious women have been the best placed to advance. In neither case has the gene pool changed--only the environments favoring some genotypes over others. Third, different genotypes seek and evoke different experiences and environments. Outgoing, agreeable, feelings-oriented personalities prefer (and are preferred for) dealing with people; non-social, pragmatic, things-oriented personalities find a better fit working with mechanical objects and processes. When free to choose, the two types will gravitate toward different careers. They will also create different personal environments for themselves and their children. So, just as low-IQ parents don't create the most propitious environments for their genetically at-risk children, perhaps two systematizers provide non-optimal rearing for theirs too.
_________________________________________________________________

EDGE BOOKS
_________________________________________________________________

Curious Minds: How a Child Becomes a Scientist (Pantheon)

All new essays by 27 leading Edge contributors... "Good, narrative history, combined with much fine writing...quirky, absorbing and persuasive in just the way that good stories are."--Nature "Some of the biggest brains in the world turn their lenses on their own lives...fascinating...an invigorating debate."--Washington Post "Compelling."--Discover "An engrossing treat of a book...crammed with hugely enjoyable anecdotes...you'll have a wonderful time reading these reminiscences."--New Scientist "An intriguing collection of essays detailing the childhood experiences of prominent scientists and the life events that sparked their hunger for knowledge.
Full of comical and thought-provoking stories."--Globe & Mail "An inspiring collection of 27 essays by leading scientists about the childhood moments that set them on their shining paths."--Psychology Today

Published in the UK as When We Were Kids: How a Child Becomes a Scientist (Jonathan Cape)
_________________________________________________________________

The New Humanists: Science at the Edge (Barnes & Noble)

The best of Edge, now available in a book... "Provocative and fascinating."--La Stampa "A stellar cast of thinkers tackles the really big questions facing scientists."--The Guardian "A compact, if bumpy, tour through the minds of some of the world's preeminent players in science and technology."--Philadelphia Inquirer "What a show they put on!"--San Jose Mercury News "a very important contribution, sparkling and polychromatic."--Corriere della Sera
_________________________________________________________________

The Next Fifty Years: Science in the First Half of the Twenty-first Century (Vintage)

Original essays by 25 of the world's leading scientists... "Entertaining"--New Scientist "Provocative"--Daily Telegraph "Inspired"--Wired "Mind-stretching"--Times Higher Education Supplement "Fascinating"--Dallas Morning News "Dazzling"--Washington Post Book World
_________________________________________________________________

[147]John Brockman, Editor and Publisher
[148]Russell Weinberger, Associate Publisher
[149]contact: editor at edge.org

References

91. http://www.edge.org/documents/archive/images/pinker.slides/pages/pinker_Page_54.htm
92. http://www.edge.org/documents/archive/images/pinker.slides/pages/pinker_Page_55.htm
93. http://www.edge.org/documents/archive/images/pinker.slides/pages/pinker_Page_56.htm
94. http://www.edge.org/documents/archive/images/pinker.slides/pages/pinker_Page_57.htm
95. http://www.edge.org/documents/archive/images/pinker.slides/pages/pinker_Page_58.htm
96. http://www.edge.org/documents/archive/images/spelke.slides/pages/spelke_Page_01.htm
97. http://www.edge.org/documents/archive/images/spelke.slides/pages/spelke_Page_02.htm
98. http://www.edge.org/documents/archive/images/spelke.slides/pages/spelke_Page_03.htm
99. http://www.edge.org/documents/archive/images/spelke.slides/pages/spelke_Page_04.htm
100. http://www.edge.org/documents/archive/images/spelke.slides/pages/spelke_Page_05.htm
101. http://www.edge.org/documents/archive/images/spelke.slides/pages/spelke_Page_06.htm
102. http://www.edge.org/documents/archive/images/spelke.slides/pages/spelke_Page_07.htm
103. http://www.edge.org/documents/archive/images/spelke.slides/pages/spelke_Page_08.htm
104. http://www.edge.org/documents/archive/images/spelke.slides/pages/spelke_Page_09.htm
105. http://www.edge.org/documents/archive/images/spelke.slides/pages/spelke_Page_10.htm
106. http://www.edge.org/documents/archive/images/spelke.slides/pages/spelke_Page_11.htm
107. http://www.edge.org/documents/archive/images/spelke.slides/pages/spelke_Page_12.htm
108. http://www.edge.org/documents/archive/images/spelke.slides/pages/spelke_Page_13.htm
109. http://www.edge.org/documents/archive/images/spelke.slides/pages/spelke_Page_14.htm
110. http://www.edge.org/documents/archive/images/spelke.slides/pages/spelke_Page_15.htm
111. http://www.edge.org/documents/archive/images/spelke.slides/pages/spelke_Page_16.htm
112. http://www.edge.org/documents/archive/images/spelke.slides/pages/spelke_Page_17.htm
113. http://www.edge.org/documents/archive/images/spelke.slides/pages/spelke_Page_18.htm
114. http://www.edge.org/documents/archive/images/spelke.slides/pages/spelke_Page_19.htm
115. http://www.edge.org/documents/archive/images/spelke.slides/pages/spelke_Page_20.htm
116. http://www.edge.org/documents/archive/images/spelke.slides/pages/spelke_Page_22.htm
117. http://www.edge.org/documents/archive/images/spelke.slides/pages/spelke_Page_24.htm
118. http://www.edge.org/documents/archive/images/spelke.slides/pages/spelke_Page_25.htm
119. http://www.edge.org/documents/archive/images/spelke.slides/pages/spelke_Page_26.htm
120. http://www.edge.org/documents/archive/images/spelke.slides/pages/spelke_Page_27.htm
121. http://www.edge.org/documents/archive/images/spelke.slides/pages/spelke_Page_28.htm
122. http://www.edge.org/documents/archive/images/spelke.slides/pages/spelke_Page_29.htm
123. http://www.edge.org/documents/archive/images/spelke.slides/pages/spelke_Page_30.htm
124. http://www.edge.org/documents/archive/gdn1.pdf
125. http://www.edge.org/documents/archive/gdn2.pdf
126. mailto:eamonn.mccabe at btinternet.com
127. mailto:refuseinc at earthlink.net
128. mailto:tobiaseverke at mac.com
129. http://www.everke.com/
130. http://books.guardian.co.uk/review/story/0,12084,1472567,00.html
131. http://books.guardian.co.uk/review/story/0,12084,1472567,00.html
132. http://www.edge.org/3rd_culture/bios/gottfredson.html
133. http://www.edge.org/books/curious_index.html
134. http://www.edge.org/books/curious_index.html
135. http://www.amazon.com/exec/obidos/tg/detail/-/0375422919/qid=1111624348/sr=1-1/ref=sr_1_1/002-4630037-1607214?v=glance&s=books
136. http://search.barnesandnoble.com/booksearch/isbnInquiry.asp?userid=pi62n5J0GE&isbn=0375422919&itm=3
137. http://www.amazon.co.uk/exec/obidos/ASIN/0224072943/qid=1111624532/sr=1-6/ref=sr_1_10_6/202-3872213-3079839
138. http://www.edge.org/books/new_humanists.html
139. http://www.edge.org/books/new_humanists.html
140. http://search.barnesandnoble.com/booksearch/isbninquiry.asp?isbn=0760745293
141. http://www.amazon.co.uk/exec/obidos/ASIN/0297607758/qid=1111624466/sr=1-3/ref=sr_1_10_3/202-3872213-3079839
142. http://www.edge.org/books/next50_index.html
143. http://www.edge.org/books/next50_index.html
144. http://www.amazon.com/exec/obidos/tg/detail/-/0375713425/qid=1111627778/sr=1-1/ref=sr_1_1/002-4630037-1607214?v=glance&s=books
145.
http://search.barnesandnoble.com/booksearch/isbnInquiry.asp?userid=pi62n5J0GE&isbn=0375713425&itm=8 146. http://www.amazon.co.uk/exec/obidos/ASIN/0753817101/ref=pd_null_recs_b/202-3872213-3079839 147. http://www.edge.org/3rd_culture/bios/brockman.html 148. http://www.edge.org/3rd_culture/bios/weinberger.html 149. mailto:editor at edge.org From checker at panix.com Thu May 12 00:41:09 2005 From: checker at panix.com (Premise Checker) Date: Wed, 11 May 2005 20:41:09 -0400 (EDT) Subject: [Paleopsych] Legend Games: Racial Differences In Intelligence Reviewed In Academic Journal Message-ID: Racial Differences In Intelligence Reviewed In Academic Journal Blogger News Network http://www.legendgames.net/showstory.asp?page=blognews/stories/ST0000036.txt 5.5.9 Established February 13, 2005 Editorial by John Ray Academic news report below from [45]Chris Brand: "Licensed scholarly warfare broke out in the APA journal [46]Psychology, Public Policy and Law (vi 05), (full articles [47]here) over whether psychological race differences (especially in intelligence) are substantially heritable. Race realists Arthur Jensen, Phil Rushton and Linda Gottfredson fought their corner excellently, conveniently reviewing the literature (cf. the Cambridge debate ([48]i 97) and adding some useful points (e.g. that skin lightness is linked to IQ among South Africa's Blacks - which it is not among America's Blacks, where lighter skins often came about from historical Black female matings with White farmhands during the days of slavery). The hereditarians were opposed by such as Robert Sternberg (sometimes said by me to believe in 666 types of intelligence - Behav.Res.&Therapy, 1992) and Richard E. Nisbett (reviewed by me in Heredity, 2003) who claimed among other things medium-term boosts on non-IQ tests from Headstart-type programmes involving 8 hours intervention daily! 
Overall, the race realists had a coherent message with copious evidence and the environmentalists had the scraps - they remained respectively the `stompers' and `stompees' as amusingly caricatured by Professor Earl `Buzz' Hunt (Brand, Person.&Indiv.Diffs, 1999). Occidental Quarterly reckoned the 60-page Rushton & Jensen article might prove as much of a landmark as Jensen's classic 1969 Harvard Educational Review article, `How much can we boost IQ and educational attainment.' The race-realist argument was summarized in [49]News-Medical Net, 26 iv." The articles Chris summarizes above are not notable for being published in an academic journal. Almost all of the research in this field has been first published in academic journals. It is, however, of some note to find such articles in a journal of the American Psychological Association. The APA is distinctly Left-leaning, and its journals are the most prestigious and authoritative ones in psychology. [50]Logical Meme has a bit more on the Rushton & Jensen paper. He also has a short roundup of the [51]medical research that has found race differences. There is another academic report [52]here (PDF) of similar interest. One of the authors in this case is a Nobel prizewinner! Working from a very comprehensive body of data, the authors found that average racial differences in adult income are almost entirely predictable from childhood differences in IQ. They also found that, from early childhood on, black males have other disadvantages (less self-control, less tendency to plan ahead, etc.) and that those differences also influence income in later life. The third finding is that although blacks and Hispanics share similar environmental disadvantages in childhood, Hispanics tend to rise above that whereas blacks do not -- showing again that the source of black disadvantage is not environmental. There are fuller discussions of the report concerned [53]here and [54]here. John Ray blogs at [55]Dissecting Leftism. References 45.
http://www.crispian.demon.co.uk/indexlatest.htm 46. http://content.apa.org/journals/law 47. http://taxa.epi.umn.edu/~mbmiller/journals/pppl/200504/content.apa.org/journals/law/11/2.html 48. http://www.lrainc.com/swtaboo/late/cb_camb.html 49. http://www.news-medical.net/?id=9530 50. http://www.featuringdave.com/logicalmeme/2005/04/30-years-of-research-on-race-cognitive.html 51. http://www.featuringdave.com/logicalmeme/2005/04/racist-science-roundup.html 52. http://www.ifau.se/swe/pdf2005/wp05-03.pdf 53. http://www.marginalrevolution.com/marginalrevolution/2005/04/politically_inc.html 54. http://www.featuringdave.com/logicalmeme/2005/04/bell-curve-latest-addition-to-meta.html 55. http://dissectleft.blogspot.com/ From checker at panix.com Thu May 12 00:41:27 2005 From: checker at panix.com (Premise Checker) Date: Wed, 11 May 2005 20:41:27 -0400 (EDT) Subject: [Paleopsych] Featuring Dave: The Bell Curve - Latest Addition to the Meta-Analysis Message-ID: The Bell Curve - Latest Addition to the Meta-Analysis http://www.featuringdave.com/logicalmeme/2005/04/bell-curve-latest-addition-to-meta.html Tuesday, April 26, 2005 [4]Marginal Revolution reports on some politically incorrect research findings in this month's Journal of Law and Economics. Nobel Prize winner James Heckman and colleagues have an essay, "Labor Market Discrimination and Racial Differences in Premarket Factors". It has previously been established that most of the black-white wage differential can be explained by IQ scores, but the role of socio-economic factors/status (SES) in determining why one's IQ is X is what is endlessly debated. Liberals want to believe that income disparities between groups are entirely a function of the environment, i.e., SES.
The only problem is that virtually every major study that has looked at the role of IQ, while statistically accounting for SES, determines that there is a significant variable that can only be inferred as a genetic component. As a science, genetics is every day discovering new correlations between health condition/ailment/characteristic X and some genetic factor. Still taboo, however, is the very possibility that cognitive ability might have a substantial correlation with genetic factors. Heckman et al.'s study controls for schooling and, as Marginal Revolution notes, "[t]he results are not encouraging. After throwing all kinds of factors into the analysis they are able to increase the unexplained wage gap somewhat but no matter how far back they go they still find big ability differences, even in children as young as 1-2 years of age." Here is Heckman et al. writing about another interesting finding from the study: Minority deficits in cognitive and noncognitive skills emerge early and then widen. Unequal schooling, neighborhoods, and peers may account for this differential growth in skills, but the main story in the data is not about growth rates but rather about the size of early deficits. Hispanic children start with cognitive and noncognitive deficits similar to those of black children. They also grow up in similarly disadvantaged environments and are likely to attend schools of similar quality. Hispanics complete much less schooling than blacks. Nevertheless, the ability growth by years of schooling is much higher for Hispanics than for blacks. By the time they reach adulthood, Hispanics have significantly higher test scores than do blacks. Conditional on test scores, there is no evidence of an important Hispanic-white wage gap.
Our analysis of the Hispanic data illuminates the traditional study of black-white differences and casts doubt on many conventional explanations of these differences since they do not apply to Hispanics, who also suffer from many of the same disadvantages. The failure of the Hispanic-white gap to widen with schooling or age casts doubt on the claim that poor schools and bad neighborhoods are the reasons for the slow growth rate of black test scores. Though the authors apparently don't admit as much, the entity of "g" (genetic predisposition towards cognitive ability, general intelligence, or IQ) is begging to be recognized here. Such is but the latest indirect confirmation of a wide, accumulating body of evidence pointing towards what [5]The Bell Curve, in its meta-analysis, found: on average, approximately 50% of variability in intelligence is due to genetic inheritance, and there is a consistent and statistically significant difference worldwide between blacks, whites, and Asians. (Say what? The horror! Racist! Nazi pig!) Anthropologically, the varying ethnicities in the world can be categorized, by general genotype, into 3 characteristic groups: mongoloids, caucasoids, and negroids. Hundreds and hundreds of studies from all around the world, in a myriad of different places and contexts, indicate the consistent empirical fact that, ceteris paribus, mongoloids rank first in IQ ability, followed by caucasoids, with negroids being a distant third. This does not mean that all persons of group X are "smarter" than Y, or that all persons in group Y are "less intelligent" than X. These are statistical averages extrapolated across populations. Evolutionary hypotheses vary, but a consensus seems to be emerging that geography is the determining factor for cognitive ability, in addition to other differences between races. Basically, populations below vs. above the equator have had to deal with different environmental/geographical conditions, which have shaped the populations'
evolution. In the extreme dry conditions of East Africa, for example, evolution selected for persons who could traverse great distances on foot. (This, by the way, provides the most plausible explanation for why Kenyans dominate marathon races; the body morphology of a Kenyan is noticeably different from that of a Somalian, Chinese, Swede, etc. Different limb-to-trunk ratios and the like.) Likewise, in the harsh winters of Europe (a geographical area with four distinct seasons), evolution selected for better spatial reasoning skills and the propensity for longer-term planning (think preparations for the winter season, the difficulty of finding food in the winter, etc.), as well as for higher levels of altruistic, contractual social relations (e.g., I help you build your winter home and you promise to help me build mine). Such evidence of differing cognitive skills among racial groups is, of course, absolutely appalling to the liberal's sensibilities. It goes against their romanticized, Rousseauian notion of the mind as a "blank slate". Many want to believe, and insist on believing, that all minds are identical and that, with the right environmental circumstances, anyone can excel in math, or become an Einstein or a Mozart. The irony here, of course, is that these same people usually don't question the idea that child prodigies -- a Kasparov or a Mozart -- are "born that way". Nonetheless, when it comes to official science on the matter, liberals refuse to even entertain the hypothesis that cognitive ability (i.e., brain structure and functionality) might have a genetic component, let alone actually acknowledge the evidence. Hence, their reaction is to not even bother to look at the body of evidence, dismissing in toto the entire intellectual pursuit. Politically correct, ad hominem attacks ensue, lobbed at the scientists who dare to look at this phenomenon in the first place. The science is then stultified and clouded by taboo.
(Witness the tentativeness and recalcitrance among the small number of scientists involved with pharmaceutical discoveries that affect only certain races, e.g., the recent news story of [6]NitroMed, the first drug targeting heart conditions unique to blacks.) As was the case with Copernicus, who had to shelve his findings for 30 years for fear of assured persecution by the Catholic church, so is the case with the field of population genetics. Unfortunately, truth has an annoying way of rearing its ugly head indefinitely. The evidence cannot simply be "wished away" and so will inevitably be a reality that social theory will sooner or later have to recognize. We're still decades away from an honest dialogue about race and group disparities in this country, so rife is our society with conditioned liberal white guilt for the plights of different ethnic groups. In the meantime, the meta-analysis of the Bell Curve has yet to be disproven. [7]Gene Expression is an excellent site that reports on current research and findings. [8]Steve Sailer, President of the Human Biodiversity Institute, writes about these matters frequently on his excellent site. Sailer [9]writes: Much of the Race Does Not Exist cant stems from the following logic (if you can call it logic): "If there really are different racial groups, then one must be The Master Race, which means -- oh my God -- that Hitler Was Right! Therefore, we must promote whatever ideas most confuse the public about race. Otherwise, they will learn the horrible truth and they'll all vote Nazi." Look, this is one big non sequitur: Of course, there are different racial groups. And of course their members tend to inherit certain different genes, on average, than the members of other racial groups. And that means racial groups will differ, on average, in various innate capabilities. But that also means that no group can be supreme at all jobs. To be excellent at one skill frequently implies being worse at something else.
So, there can't be a Master Race. Sports fans can cite countless examples. Men of West African descent monopolize the Olympic 100m dash, but their explosive musculature, which is so helpful in sprinting, weighs them down in distance running, where they are also-rans. Similarly, there are far more Samoans in the National Football League than Chinese, simply because Samoans tend to be much, much bigger. But precisely because Samoans are so huge, they'll never do as well as the Chinese in gymnastics. Race is a fundamental aspect of the human condition. Burying our heads in the sand and refusing to think about this fundamental aspect only makes the inevitable problems caused by race harder to overcome. References 1. http://www.featuringdave.com/logicalmeme/atom.xml 2. http://logicalmeme.com/ 3. http://www.marginalrevolution.com/marginalrevolution/2005/04/politically_inc.html 4. http://www.marginalrevolution.com/marginalrevolution/2005/04/politically_inc.html 5. http://www.amazon.com/exec/obidos/tg/detail/-/0029146739/qid=1114562677/sr=8-1/ref=pd_csp_1/103-4217589-2775837?v=glance&s=books&n=507846 6. http://www.featuringdave.com/logicalmeme/2005/02/fda-faces-racial-medicine-debate.html 7. http://www.gnxp.com/ 8. http://www.isteve.com/ 9. http://www.vdare.com/sailer/may_24.htm 10. http://www.featuringdave.com/logicalmeme/2005/04/bell-curve-latest-addition-to-meta.html 11. http://www.haloscan.com/comments/sodafizz13/111456326660041073/ 12. http://www.haloscan.com/tb/sodafizz13/111456326660041073/ From checker at panix.com Thu May 12 00:43:13 2005 From: checker at panix.com (Premise Checker) Date: Wed, 11 May 2005 20:43:13 -0400 (EDT) Subject: [Paleopsych] The New Yorker: Brain Candy Message-ID: Brain Candy http://www.newyorker.com/critics/books/articles/050516crbo_books by MALCOLM GLADWELL Is pop culture dumbing us down or smartening us up? Issue of 2005-05-16 Posted 2005-05-09 Twenty years ago, a political philosopher named James Flynn uncovered a curious fact. 
Americans--at least, as measured by I.Q. tests--were getting smarter. This fact had been obscured for years, because the people who give I.Q. tests continually recalibrate the scoring system to keep the average at 100. But if you took out the recalibration, Flynn found, I.Q. scores showed a steady upward trajectory, rising by about three points per decade, which means that a person whose I.Q. placed him in the top ten per cent of the American population in 1920 would today fall in the bottom third. Some of that effect, no doubt, is a simple by-product of economic progress: in the surge of prosperity during the middle part of the last century, people in the West became better fed, better educated, and more familiar with things like I.Q. tests. But, even as that wave of change has subsided, test scores have continued to rise--not just in America but all over the developed world. What's more, the increases have not been confined to children who go to enriched day-care centers and private schools. The middle part of the curve--the people who have supposedly been suffering from a deteriorating public-school system and a steady diet of lowest-common-denominator television and mindless pop music--has increased just as much. What on earth is happening? In the wonderfully entertaining "Everything Bad Is Good for You" (Riverhead; $23.95), Steven Johnson proposes that what is making us smarter is precisely what we thought was making us dumber: popular culture. Johnson is the former editor of the online magazine Feed and the author of a number of books on science and technology. There is a pleasing eclecticism to his thinking. He is as happy analyzing "Finding Nemo" as he is dissecting the intricacies of a piece of software, and he's perfectly capable of using Nietzsche's notion of eternal recurrence to discuss the new creative rules of television shows. 
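Gladwell's Flynn-effect arithmetic above (scores rising about three points per decade, so that someone in the top ten per cent in 1920 would land in the bottom third today) can be sanity-checked with a short calculation. This is only an illustrative sketch: it assumes IQ is normed to a mean of 100 and a standard deviation of 15, and takes the span from 1920 to roughly 2005 as 8.5 decades; the three-points-per-decade rate is the article's figure.

```python
from statistics import NormalDist

iq = NormalDist(mu=100, sigma=15)  # assumed norming: mean 100, SD 15

# Score cutoff for the top 10% of the population, on the norms of its own era.
top_decile_1920 = iq.inv_cdf(0.90)          # ~119.2

# Flynn effect: ~3 IQ points per decade over 8.5 decades (1920 -> ~2005).
flynn_gain = 3 * 8.5                         # 25.5 points

# Re-express that 1920 top-decile score against today's norms.
score_today = top_decile_1920 - flynn_gain   # ~93.7
percentile_today = iq.cdf(score_today)       # ~0.34

print(round(top_decile_1920, 1), round(score_today, 1), round(percentile_today, 2))
```

A score at today's 34th percentile does indeed fall in the bottom third, which matches the claim in the article.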
Johnson wants to understand popular culture--not in the postmodern, academic sense of wondering what "The Dukes of Hazzard" tells us about Southern male alienation but in the very practical sense of wondering what watching something like "The Dukes of Hazzard" does to the way our minds work. As Johnson points out, television is very different now from what it was thirty years ago. It's harder. A typical episode of "Starsky and Hutch," in the nineteen-seventies, followed an essentially linear path: two characters, engaged in a single story line, moving toward a decisive conclusion. To watch an episode of "Dallas" today is to be stunned by its glacial pace--by the arduous attempts to establish social relationships, by the excruciating simplicity of the plotline, by how obvious it was. A single episode of "The Sopranos," by contrast, might follow five narrative threads, involving a dozen characters who weave in and out of the plot. Modern television also requires the viewer to do a lot of what Johnson calls "filling in," as in a "Seinfeld" episode that subtly parodies the Kennedy assassination conspiracists, or a typical "Simpsons" episode, which may contain numerous allusions to politics or cinema or pop culture. The extraordinary amount of money now being made in the television aftermarket--DVD sales and syndication--means that the creators of television shows now have an incentive to make programming that can sustain two or three or four viewings. Even reality shows like "Survivor," Johnson argues, engage the viewer in a way that television rarely has in the past: When we watch these shows, the part of our brain that monitors the emotional lives of the people around us--the part that tracks subtle shifts in intonation and gesture and facial expression--scrutinizes the action on the screen, looking for clues. . . . The phrase "Monday-morning quarterbacking" was coined to describe the engaged feeling spectators have in relation to games as opposed to stories. 
We absorb stories, but we second-guess games. Reality programming has brought that second-guessing to prime time, only the game in question revolves around social dexterity rather than the physical kind. How can the greater cognitive demands that television makes on us now, he wonders, not matter? Johnson develops the same argument about video games. Most of the people who denounce video games, he says, haven't actually played them--at least, not recently. Twenty years ago, games like Tetris or Pac-Man were simple exercises in motor coordination and pattern recognition. Today's games belong to another realm. Johnson points out that one of the "walk-throughs" for "Grand Theft Auto III"--that is, the informal guides that break down the games and help players navigate their complexities--is fifty-three thousand words long, about the length of his book. The contemporary video game involves a fully realized imaginary world, dense with detail and levels of complexity. Indeed, video games are not games in the sense of those pastimes--like Monopoly or gin rummy or chess--which most of us grew up with. They don't have a set of unambiguous rules that have to be learned and then followed during the course of play. This is why many of us find modern video games baffling: we're not used to being in a situation where we have to figure out what to do. We think we only have to learn how to press the buttons faster. But these games withhold critical information from the player. Players have to explore and sort through hypotheses in order to make sense of the game's environment, which is why a modern video game can take forty hours to complete. Far from being engines of instant gratification, as they are often described, video games are actually, Johnson writes, "all about delayed gratification--sometimes so long delayed that you wonder if the gratification is ever going to show." At the same time, players are required to manage a dizzying array of information and options.
The game presents the player with a series of puzzles, and you can't succeed at the game simply by solving the puzzles one at a time. You have to craft a longer-term strategy, in order to juggle and coordinate competing interests. In denigrating the video game, Johnson argues, we have confused it with other phenomena in teen-age life, like multitasking--simultaneously e-mailing and listening to music and talking on the telephone and surfing the Internet. Playing a video game is, in fact, an exercise in "constructing the proper hierarchy of tasks and moving through the tasks in the correct sequence," he writes. "It's about finding order and meaning in the world, and making decisions that help create that order." It doesn't seem right, of course, that watching "24" or playing a video game could be as important cognitively as reading a book. Isn't the extraordinary success of the "Harry Potter" novels better news for the culture than the equivalent success of "Grand Theft Auto III"? Johnson's response is to imagine what cultural critics might have said had video games been invented hundreds of years ago, and only recently had something called the book been marketed aggressively to children: Reading books chronically understimulates the senses. Unlike the longstanding tradition of gameplaying--which engages the child in a vivid, three-dimensional world filled with moving images and musical sound-scapes, navigated and controlled with complex muscular movements--books are simply a barren string of words on the page. . . . Books are also tragically isolating. While games have for many years engaged the young in complex social relationships with their peers, building and exploring worlds together, books force the child to sequester him or herself in a quiet space, shut off from interaction with other children. . . . But perhaps the most dangerous property of these books is the fact that they follow a fixed linear path.
You can't control their narratives in any fashion--you simply sit back and have the story dictated to you. . . . This risks instilling a general passivity in our children, making them feel as though they're powerless to change their circumstances. Reading is not an active, participatory process; it's a submissive one. He's joking, of course, but only in part. The point is that books and video games represent two very different kinds of learning. When you read a biology textbook, the content of what you read is what matters. Reading is a form of explicit learning. When you play a video game, the value is in how it makes you think. Video games are an example of collateral learning, which is no less important. Being "smart" involves facility in both kinds of thinking--the kind of fluid problem solving that matters in things like video games and I.Q. tests, but also the kind of crystallized knowledge that comes from explicit learning. If Johnson's book has a flaw, it is that he sometimes speaks of our culture being "smarter" when he's really referring just to that fluid problem-solving facility. When it comes to the other kind of intelligence, it is not clear at all what kind of progress we are making, as anyone who has read, say, the Gettysburg Address alongside any Presidential speech from the past twenty years can attest. The real question is what the right balance of these two forms of intelligence might look like. "Everything Bad Is Good for You" doesn't answer that question. But Johnson does something nearly as important, which is to remind us that we shouldn't fall into the trap of thinking that explicit learning is the only kind of learning that matters. In recent years, for example, a number of elementary schools have phased out or reduced recess and replaced it with extra math or English instruction. This is the triumph of the explicit over the collateral. 
After all, recess is "play" for a ten-year-old in precisely the sense that Johnson describes video games as play for an adolescent: an unstructured environment that requires the child actively to intervene, to look for the hidden logic, to find order and meaning in chaos. One of the ongoing debates in the educational community, similarly, is over the value of homework. Meta-analysis of hundreds of studies done on the effects of homework shows that the evidence supporting the practice is, at best, modest. Homework seems to be most useful in high school and for subjects like math. At the elementary-school level, homework seems to be of marginal or no academic value. Its effect on discipline and personal responsibility is unproved. And the causal relation between high-school homework and achievement is unclear: it hasn't been firmly established whether spending more time on homework in high school makes you a better student or whether better students, finding homework more pleasurable, spend more time doing it. So why, as a society, are we so enamored of homework? Perhaps because we have so little faith in the value of the things that children would otherwise be doing with their time. They could go out for a walk, and get some exercise; they could spend time with their peers, and reap the rewards of friendship. Or, Johnson suggests, they could be playing a video game, and giving their minds a rigorous workout. From checker at panix.com Thu May 12 00:43:29 2005 From: checker at panix.com (Premise Checker) Date: Wed, 11 May 2005 20:43:29 -0400 (EDT) Subject: [Paleopsych] The Age (au): Curiouser and curiouser Message-ID: Curiouser and curiouser http://www.theage.com.au/news/Arts/Curiouser-and-curiouser/2005/05/08/1115491042465.html?oneclick=true 5.5.9 Robyn Archer, former Melbourne festival director, spoke on art and curiosity on Saturday night. The arts must ask its audience to throw out a lifeline to curiosity in all things in order to survive and prosper. 
By Robyn Archer. In my essay "The Myth of the Mainstream" (Currency Press, April 2005), I talk about curiosity being amongst the finest virtues of humankind and how this was revealed to me when I looked back trying to recall the first symptoms of my father's frontal lobe dementia: I eventually came to feel that it was the loss of this ever-curious man's vital stream of curiosity. The curiosity factor has become a potent metaphor for me. I can't help thinking that once the blinkers are on, there's only one road ahead--and that leads to death. Anyone who approaches art, or virtually anything, only wishing to defend their own tastes, anyone who won't look at something because they fear it won't be to their liking, anyone who bags something before they've seen it, might as well be dead already. They've lost their sense of curiosity. They're winding down. One of the most powerful, and perhaps accidental, foes of the preservation and stimulation of curiosity is that all-pervasive factor of contemporary life - marketing. I say "accidental" because at the start of the 20th century, and at various peaks throughout that time, when waves of social reform produced increasing numbers of citizens (first in the developed and then the developing nations) who experienced a phenomenon until that point known only to the wealthy - that is, leisure time and spare cash - marketing itself was a new and exciting tool that existed precisely to stimulate curiosity. But as early as the mid-19th century, snake-oil merchants were derided so thoroughly that their profession entered the vernacular, and in the 1950s the same happened to used car salesmen.
These days, when marketing is a career, one still sees the altruistic basis of the trade in the very best of the advertising branch of marketing, when the skill of art directors, graphic and video artists and especially concept developers grabs our attention and stimulates our curiosity enough to take the next step and try to satisfy that curiosity about an available product, be that new technology, food, insurance, entertainment or art. The difficulty for art, unlike entertainment, is that it is hard to commodify and therefore hard to market. And the very art most likely to stimulate the sense of curiosity is always the most fragile. Marketing, when applied to the arts, is at best crude and at worst destructive. Hype may get your audience to the shimmering waters of art, but the audience will not necessarily drink. In precisely the same way as one needs to address the people at this time, not the politicians, the arts must go outside the segmented and ultimately blinkered nature of targeted marketing, and ask its potential audience (which I believe knows no bounds) to throw out a lifeline to curiosity in all things. If ordinary people are scared of the arts - a claim resting on a highly disputed piece of parochial, reductive and selective marketing research - then our job as artists and commissioners of new work is not to try to persuade them to enjoy a particular piece of art, or even art itself, but to ask them to live again fully in all things, and put an end to lives which are driven madly by the false ideals, objects, and icons which marketing itself has created. This achieved, audiences would come thirsty to art and ready to drink deeply. There would scarcely be a need for marketing in the arts. Finding the means to achieve that is the challenge. 
Until that renewal is complete, the parallel universe of the small symbolic following for art must be maintained, and new works from artists vigorously commissioned and nurtured in the knowledge that the ripple effect of the creative lightning bolt will always remain an important part of what artists, their commissioner-presenters, and the informed critique that broadcasts these actions always do when that tripartite cultural activity is in a state of grace and good health. This is an extract from Robyn Archer's Alfred Deakin Innovation Lecture "Imagination and the Audience: Commissioning for Creativity" delivered on Saturday at the Melbourne Town Hall. From checker at panix.com Thu May 12 00:45:10 2005 From: checker at panix.com (Premise Checker) Date: Wed, 11 May 2005 20:45:10 -0400 (EDT) Subject: [Paleopsych] Wired News: Time Travelers Welcome at MIT Message-ID: Time Travelers Welcome at MIT http://www.wired.com/news/print/0,1294,67451,00.html By [21]Mark Baard 02:00 AM May. 09, 2005 PT CAMBRIDGE, Massachusetts -- If John Titor was at the Time Traveler Convention last Saturday night at MIT, he kept a low profile. Titor, the notorious internet discussion group member who claims to be from the year 2036, was among those invited to the [23]convention, where any time traveler would have been ushered in as an honored guest. "We were hoping [28]Titor might show up," said Massachusetts Institute of Technology grad student Amal Dorai, convention organizer. "Maybe he's going to make a grand entrance." The convention, which drew more than 400 people from our present time period, was held at MIT's storied East Campus dormitory. It featured an MIT rock band, called the Hong Kong Regulars, and hilarious lectures by MIT physics professors. The profs were treated like pop stars by attendees fascinated by the possibility of traveling back in time. 
East Campus housemaster Julian Wheatley, also a senior lecturer in Chinese at MIT, wore a name tag suggesting he had come back from 2121 to attend the convention. "East Campus is known for taking a certain kind of zany approach to science," Wheatley said. Centrally located on the MIT campus, the [29]East Campus dormitory houses students with a reputation for turning out offbeat inventions, such as a person-sized hamster wheel and a roller coaster built from two-by-fours. The East Campus dorm's peculiar reputation and the Time Traveler Convention's far-out theme may explain why so many people made the effort to travel in driving rain to a two-hour event. A fan of the [30]Cat and Girl internet comic strip, which Dorai credits with giving him the idea for the convention, drove a band of jugglers up from the [31]Yale University campus, in New Haven, Connecticut. Others took Greyhound or Chinatown buses from New York. "We thought it would be cool to be visited by ourselves from the future," said Shauna Anthony, who traveled from New York with fellow [32]New School University graduate student Sara Moore. The MIT convention was the second public attempt this year to draw time travelers to a specific place at a more-or-less specific time. In March, an Australian group called the [33]Destination Day Bureau made its own shout-out to time travelers in Perth, Australia, by placing a welcome plaque in a public square. (Look up [34]photos from MIT and Perth.) MIT's Dorai gave interviews ahead of time to major media outlets to ensure that no one in the future missed his invitation: to share chips and soda with people sporting tweed jackets and canes, and those dressed up as their favorite science fiction and fantasy characters. But when attendees gathered outside for a raucous countdown at 10 p.m. Eastern Standard Time, nothing appeared on the makeshift landing pad at the coordinates Dorai set for the time travelers. 
Fog from an aqueous smoke machine rolled across the empty landing area, which lay at one end of a sand volleyball court in the East Campus courtyard. One person in the crowd shouted, "Happy New Year," while another suggested the time travelers may have mistakenly set their watches for Central Standard Time. A group of students then raided a plate of treats set out for the time travelers, while others snapped pictures of the scene with their cell phones and digital cameras. Conventioneers should not be surprised if people from the future pull a no-show at a special event, said MIT physics professor [35]Alan Guth. If time travel were possible, a visitor from the future would not wait for an invitation to come back, Guth said, in a lecture that had the crowd inside MIT's Morss Hall in stitches. Even if traveling back in time were forbidden, "You'd think some teenager might take the keys to the family time machine," said Guth, "and we'd see him streaking across the sky, with music blaring out the window." But many of the convention's attendees were on edge in the minutes leading up to 10 o'clock. The audience gasped when the podium in front of Guth collapsed during his lecture. A few minutes later, several sat up in their chairs when a musician standing beside the stage dropped his guitar on a cymbal. It's actually a blessing that no one from the future showed up on Saturday night, said David Batchelor, the NASA physicist who wrote "[36]The Science of Star Trek." Speaking on his own behalf and not for NASA in a phone interview, Batchelor noted the same potential risks mentioned by speakers at the convention, such as the displacement of matter in a finite universe caused by the introduction of someone from another time. He also touched on the paradoxes arising from such acts as going back in time and killing one's own ancestors. "We should breathe a sigh of relief," said Batchelor, who considered his decision not to go to the convention a safe bet. 
"It means we were protected from the chaos that would result if someone came back and changed something." Shauna Anthony and Sara Moore, the New School University graduate students, also fretted over what might happen if they got what they came for: a visit from their future selves. "What if the future Shauna came back with just one leg?" asked Moore. "We'd spend the rest of our lives worrying about how and when that would happen." References 21. http://www.wired.com/news/feedback/mail/1,2330,0-540-67451,00.html 23. http://web.mit.edu/adorai/timetraveler 28. http://www.johntitor.com/ 29. http://ec.mit.edu/ 30. http://www.catandgirl.com/ 31. http://www.yale.edu/ 32. http://www.newschool.edu/ 33. http://www.destinationday.net/ 34. http://www.flickr.com/photos/40397332 at N00/sets/315938 35. http://web.mit.edu/physics/facultyandstaff/faculty/alan_guth.html 36. http://ssdoo.gsfc.nasa.gov/education/just_for_fun/startrek.html From anonymous_animus at yahoo.com Thu May 12 18:30:01 2005 From: anonymous_animus at yahoo.com (Michael Christopher) Date: Thu, 12 May 2005 11:30:01 -0700 (PDT) Subject: [Paleopsych] business/crime In-Reply-To: <200505121800.j4CI0PR00568@tick.javien.com> Message-ID: <20050512183001.41771.qmail@web30806.mail.mud.yahoo.com> >>The reality is that sometimes the characteristics that make someone successful in business or government can render them unpleasant personally. What's more astonishing is that those characteristics when exaggerated are the same ones often found in criminals.<< --That actually makes sense. 
I met a former gang leader once. He had all the qualities I've seen in successful businesspeople, specifically the ability to "map" the people around him and manipulate them using their own language. I don't think that would be as easy for someone who is fully connected to people; it requires a certain emotional distance and the ability to move people around on a mental chessboard. Gang members often use military or business metaphors as well. The mindset is not too different, only the playing field. Michael Yahoo! Mail Stay connected, organized, and protected. Take the tour: http://tour.mail.yahoo.com/mailtour.html From anonymous_animus at yahoo.com Thu May 12 18:32:10 2005 From: anonymous_animus at yahoo.com (Michael Christopher) Date: Thu, 12 May 2005 11:32:10 -0700 (PDT) Subject: [Paleopsych] gender In-Reply-To: <200505121800.j4CI0PR00568@tick.javien.com> Message-ID: <20050512183210.16275.qmail@web30804.mail.mud.yahoo.com> >>I think it's a really interesting possibility that the forces that were active in our evolutionary past have led men and women to evolve somewhat differing concerns. But to jump from that possibility into the present, and draw conclusions about what people's motives will be for pursuing one or another career, is way too big a stretch.<< --Speaking for myself, as I get praise and recognition for different traits, those traits develop faster. So it wouldn't surprise me if girls who don't get praise for being good at math or science beyond grade school choose careers in another direction, or if boys who don't get recognition for being good with color don't go on to become interior designers. Michael __________________________________ Yahoo! Mail Mobile Take Yahoo! Mail with you! Check email on your mobile phone. 
http://mobile.yahoo.com/learn/mail From checker at panix.com Thu May 12 19:13:39 2005 From: checker at panix.com (Premise Checker) Date: Thu, 12 May 2005 15:13:39 -0400 (EDT) Subject: [Paleopsych] Wired: Rob Carlson: Who needs a geneticist? Build your own DNA lab Message-ID: Rob Carlson: Who needs a geneticist? Build your own DNA lab http://www.wired.com/wired/archive/13.05/view.html?pg=2?tw=wn_tophead_5 Splice It Yourself The era of garage biology is upon us. Want to participate? Take a moment to buy yourself a molecular biology lab on eBay. A mere $1,000 will get you a set of precision pipettors for handling liquids and an electrophoresis rig for analyzing DNA. Side trips to sites like BestUse and LabX (two of my favorites) may be required to round out your purchases with graduated cylinders or a PCR thermocycler for amplifying DNA. If you can't afford a particular gizmo, just wait six months - the supply of used laboratory gear only gets better with time. Links to sought-after reagents and protocols can be found at DNAHack. And, of course, Google is no end of help. Still, don't expect to cure cancer right away, surprise your loved ones with a stylish new feather goatee, or crank out a devilish frankenbug. (Instant bioterrorism is likely beyond your reach, too.) The goodies you buy online require practice to use properly. The necessary skills may be acquired through trial and error, studying online curricula, or taking a lab course at a community college. Although there are cookbook recipes for procedures to purify DNA or insert it into a bacterium, bench biology is not easy; the many molecular manipulations required to play with genes demand real skills. Science, after all, involves doing things no one has done before, and it most often requires developing new art. But art can be learned, and, more important, this kind of art can be taught to robots. 
They excel at repetitive tasks requiring consistent precision, and an online search will uncover a wide variety of lab automation tools for sale. For a few hundred to a few thousand dollars, you can purchase boxy-looking robots with spindly arms that handle platefuls of samples, mix and distribute reagents - and make a fine martini. Some of the units are sophisticated enough that you can teach them all the new tricks published in fancy journals. Just make sure you have plenty of electrical outlets. That said, actually manipulating a genome with your new tools requires learning something about software that helps design gene sequences. These bioinformatics programs are all over the Web, and in no time you'll be tweaking genome sequences on your computer late into the night. But while you may discover some interesting relationships between organisms, and with access to the right databases you may even find a connection between a mutation and a disease (no mean contribution, to be sure), the real work gets done at the lab bench. If you want to get down and dirty bashing DNA, order genetic parts suitable for use in E. coli from the synthetic biology group at MIT (available soon). These genes constitute a library of defined components that can be assembled into control systems for biological computation, or used to program bacteria in order to produce interesting proteins and other compounds. There's even an online design tool for genetic circuits. If you're more into hacking plants - perhaps you want true plastic fruit growing on your tomato vine or apple tree - head to BioForge, where you can get expert info. Concern that these resources can be used intentionally to create hazardous organisms is overblown. Relatively few labs possess all the necessary equipment for the task. Despite the recent demonstration of working viruses constructed from mail-order DNA, repeating those feats would be difficult. 
Yet it is getting easier to synthesize whole genomes, particularly if your aims aren't sinister. Instead of trying to assemble a viral or bacterial genome yourself, you can order the whole sequence online from Blue Heron Biotechnology, where researchers will first check it for genes in known pathogens and toxins, and then, two to four weeks later, FedEx you the DNA. A few thousand dollars will buy a couple of genes, enough for a simple control circuit; soon it will buy most of a bacterial genome. And your Synthetic Biology at Home project will get easier when microfluidic DNA synthesizers hit the market. These have already been used to write sequences equivalent in size to small bacterial genomes, a capability currently limited to a few academic and industrial labs - but not for much longer. The advent of garage biology is at hand. Skills and technology are proliferating, and the synthesis and manipulation of genomes are no longer confined to ivory towers. The technology has even reached the toy market: The Discovery DNA Explorer kit for kids 10 and older (see Wired, issue 11.12) is surprisingly functional at $80, and how long will it be before we see a Slashdot story about a Lego Mindstorms laboratory robot? Sure, few high school students will be able to pay for this equipment with their earnings from Mickey D's, but anyone who spends a few thou on cars, boats, or computers can get to work hacking biology tomorrow. Rob Carlson (carlson at ee.washington.edu) is a senior scientist in the Department of Electrical Engineering at the University of Washington. From checker at panix.com Thu May 12 19:13:55 2005 From: checker at panix.com (Premise Checker) Date: Thu, 12 May 2005 15:13:55 -0400 (EDT) Subject: [Paleopsych] Slate: The Adderall Me: My romance with ADHD meds. Message-ID: The Adderall Me: My romance with ADHD meds. 
http://slate.msn.com/id/2118315/fr/rss/ By Joshua Foer Updated Tuesday, May 10, 2005, at 4:26 AM PT Depressives have Prozac, worrywarts have Valium, gym rats have steroids, and overachievers have Adderall. Usually prescribed to treat Attention Deficit Hyperactivity Disorder (read Sydney Spiesel in Slate on the risks and benefits), the drug is a cocktail of amphetamines that increases alertness, concentration, and mental-processing speed and decreases fatigue. It's often called a cognitive steroid because it can make people better at whatever it is they're doing. When scientists administered amphetamines to Stanford's varsity swim team, lap times improved by 4 percent. According to one recent study, as many as one in five college students have taken Adderall or its chemical cousin Ritalin as study buddies. The drug also has a distinguished literary pedigree. During his most productive two decades, W.H. Auden began every morning with a fix of Benzedrine, an over-the-counter amphetamine similar to Adderall that was used to treat nasal congestion. James Agee, Graham Greene, and Philip K. Dick all took the drug to increase their output. Before the FDA made Benzedrine prescription-only in 1959, Jack Kerouac got hopped up on it and wrote On the Road in a three-week "kick-writing" session. "Amphetamines gave me a quickness of thought and writing that was at least three times my normal rhythm," another devotee, Jean-Paul Sartre, once remarked. If stimulants worked for those writers, why not for me? Who wouldn't want to think faster, be less distracted, write more pages? I asked half a dozen psychiatrists about the safety of using nonprescribed Adderall for performance-enhanced journalism. Most of them told me the same thing: Theoretically, if used responsibly at a low dosage by someone who isn't schizophrenic, doesn't have high blood pressure, isn't on other medications, and doesn't have some other medical condition, the occasional use of Adderall is probably harmless. 
Doctors have been prescribing the drug for long enough to know that, unlike steroids, it has no long-term health consequences. Provided Adderall isn't snorted, injected, or taken in excessive amounts, it's not highly addictive - though without doctor oversight, it's hard to know whether you're in the minority of people for whom the drug may be dangerous. As an experiment, I decided to take Adderall for a week. The results were miraculous. On a recent Tuesday, after whipping my brother in two out of three games of pingpong - a triumph that has occurred exactly once before in the history of our rivalry - I proceeded to best my previous high score by almost 10 percent in the online anagrams game that has been my recent procrastination tool of choice. Then I sat down and read 175 pages of Stephen Jay Gould's impenetrably dense book The Structure of Evolutionary Theory. It was like I'd been bitten by a radioactive spider. The first hour or so of being on Adderall is mildly euphoric. The feeling wears off quickly, giving way to a calming sensation, like a nicotine buzz, that lasts for several hours. When I tried writing on the drug, it was like I had a choir of angels sitting on my shoulders. I became almost mechanical in my ability to pump out sentences. The part of my brain that makes me curious about whether I have new e-mails in my inbox apparently shut down. Normally, I can only stare at my computer screen for about 20 minutes at a time. On Adderall, I was able to work in hourlong chunks. I didn't feel like I was becoming smarter or even like I was thinking more clearly. I just felt more directed, less distracted by rogue thoughts, less day-dreamy. I felt like I was clearing away underbrush that had been obscuring my true capabilities. At the same time, I felt less like myself. Though I could put more words to the page per hour on Adderall, I had a nagging suspicion that I was thinking with blinders on. This is a concern I've heard from other users of the drug. 
One writer friend who takes Adderall to read for long uninterrupted stretches told me that he uses it only rarely because he thinks it stifles his creativity. A musician told me he finds it harder to make mental leaps on the drug. "It's something I've heard consistently," says Eric Heiligenstein, clinical director of psychiatry at the University of Wisconsin. "These medications allow you to be more structured and more rigid. That's the opposite of the impulsivity of creativity." On the other hand, lots of talented people like Auden and Kerouac have taken amphetamines precisely because they find them inspiring. Kerouac and the Beats ingested the drug in such heroic quantities that it didn't just make them more focused, it completely transformed their writing. According to legend, On the Road was drafted in a 120-foot-long single-spaced paragraph that burbled down a single continuous scroll of paper. Adderall is supposed to be effective for four to six hours. (An extended-release version of the drug, which as Spiesel explains was recently banned in Canada, lasts 12 hours.) But I found the effects gradually wore off after about three. About six hours after taking the drug, I would feel slightly groggy, the way I sometimes get in the early afternoon when my morning coffee wears off. But when I'd lie down for an afternoon nap, I couldn't go to sleep. My mind was still buzzing. This withdrawal effect is common. Adderall users often complain that they feel tired, "stupid," or depressed the day after. After running on overdrive, your body has to crash. For me, the comedown was mild, a small price to pay for an immensely productive day. But there are larger costs, and risks, to Adderall. Though the Air Force furnishes amphetamine "go pills" to its combat pilots in Iraq and Afghanistan, possessing Adderall (or a fighter jet) without a prescription is a felony in many states. 
And the drug has been known, in rare cases, to make people obsessive-compulsive, and even occasionally to cause psychosis. Several years ago, a North Dakota man blamed Adderall for making him murder his infant daughter and won an acquittal. There's also the risk that Adderall can work too well. The mathematician Paul Erdős, who famously opined that "a mathematician is a device for turning coffee into theorems," began taking Benzedrine in his late 50s and credited the drug with extending his productivity long past the expiration date of his colleagues. But he eventually became psychologically dependent. In 1979, a friend offered Erdős $500 if he could kick his Benzedrine habit for just a month. Erdős met the challenge, but his productivity plummeted so drastically that he decided to go back on the drug. After a 1987 Atlantic Monthly profile discussed his love affair with psychostimulants, the mathematician wrote the author a rueful note. "You shouldn't have mentioned the stuff about Benzedrine," he said. "It's not that you got it wrong. It's just that I don't want kids who are thinking about going into mathematics to think that they have to take drugs to succeed." Erdős had good reason to worry. Kerouac's excessive use of Benzedrine eventually landed him in a hospital with thrombophlebitis. Auden went through a withdrawal in the late 1950s that tragically curtailed his output. That's some trouble I don't need. Perhaps I could get a regular supply of Adderall by persuading a psychiatrist that I have ADHD - it's supposed to be one of the easiest disorders to fake. But I don't think I will. Although I did save one pill to write this article. 
From checker at panix.com Thu May 12 19:16:58 2005 From: checker at panix.com (Premise Checker) Date: Thu, 12 May 2005 15:16:58 -0400 (EDT) Subject: [Paleopsych] Telegraph: Stress 'makes us live longer' Message-ID: Stress 'makes us live longer' http://news.telegraph.co.uk/news/main.jhtml?xml=/news/2005/05/09/nstress09.xml By Nicole Martin (Filed: 09/05/2005) Short bursts of stress can help people stay young, according to research which undermines the belief that a frenzied lifestyle can damage your health. Such exposure to stress will prolong life and help prevent chronic illnesses such as arthritis, Alzheimer's and Parkinson's disease, Dr Marios Kyriazis told an anti-ageing conference in London. He recommended that, rather than avoiding stress, people should subject themselves to mild doses of frantic activity, which might include packing a suitcase in a hurry before catching a flight, or shopping for a dinner party during the time limit of a lunch break. Dr Kyriazis, the medical director of the British Longevity Society, argued that moderate stress increased the production of proteins that help to repair the body's cells, including brain cells, enabling them to work at peak capacity. "Research shows that cells subjected to stress repair themselves, allowing us to live longer," he said. "As the body ages, this self-repair mechanism starts to slow down. "The best way to keep the process working efficiently is to 'exercise it', in the same way that you would exercise your muscles to keep them strong." Other stressful activities he recommended included giving a best man's speech, following the instruction manual for a new DVD recorder, volunteering to help at a youth club and redecorating a room over a weekend. However, Dr Kyriazis insisted that long-term stress should be avoided. He said that prolonged stressful experiences, which could include caring for an elderly relative, were bad for health. 
From checker at panix.com Thu May 12 19:17:14 2005 From: checker at panix.com (Premise Checker) Date: Thu, 12 May 2005 15:17:14 -0400 (EDT) Subject: [Paleopsych] FRB, Richmond: Interview: Thomas Schelling Message-ID: Interview: Thomas Schelling Region Focus Spring 2005: Interview - Federal Reserve Bank of Richmond http://www.rich.frb.org/publications/economic_research/region_focus/spring_2005/interview.cfm [Schelling is among the very most imaginative economists and a fabulous writer as well. I discovered him in graduate school and am delighted that Alice was able to take an honors course under him when she studied chemical engineering at the University of Maryland.] Thomas Schelling's early research was common fare for economists in the 1950s. The quality of the work may have been higher than most, but the topics were relatively mundane. His first two books were titled simply National Income Behavior and International Economics. But his interests extended beyond the traditional confines of the discipline, a point that was made clear with the publication of The Strategy of Conflict in 1960. In it, he used the tools of economics to illuminate important issues in international relations, while making significant contributions to game theory and laying the groundwork for later research in experimental economics. Schelling has continued to publish on military strategy and arms control throughout his career, but his work has led him to a number of other seemingly disparate issues, such as racial segregation, organized crime, and environmental policy. In each case, he has been able to generate original insights from ordinary observation. As his long-time colleague Richard Zeckhauser has written, Schelling "thinks about the essence of phenomena. In scanning everyday behavior, he sees patterns and paradoxes that others overlook." Schelling spent most of his career at Harvard University, before joining the faculty of the University of Maryland in 1990. 
He is a past president of the American Economic Association and recently worked with other distinguished economists on the Copenhagen Consensus, a project designed to prioritize the largest social problems facing the world. Aaron Steelman interviewed Schelling at his home in Bethesda, Md., on February 7, 2005. --- RF: Your early work focused on topics that were fairly conventional. How did your work progress into areas, such as strategic bargaining, that largely had been beyond the scope of economists? Schelling: In 1948, I had just finished my coursework for the Ph.D. at Harvard, and a friend of mine called from Washington. He was working on the Marshall Plan and said that he had an opportunity to go to Paris but he couldn't leave until he had a replacement. So he asked me if I would like to replace him. I said sure. Eventually, I went to Europe as part of this assignment and worked mainly on negotiations for the European Payments Union. Then, Averell Harriman, who had been head of the Paris office, went to the White House to be President Truman's foreign policy advisor. Harriman asked my boss to go with him, who in turn asked me a few months later to join him. In 1951, the foreign aid program was shifted to the Mutual Security Program, with Harriman as director, in the Executive Office of the President. I moved there, and stayed through the first nine months of the Eisenhower administration. So when I left, I had spent five years in the foreign aid bureaus, largely working on negotiations. That, I believe, was what focused my attention on the type of issues that showed up in The Strategy of Conflict. RF: One of the more famous bargaining situations that you propose in The Strategy of Conflict involves a problem in which communication is incomplete or impossible - the game where two strangers are told to meet in New York City but have not communicated with each other about the meeting place. What does this game tell us about bargaining? 
And what, if any, are the policy implications? Schelling: That little exercise, which I designed to determine if people could coordinate without any communication, became fairly famous and now I am usually identified as the originator of the idea of "focal points." My argument was that in overt negotiations something is required to get people to arrive at a common expectation of an outcome. And the ability to reach such a conclusion without communication suggested to me that there was a psychological phenomenon, even in explicit negotiations, which may work to focus bargainers eventually on that commonly expected outcome. By understanding that, I thought, we may be able to more easily facilitate policy negotiations over such matters as what would be an appropriate division of the spoils, an appropriate division of labor, and so forth. RF: What were the responses when you originally posed this question to people? Schelling: When I first asked that question, way back in the 1950s, I was teaching at Yale. A lot of the people to whom I sent the questionnaire were students, and a large share of them responded: under the clock at the information desk at Grand Central Station. That was because in the 1950s most of the male students in New England were at men's colleges and most of the female students were at women's colleges. So if you had a date, you needed a place to meet, and instead of meeting in, say, New Haven, you would meet in New York. And, of course, all trains went to Grand Central Station, so you would meet at the information desk. Now when I try it on students, they almost never give that response. Some cities have more obvious focal points than others. For instance, if I asked people where would you meet in Paris, they probably would have no trouble. Most would go to the Eiffel Tower. But in other cities, it's not so clear. The question first occurred to me while I was driving across country with two college friends. 
We were going from San Diego to New Hampshire and back, and camping along the way. We stopped in San Antonio and one of the other two guys got out and bought some peanut butter and crackers. While he was gone, a police officer made me move on, and because of the one-way streets, it took me about 10 minutes to get back to where I dropped him off, and he wasn't there. I kept circling around and eventually we found each other. But we realized that this could happen to us in any city, and we should come up with a plan about how to meet if we got separated. We spent the whole afternoon thinking about it individually, but not talking about it, and that evening around the campfire we compared notes. We all wound up in the same place. The criteria we used were the following: Every city had to have this place and there could be only one of it, you had to be able to find it by asking any police officer or fireman, and you had to be able to reach it by public transportation. That narrowed the list down to the town hall or the main police station or the main post office. Well, before we left home, we had each given our mothers a list of cities in which we would look for mail, and the way you get mail when you are traveling across country is to have the letter sent to your name, care of general delivery, and it arrives at the main post office in that city. That occurred to all three of us, and if we had to choose among the places that shared the criteria we described, the main post office seemed to be the obvious choice. RF: You begin many of your papers with examples that are taken from everyday life. For instance, in "Hockey Helmets, Daylight Saving, and Other Binary Choices," you use the case of a player for the Boston Bruins who suffered a severe head injury to demonstrate why some collective action problems can be so difficult to solve - in this case, getting hockey players to voluntarily wear helmets. 
Is this a conscious strategy of yours to engage readers in what otherwise might seem like an abstract discussion? Schelling: I always try to find something that I can put in the first paragraph to make the article sound interesting. It was just a coincidence that the hockey player had been hit in the head and that I had noticed it. It was a good example of a scenario in which everyone might wish to be compelled to do something that they wouldn't do on their own individually. So I think that has been part of my style. I wrote a textbook in international economics that had about a dozen policy chapters. I tried to have the first page of every chapter present an interesting puzzle or phenomenon that would get the interest of the readers. RF: You have written that the "ordinary human being is sometimes ... not a single rational individual. Some of us for some decisions are more like a small collectivity than like the textbook consumer." Could you explain what you mean by this, perhaps through a few examples? Schelling: I started working on that subject in the 1970s when I was asked to join a committee of the National Academy of Sciences on substance abuse and habitual behavior. I was the only economist there. Everyone else was a specialist on a certain type of addictive substance such as heroin or some other health problem like obesity. It seemed to be taken for granted that if you were addicted - whether to heroin or alcohol or nicotine - there wasn't much you could do for yourself. I argued that this was not the case, and gave a number of examples of ways people can help themselves avoid relapse. For instance, one person tried to show how addictive heroin was by pointing out that many former users, even those who had avoided heroin for a long time, would be likely to use the drug again if they were to hang out with the people they used to shoot up with or even if they listened to the same music that they played when they used heroin in the past. 
I pointed out that there was some instructive material right there. Don't associate with the same people. Don't listen to the same music. And if the place where you used to use heroin is on your way to work, find a different route. So even though those people may be inclined to use heroin again, there were clearly some ways in which they could help prevent themselves from having a relapse. The more I thought about this issue, the more I began to conclude that a lot of people have something like two selves - one that desperately wants to drink and one that desperately wants to stay sober because drinking is ruining his life and his family. It's as if those people have two different core value systems. Usually only one is prominent at a given time, and people may try to make sure that the right value system attains permanence by taking precautions that will avoid stimulating the other value system. RF: Some have called you a "dissenter" from mainstream economics. But it seems to me that this is true only insofar as it concerns topics of inquiry. On methodological issues, you don't seem as willing to abandon some of the core assumptions of neoclassical economics as, say, those people who call themselves "behavioral economists." Do you think that this is a fair characterization? Schelling: This is something that I talk about a lot. I claim that we couldn't do without rational choice. But we don't expect rational choice from a child or an Alzheimer's patient or someone suffering from shock. We will better understand the uses and limits of rational choice if we better understand those exceptions. I use the example of the magnetic compass. It's usually a wonderful way to determine which direction north is. But if you are anywhere near the actual north magnetic pole, the compass could point in any direction, even south. The same is true with rational choice. It is a wonderful tool if used when appropriate, but it may not work all the time. 
So I consider myself in the rational-choice school, absolutely. But I am more interested in the exceptions than many other economists tend to be. As for the behavioralist critique of neoclassical economics, I would conjecture that if you walked into a classroom where a behavioralist is teaching microeconomics, that person would teach it in a straight, standard fashion. It's something that you have to master - you can't do without it. For instance, if a student were to ask about the effect of a gasoline tax on driving behavior, the response would likely be that such a tax will tend to lower consumption of gasoline and/or increase the desirability of more fuel efficient cars. That's just straight neoclassical economics. More generally, I think that when a new idea develops, it is important that the enthusiasts are given free rein to explore and perhaps even exaggerate that idea. Once it catches on and becomes respectable, then it's time to become more critical. Rational choice has gone through that process, and the behavioralists have emerged to challenge some of its assumptions. The behavioralists have probably overstated their case, but their ideas are relatively new and will be critiqued as well. I think that people like Dick Thaler and Bob Frank, who are clearly two of the most innovative behavioralist economists today, so much enjoy what they do that I'm not sure if they consciously exaggerate the role of these exceptional situations. When I read Bob Frank, I get the sense that he is passionate, almost emotional about his belief that American consumers are suffering welfare losses because they are spending their money trying to avoid the discomfort of not being equal to their neighbors. I think he overdoes it, and I think that I have told him so. I don't know if his answer today would be, "Of course I overdo it. I'm trying to get attention paid to something I think is important." Or if he would say instead, "No, I don't overdo it. 
I really do believe that the phenomenon is that important." But even if the former is true, I would excuse that. I think that the point is important enough that if exaggeration will help them get it across, let them exaggerate. RF: What is your opinion of modern game theory? Schelling: That's a hard one, because I don't keep up with all the latest work in that field. But I would like to make the following broad claims: Economists who know some game theory are much better equipped to handle a lot of important questions than those who don't. But economists who are game theorists tend to be more interested in the mathematics aspect of the discipline than the social sciences aspect. Some economists of the latter group are good at using their theoretical work to examine policy issues. Still, many - and I think this is especially true of young game theorists - tend to think that what will make them famous is their mathematical sophistication, and integrating game theory with behavioral observations somehow will detract from the rigor of their work. I'll give you an example. I had a student at Harvard named Michael Spence, who a few years ago won the Nobel Prize. Mike wrote a fascinating dissertation about market incentives to engage in excessive competitive expenditure. I was on his committee, and I argued that he needed to do two things. First, summarize the theory in 40 pages. Second, find six to 10 realistic examples to illustrate how the theory worked and why it mattered. He spent much of a year doing that. But in the end, he published the 40-page version of his dissertation in a top-tier journal, and used that paper as the first chapter of a book. Both of them got a lot of attention, and led to his appointment to the Harvard faculty. The reason that I advised him to take this approach was quite simple: If he didn't, other people would and they would get credit for his work because they were able to apply it to real-world questions. 
I think that other economists, especially young game theorists, can learn from this example. Even very technical work often can be used in an applied manner - and this can benefit the work as well as the economist. RF: In 1950, few people would have predicted, I think, that the Cold War would end as peacefully as it did. For example, it is surely notable that the conflict ended without the use of nuclear weapons. Why do you think both sides avoided using means that would have had fairly certain, but catastrophic, consequences? Schelling: I have written and lectured about this quite a bit. When I give a talk on the subject, I begin by stating, "The most important event of the second half of the 20th century is one that didn't happen." I think you have to go through the history to understand it fully. In the early 1950s, it was believed that the likelihood of the United States using nuclear weapons was so great that the Prime Minister of Great Britain came to Washington with the express purpose of persuading the Truman administration not to use them. And because the British had been partners in the development of nuclear weapons, their Parliament thought that the Prime Minister had a good right to share in any decision about how they would be used. As we know, they were not used, but the Eisenhower administration repeatedly asserted that nuclear weapons were just like any other type of weapon, and that they could be used as such. The attitude in the Kennedy and Johnson administrations was quite different. They believed that nuclear weapons were fundamentally different, and their statements helped to build the consensus that their use was taboo - a consensus that may have dissuaded Nixon from using them in Vietnam. Also, in the 1960s there was a great fear that dozens of countries would come to possess nuclear weapons. But the nonproliferation efforts were vastly more successful than most people expected. 
It was thought that Germany was bound to demand them, and that the Japanese couldn't afford to be without them. And then it would spiral down to other countries: the Spanish, the Italians, the Swedes, the South Africans, the Brazilians would all have nuclear weapons. The process by which these countries would acquire them, it was thought, was through nuclear electric power - the reactors would produce enough plutonium to yield weapons. For several reasons, that didn't occur. Israel's restraint in the 1973 war was also very important, I think. Everyone knew that Golda Meir had nuclear weapons, and she had perfect military targets - two Egyptian armies north of the Suez Canal, with no civilians anywhere near. But she didn't use them. Why? Well, you could say, quite reasonably, that they didn't want to suffer worldwide opprobrium. I think, though, that there was probably another reason. She knew that if she did, the Iranians, the Syrians, and other enemies of Israel would likely acquire them and would not be reluctant to use them. In addition, it was not clear in the late 1970s that the Soviets shared the nuclear taboo. Yet, they didn't use them in their war against Afghanistan - and this was also very important. There is a possibility that nuclear weapons will be used in the India-Pakistan dispute. But I'm not especially worried about that. The Indians and the Pakistanis have been involved in nuclear strategic discussions in the West for decades. They have had a long time to think about this, and have watched the U.S.-Soviet negotiations. I think they know that if they were to use nuclear weapons it could easily lead to something beyond their control. So I think that by now the taboo is so firmly entrenched, that it is very unlikely we will see nation-states use nuclear weapons. What we don't know is if that taboo holds for non-state actors. I think that it might, but I don't hold that opinion with much conviction. 
RF: Some policymakers and analysts have argued that diplomacy is much more difficult in today's world than it was during the Cold War because there are now multiple non-state players who seem to place less value on stability than the Soviets did. How does this change the bargaining game? How can economics inform the current conflict with Islamic terrorists? Schelling: One big difference is that you simply don't know who the non-state actors are. We have made a big deal out of Osama bin Laden. But we don't know if he is alive, and if he is alive, whether he still controls the money and organization in the way that he did a few years ago. Also, there are no recognized private channels of communication with non-state actors. If you want to get a message to bin Laden, you either hold a press conference and hope that he will hear it, or send it to him through a secret private channel. Also, there is a popular notion that deterrence will not work when you are dealing with non-state actors. But I'm not so sure that this is the case. Consider the Taliban. I think that if the leaders of the Taliban had known what type of response the attacks of Sept. 11 would produce from the United States, they would have tried to prevent the attacks. So I think that we should consider what we can do to alienate bin Laden from some of his supporters. You also need to consider what types of weapons they are likely to use and what types of targets they are likely to choose. And we need to determine their objectives. For instance, we still don't know what the objectives were of the attacks on the World Trade Center, because the effects were so widespread. It killed a lot of people. It produced the largest media coverage of a terrorist attack in history. It demonstrated U.S. vulnerability, while also destroying a symbol of Western capitalism. And it demonstrated the competence and some would say the bravery of the terrorists who were willing to sacrifice themselves. 
Each of those could have been the principal objective, or there could have been some combination of objectives. But we don't know for sure. When we think about weapons, many people seem to think that terrorists will use whatever weapon they can get their hands on. But consider the use of, say, smallpox from a cost-benefit analysis. They could release smallpox in New York, Chicago, and San Francisco. But smallpox is a very difficult disease to contain in a world of global travel, and the United States is the country best equipped to deal with an outbreak. Releasing smallpox in the United States, then, could result in many more deaths in poor countries with relatively bad health systems like Indonesia and Pakistan than in the United States. I'm not sure that would be a result the terrorists would welcome. By unleashing such widespread death in the developing world - especially in places where they enjoy support today - they could substantially reduce their approval and assistance from people who are now their allies. In contrast, anthrax might be a more attractive option because it is not contagious, and its effects could be limited to the United States. Also, there may be a cultural aspect to this. If releasing a noncontagious toxin in, say, a subway station is considered by large parts of Islamic culture to be a cowardly way to attack your enemy, then this could be costly to them. It could damage their support in the same way that releasing a contagious toxin could, even though the effects of the actual attack would be much more direct and localized. RF: What do you think have been the greatest diplomatic successes and failures of the past 50 years? Schelling: I think the great diplomatic success of the 20th century was the way the Marshall Plan morphed into NATO. Essentially, the cooperative arrangements between the Marshall Plan countries and the United States were absorbed when NATO was formed, and the good will was maintained. 
As a result, the United States was able to maintain excellent diplomatic relations with the other major Western powers for roughly 50 years. But recently that has started to unravel, and it's going to be very hard to get back the sort of camaraderie and mutual respect that we had built up. As for other challenges, I think relations with Russia are much more complicated than they ever were with the Soviet Union. The power structure within the Kremlin may have been more complex than we understood at the time, but that didn't really affect the way we conducted diplomacy. I don't know what to say about the Israeli-Palestinian issue. I think that one of the greatest tragedies for diplomacy in the last few decades was the assassination of Prime Minister Rabin. The prospects for peace would have been much better, I think, had that not occurred. There is one additional observation I would like to make. I think that technological changes have made diplomacy more centralized. Television, for instance, makes it possible for the U.S. Secretary of State to speak directly to billions of people around the world when she holds a press conference. Similarly, improved air travel makes it much easier for ambassadors to travel home to get instructions from the administration. RF: I would like to talk about your famous checkerboard example as it applies to racial segregation. You have written, "A moderate urge to avoid small-minority status may cause a nearly integrated pattern to unravel, and highly segregated neighborhoods to form." Could you describe how this process unfolds? Schelling: When I started thinking about this question, many American neighborhoods were either mostly white or mostly black. One possible explanation for this, of course, was rampant racism. But I was curious about how this might emerge in a world where racism was not particularly acute, where in fact people might prefer racial diversity. The process works basically like this. 
Let's say the racial composition of a neighborhood is 55 percent white and 45 percent black, and that the majority population in the surrounding areas is utterly without prejudice. Then you may get a case where more and more members of the majority group move in. This may be fine with the minority group for a while. They may not mind going from being 45 percent of the population to 35 percent. But at some point - say, when their part of the population is only 20 percent - then the most sensitive members of that group will probably evacuate, reducing their percentage even further. The result is a highly segregated neighborhood, even though this wasn't the intent of the majority population. I wanted to come up with an easily understandable mechanism to explain this phenomenon that I could use in teaching a class. I spent several summers at the RAND Corporation, which had a good library. I looked at several sociological journals, trying to find something I could use, but I wasn't able to find anything suitable. So I decided I would have to do something myself. One day, I was flying home from somewhere and had nothing to read. So I passed the time by putting little "X"s and "O"s in a line, with one group representing whites and the other representing blacks, and used the assumption that there was a moderate desire to avoid becoming part of a very small minority group. Well, it turned out that this exercise was very hard to do on paper, because you had to keep erasing and starting over. But my son had a coin collection at the time, and he had a bunch of copper coins and a bunch of zinc coins. I laid them out, and then I decided that putting them in a line wasn't good enough. You needed more dimensions. So I arranged them on a checkerboard. I got my 12-year-old son to sit down at the coffee table with me, and we would move things around. 
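The coins-on-a-checkerboard exercise Schelling describes is the ancestor of modern agent-based models of segregation, and the mechanism can be sketched in a few dozen lines of code. The sketch below is an illustrative reconstruction, not Schelling's original procedure: the 20-by-20 grid, the 10 percent vacancy rate, the 30 percent like-neighbor tolerance, and the rule that unhappy agents relocate to a random empty cell are all assumptions made for the example.

```python
import random

def make_grid(size=20, empty_frac=0.1, seed=42):
    """Fill a size x size grid with two agent types ('X', 'O') and empty cells (None)."""
    rng = random.Random(seed)
    n = size * size
    n_empty = int(n * empty_frac)
    n_agents = n - n_empty
    cells = (['X'] * (n_agents // 2) +
             ['O'] * (n_agents - n_agents // 2) +
             [None] * n_empty)
    rng.shuffle(cells)
    return [cells[i * size:(i + 1) * size] for i in range(size)], rng

def neighbors(grid, r, c):
    """Yield the occupants of the up-to-8 occupied cells adjacent to (r, c)."""
    size = len(grid)
    for dr in (-1, 0, 1):
        for dc in (-1, 0, 1):
            if dr == 0 and dc == 0:
                continue
            rr, cc = r + dr, c + dc
            if 0 <= rr < size and 0 <= cc < size and grid[rr][cc] is not None:
                yield grid[rr][cc]

def unhappy(grid, r, c, tolerance):
    """An agent is unhappy if the share of like-type neighbors falls below `tolerance`."""
    me = grid[r][c]
    nbrs = list(neighbors(grid, r, c))
    if not nbrs:
        return False
    return sum(1 for n in nbrs if n == me) / len(nbrs) < tolerance

def step(grid, rng, tolerance=0.3):
    """Move every currently unhappy agent to a random empty cell; return number moved."""
    size = len(grid)
    empties = [(r, c) for r in range(size) for c in range(size) if grid[r][c] is None]
    movers = [(r, c) for r in range(size) for c in range(size)
              if grid[r][c] is not None and unhappy(grid, r, c, tolerance)]
    moved = 0
    for r, c in movers:
        if not empties:
            break
        er, ec = empties.pop(rng.randrange(len(empties)))
        grid[er][ec] = grid[r][c]
        grid[r][c] = None
        empties.append((r, c))  # the vacated cell becomes available
        moved += 1
    return moved

def similarity(grid):
    """Average share of like-type neighbors across all agents (a segregation index)."""
    total, count = 0.0, 0
    for r in range(len(grid)):
        for c in range(len(grid)):
            if grid[r][c] is None:
                continue
            nbrs = list(neighbors(grid, r, c))
            if nbrs:
                total += sum(1 for n in nbrs if n == grid[r][c]) / len(nbrs)
                count += 1
    return total / count

grid, rng = make_grid()
before = similarity(grid)
for _ in range(30):
    if step(grid, rng) == 0:
        break
after = similarity(grid)
```

Run from a random start and the average share of like-type neighbors typically climbs well above the 30 percent that any individual agent demands - the unintended, highly segregated outcome Schelling describes.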
Soon, we got quite used to how it worked and how different the results were if one group was more discriminating than the other or if one group was more numerous than the other. I published my results, and it got quite a bit of attention at the time. But it wasn't until 25 years later that I realized that this game had pioneered some of the work in what is called "agent-based modeling" and which is used in a variety of disciplines in the social sciences. At the time I was working out this example I didn't realize that I was engaged in an area of research that would one day have a formal name. RF: You have done some research on crime. Why do you think some types of criminal activity become organized while others do not? Schelling: Part of this is semantic. Let's say you have a group of automobile thieves. They may be organized, but we don't call that "organized crime." Instead, when we use that term we are almost always referring to a small group of activities: gambling, prostitution, and drugs are the big ones. My question was: What is it that characterizes those things we call "organized crime"? The answer is that they all exist as monopolies. There is strong demand for each of the activities I mentioned before, but each of them is illegal. So the people who work in those markets are relatively easy to extort because they cannot turn to the police. As a result, it is possible to gain something approaching monopoly power in those markets. So the bookmakers, prostitutes, and drug dealers are not really the perpetrators of organized crime. They are the victims. When I looked at the issue more, though, I found that there were some markets that were legal but which were also characterized by high levels of extortion. Two were small laundry services and restaurants. What did they have in common? They were, at that time at least, mostly cash businesses and without well-documented accounting practices. 
So the proprietors of those businesses could pay the money under the table to the extortionist. And if they didn't, the extortionists would break their legs. Furthermore, if those businesses knew that their competitors in that market were being similarly extorted and thus realized that they were not being placed at a competitive disadvantage, they had less of an incentive to turn to the police. RF: How did you become involved with the Copenhagen Consensus and what type of policy proposals has the group offered? Schelling: I don't know precisely why I was chosen. Bjorn Lomborg, the organizer of the project, wanted to gather a group of economists of some reputation, and he probably knew that I had written about the greenhouse gas issue. So that was probably the connection. When the project started we had a United Nations list of global problems related mostly to development and poverty. We were asked to look over that list and pick 10 that we thought would be worth pursuing. We did that, and then we asked a very distinguished person in that field to write a major paper on the issue, along with two other people to write critiques of the paper. Somewhere along the way, we began to emphasize an idea that wasn't clear to me at the outset and that I think wasn't clear to many other people - namely, that this was mainly a budget priority exercise. We were supposed to do cost-benefit analysis. We were told that we had $50 billion to spend, and we should decide which projects would provide the most welfare benefit for the money. Unfortunately, that approach had not governed our choice of projects and had not governed the way the papers were written. For instance, no one really had a good idea of what you could do with some part of $50 billion to generate more liberal trade. The same was true with education. The papers argued that unless you can reform the educational systems in the big industrialized countries, more money won't help. 
Similarly, it wasn't clear to us how more money would help us prevent the spread of financial crises. So we had about five topics that really did not fit, and we treated many of them as not applicable. In retrospect, I think we should have treated climate change in the same way. Of those projects where we could see how the expenditure of money would help, restricting the spread of HIV and AIDS seemed like it should be at the top of the list. It is just so crucially important that we advocated spending about half of the money on it. Then there were some projects, like malnutrition and malaria control, where you just got so much for your money, that we put them near the top also. Projects to improve sanitation also were deemed quite worthwhile. In general, I think that the program was successful in some ways and less successful in others. And if we had it to do all over again, I think that we could do an awful lot better. RF: How did you come to the University of Maryland? Schelling: In the 1980s, Congress passed a law making it illegal for most businesses to have a mandatory retirement age for most employees. But they allowed colleges and universities a seven-year grace period. Harvard, at the time, had mandatory retirement at 70, and I was going to be 70 before the grace period expired. Well, I was in good health, felt that there was more research that I wanted to do, and still enjoyed teaching. So I let it be known that I could be attracted to another university. My first preference was a university in Southern California, where I grew up. But then a former colleague and a very good friend of mine who was dean of the University of Maryland's School of Public Affairs called, and I told him about my situation. He asked me not to accept another offer until I heard from him. It also turned out that the chairman of the economics department had been my teaching fellow at Harvard in the 1960s. 
So I had two very close connections at Maryland, and I also knew a few other people on the faculty, like Mancur Olson. Plus, as we have discussed, much of my work is very policy-oriented, which made the Washington area pretty desirable to me. Overall, it seemed like this would be a good fit for me, so when the president of the university made me a very generous offer, I accepted it. I have been at Maryland since 1990. I still teach a class or two, but I am now in an emeritus position. From checker at panix.com Thu May 12 19:17:32 2005 From: checker at panix.com (Premise Checker) Date: Thu, 12 May 2005 15:17:32 -0400 (EDT) Subject: [Paleopsych] JGIM: Motivations for Physician-assisted Suicide: Patient and Family Voices Message-ID: Motivations for Physician-assisted Suicide: Patient and Family Voices Journal of General Internal Medicine Volume 20 Issue 3 Page 234 - March 2005 doi:10.1111/j.1525-1497.2005.40225.x Robert A. Pearlman, MD, MPH 1,2,3,4,5 , Clarissa Hsu, PhD 6 , Helene Starks, MPH 4 , Anthony L. Back, MD 1,2,3 , Judith R. Gordon, PhD 7 , Ashok J. Bharucha, MD 8 , Barbara A. Koenig, PhD 9 and Margaret P. Battin, PhD 10 Objective: To obtain detailed narrative accounts of patients' motivations for pursuing physician-assisted suicide (PAS). Design: Longitudinal case studies. Participants: Sixty individuals discussed 35 cases. Participants were recruited through advocacy organizations that counsel individuals interested in PAS, as well as hospices and grief counselors. Setting: Participants' homes. Measurements And Results: We conducted a content analysis of 159 semistructured interviews with patients and their family members, and family members of deceased patients, to characterize the issues associated with pursuit of PAS. Most patients deliberated about PAS over considerable lengths of time with repeated assessments of the benefits and burdens of their current experience. 
Most patients were motivated to engage in PAS due to illness-related experiences (e.g., fatigue, functional losses), a loss of their sense of self, and fears about the future. None of the patients were acutely depressed when planning PAS. Conclusions: Patients in this study engaged in PAS after a deliberative and thoughtful process. These motivating issues point to the importance of a broad approach in responding to a patient's request for PAS. The factors that motivate PAS can serve as an outline of issues to explore with patients about the far-reaching effects of illness, including the quality of the dying experience. The factors also identify challenges for quality palliative care: assessing patients holistically, conducting repeated assessments of patients' concerns over time, and tailoring care accordingly. The motivation to pursue physician-assisted suicide (PAS) has been an important issue in the debates regarding the legality and appropriate response to requests for PAS. Understanding the motivation is critically important to physicians because many of them have been asked for assistance in PAS. 1-3 Previously described explanations include inadequate treatment for pain or other symptoms (i.e., inadequate palliative care), 1-7 psychiatric problems (e.g., depression, hopelessness), 7-13 and concerns about losses (e.g., function, control, sense of community, sense of self). 5,6,14-17 However, these explanations are principally based on three sources of evidence: physicians' impressions, patients' reports of hypothetical circumstances under which they would consider PAS, and survey data from Oregon. There is limited direct reporting from patients and family members about what drives patients to pursue PAS. To address the current gaps in our understanding, we conducted a longitudinal qualitative study with patients who seriously pursued PAS, and their family members. 
This study expands the medical and ethics literature on end-of-life care by providing a detailed descriptive account of the pursuit of PAS from the perspective of patients and their family members.

Participants

A detailed account of the recruitment and methods is described elsewhere. 18 Briefly, we recruited patients who were seriously pursuing PAS and their family members (ongoing cases), as well as family members of persons who had seriously pursued and/or died of PAS (historical cases). We recruited participants through advocacy organizations that counsel people interested in a hastened death (see Table 1 for definitions), hospices, and grief counselors. The referral sources sent information to their clients and/or verbally informed them about the study. All participants voluntarily contacted us. Patients were screened for decisional incapacity, which, if found, would have precipitated a series of actions to protect the patient. Specifically, we looked for evidence that would suggest that pursuit of a hastened death was motivated by a psychiatric disorder (e.g., severe depression, delusions). To protect respondents' confidentiality, we destroyed all records with personal identifiers and removed identifying information from transcripts. All study procedures were reviewed and approved by the university's Institutional Review Board.

Data Collection

We conducted qualitative, semistructured interviews with patients and family members. Five investigators conducted interviews; for each family, the same investigator interviewed all members. In total, we conducted 159 interviews with 60 participants concerning 35 patients between April 1997 and March 2001. Patients and family members for ongoing cases were interviewed at enrollment and at approximately 3-month intervals until the patient's death. Family members of deceased patients (ongoing and historical cases) were interviewed on average 2.4 times. 
The interview guide included open-ended questions about the history of the illness, the reasons for and other factors influencing the pursuit of a hastened death, and the manner of death. Additional details of the interview are presented elsewhere.18 To enhance the trustworthiness of the data, all interviews were audiotaped and transcribed. In addition, all members of the multidisciplinary research team read the transcripts in their entirety, discussed them at weekly meetings, and generated follow-up questions for the interviewer when appropriate.

Analyses

These results are based on multiple readings of the entire transcripts. Using content analysis methods,19 the team developed primary codes, such as "reasons for a hastened death" and "catalytic factors," to classify sections of the transcripts. The interviewer and another investigator independently coded all transcripts and met to resolve coding disagreements. Significant coding discrepancies were discussed and resolved at weekly team meetings. For each case, two investigators (R.A.P. and C.H.) independently reread the relevant sections of the transcripts and identified the apparent motivations for the patients' pursuit of a hastened death. Respondents frequently volunteered motivations in the context of their stories; when they did not, the interviewers specifically probed for explanations of their interest in hastening death. These two investigators developed detailed memos for each patient, rating the relative importance of each identified issue based on the context of the patient's overall story and the emphasis given to the issue in the narrative (see the Table 3 footnote for a description of the rating process). In the cases with both patient and family member interviews (n = 12) and the 5 historical cases with multiple family members, we found that respondents reported similar issues. Thus, our judgments of importance were informed by these multiple reports. The two internists (R.A.P. and A.L.B.)
independently reviewed the narratives of the 25 patients who hastened their death to estimate each patient's life expectancy at the time of death. They concurred in their assessments for 23 of the 25 cases and, after discussion, resolved the two disagreements. The psychiatrist (A.J.B.) reviewed the transcripts for evidence of major depression and formulated a psychiatric profile for each case.20 The interview guide did not include a depression questionnaire because of concerns that this would decrease participation. A more detailed description of this review is reported elsewhere.20

Participant Characteristics

We studied 35 cases of patients who pursued a hastened death (Table 2). These participants are described in detail elsewhere.18,20 In brief, all patients were white, over half were married or living with a partner, and nearly half were widowed or divorced. Approximately one third of the participants were Protestant, 17% reported no religious preference, and 16% reported being atheist. Two thirds of the patients drew strength from their spiritual beliefs; the remainder did not consider spirituality in their deliberations about hastening their death. Nineteen patients had received disease-modifying treatment or attempts at curative treatment earlier in the course of their illnesses. All patients seriously pursued a hastened death: 17 self-administered medications, 8 were too ill to do so and family members administered the medications, and 1 used a shotgun after he was unable to obtain medications. Eight patients died of their underlying illness, and 1 patient was alive at the study's conclusion.

Motivating Issues

Our analysis identified 7 common influential issues within 3 categories: illness-related experiences, changes in the person's sense of self, and fears about the future (Table 3). The case in Table 4 illustrates how these issues evolve and interact over time.

Illness-related Experiences.
Feeling weak, tired, and uncomfortable. In 24 cases, physical changes and symptoms were judged by the investigators as influential in the individual's pursuit of a hastened death. Many different symptoms became unacceptable (e.g., shortness of breath, fatigue, diarrhea). The effects of medications and treatments also were an issue. One respondent reported that his partner, who had severe thrush due to AIDS, lost 3 days each week to Amphotericin B, which was "horrible for him." Another patient described her response to steroids: The side effects of the treatment are unacceptable [...] the Prednisone destroys you. For example, it destroys your muscles. My thighs are so weak I can't get up from the floor, and I don't have the energy to exercise. The whole thing is a vicious circle. [...] My face [...] looks like a melon. [...] I look like a frog in heat. (Case 23) The participants' symptoms often shared several qualities: they caused suffering, were expected to get worse, interfered with the patient's functioning and quality of life, and contributed to undermining the person's identity and sense of self. As one woman with ovarian cancer stated, "[...] the terrible weakness and the nausea and just not feeling like you can do anything. [...] And it's kind of like goals that I actually have or things that I want to accomplish are slowly being taken away [...] it's kind of like the realm of the possible [...] is shrinking" (Case 2).

Pain and/or unacceptable side effects of pain medications. Pain, judged to be influential in 14 cases, functioned as a motivation in several ways: it could be unbearable, preoccupying, or consuming. One patient reported, The pain could happen immediately or it could happen an hour or two later. And then I have to see about seeing [my provider] again. It is a treadmill that I'm on; I can't get off of it, and I've had it. And I can't live like this anymore.
(Case 30) In addition, a few patients worried about the unacceptable, mind-altering effects of pain medications. A woman with cancer explained, Well, the pain that I had before with the rheumatoid arthritis I knew that I could handle -- [...] But this pain that I have, I'm not sure -- I can't get rid of it with the pain medicine always. [...] To give me enough to keep that pain under control, they'd have to put me out, and I don't want my son to have to take care of a bed patient. (Case 6)

Loss of function. For two thirds of our participants, loss of function, ranging from losing the ability to read the newspaper or socialize with friends to the inability to eat and go to the bathroom, motivated patients' interest in a hastened death. These losses were inextricably intertwined with these patients' physical changes. Patients and their families viewed functional losses as markers of the patient's transition from life to death. A number of patients viewed the onset of incontinence or the inability to get to the bathroom as a sentinel event in their decision-making process. A daughter described her mother's experience, stating, She was totally bedridden. She was messing her sheets and stuff like this, and Mother just -- I mean, she's just -- she was a very fastidious person. And she just -- she -- well, basically, she thought the quality of her life was appalling. She couldn't do anything. All she could do was lie in bed. (Case 26) Many patients accommodated over time to functional losses. Eventually, however, the losses became too great. As one family member explained, [My husband had] let go of so many things along the way and kind of made do. [He'd say], "Okay, well, now I can't walk anymore. Well, I sure like being on this couch." Then he lost something else [...]. But when he could no longer take in fluids, I think that really kind of pissed him off, because he had just been saying, "God, I'm so glad I have this Gatorade. This is the best.
This is keeping me alive." [...] I think he couldn't find any more pleasure. (Case 21) One woman's account exemplifies how these losses affected her mother's sense of self and attitudes about dying: "The things that were meaningful to [my mother] in her life were her art, her ability to do her art and her friends, and spending time with her friends and cooking and eating. And she was [...] very convinced that when she couldn't do any of those things anymore, her life would be meaningless, and she wouldn't want to live anymore" (Case 7).

Sense of Self.

Loss of sense of self. Almost two thirds of participants pursued a hastened death because they were concerned about how dying was eroding their sense of self. Patients expressed concern about losing their personality, "source of identity," or "essence." Without the ability to maintain aspects of their life that defined them as individuals, life lost its meaning and personal dignity was jeopardized. "I'm not comfortable, and I can't do anything, so as far as I'm concerned in quality of life I'm not living; I'm existing as a dependent non-person. I've lost, in effect, my essence" (Case 23). One family member explained that her mother realized that "she was going to lose significant ability to be the person she was" (Case 1). The partner of a patient with AIDS stated: He didn't want to kill himself; he didn't want to die. It was about finding any method to be vital, and the list was narrowed down to the most -- the simplest things, and when they were gone, he didn't have a reason [...]. So it wasn't just the diarrhea or the lack of driving; it was just losing, like, his definition -- what his sense of vitality was. And when that was gone, then he was ready. (Case 19) Several patients mentioned that they did not want to be remembered as ill and frail. One patient reported, "[...] not wanting to be seen by those that love me as this skin-and-bone frail, demented person.
In other words, I don't want that image of me for me, and I don't want that image to be kind of a last image that my daughters and loved ones have of me. And that's just a dignity issue" (Case 4). For some, being cared for and losing independence was an assault on their sense of self. For these individuals, sense of self was closely linked to their desire for control. One daughter described her mother's reaction to her favorite hospice nurse's care for a fecal impaction: The nurse was over there, basically, manually helping her along. [...] And she just said, "This is not worth it." [...] And a lot [had] to do with her as a person, where she just was so independent. The whole idea of nursing to her was just abhorrent. (Case 34)

Desire for control. In 21 cases, the patient's desire for a hastened death was linked to a long-standing sense of independence and a desire to maintain control over future events. One family member described her mother as "an extraordinarily independent person, absolutely needing to be in control of her life all the time and already felt -- how shall I put it -- she had problems with feeling not in control" (Case 7). Another woman with lung cancer described her attitude toward a hastened death: I will do things my way and the hell with everything and everybody else. Nobody is going to talk me in or out of a darn thing [...] what will be, will be; but what will be, will be done my way. I will always be in control. (Case 3)

Fears About the Future.

Fears about future quality of life and dying. While many motivational issues were based on current experiences, another common motivation was fear about the future. We judged this as influential in 21 cases. Such concerns were often affected by past experiences. For example, one patient's fears about pain and pain management were rooted in her past experience with pain from a lifetime of severe arthritis.
She told us, "I don't want to get to the place where I'm rum-dummy with morphine, because I almost reached that spot [...] and I couldn't even make out a check" (Case 6). Fears were usually associated with other motivating issues, such as loss of control, physical and functional decline, becoming a burden on family (noted to be influential in 3 cases), and loss of one's sense of self. However, what separated the fears from the other issues was their anticipatory nature. As one family member stated, "He said that he doesn't want to just turn into this vegetable kind of person where you're not aware of what's going on, and that everybody around you is affected; everybody's having to take care of you, feed you, clean you, give your medication" (Case 4). Often these fears pertained to lingering or prolonging death through the use of medical technologies. One family reported, "Living there and existing for three, four, five, six months. Living with tubes coming out of every orifice [...] that's what frightened her" (Case 14).

Negative past experiences with dying. In half the cases, negative personal experiences with the death of a loved one added to patients' interest in a hastened death. One patient reported the following reaction to his mother's dying experience: [T]here's no question about wanting to make provisions for a hastened death should conditions become so unbearable. I want to spare my family as much of that grief as I can [...]. [My mother] died of cancer, and we were constantly frustrated by not being able to do anything for her. [...] And just watched her waste away. And what a terrible way to go. (Case 24)

Discussion

Our data suggest that the pursuit of a hastened death was motivated by multiple, interactive factors in the context of progressive, serious illness. These patients considered a hastened death over prolonged periods of time and repeatedly assessed the benefits and burdens of living versus dying.
None of the participants cited responding to bad news, such as a diagnosis of cancer, or a depressed mood as motivations for interest in a hastened death. Lack of access to health care and lack of palliative care also were not mentioned as issues of concern. These findings are comparable to those reported in Oregon.6,16 This report emphasizes the importance of 3 general sets of issues: the effects of illness (e.g., physical changes, symptoms, functional losses), the patient's sense of self (e.g., loss of sense of self, desire for control), and fears about the future. The cases also illustrate that pain is often not the most salient motivating factor. Thus, this report corroborates and expands known findings.2,5,6,15,16,21-23 This research adds to the literature by providing rich descriptions from patients and family members about interactions between these issues and the meaning that patients ascribe to current and/or anticipated illness experiences. Many participants identified the effects of the illness on two very personal attributes that often give life meaning: a person's desire for control and sense of self. When the effects of the illness and/or treatment attack these deeply personal values, a hastened death is viewed as a means to stop this process and minimize the damage. These feelings have been reported among patients with AIDS.17,24 The influence of some of the issues in this study differed from previous reports. For example, while the effect of pain on patients' decisions to hasten death has been widely discussed,4,25 our participants mentioned pain much less frequently than they mentioned the loss of meaningful activities and physical functioning. Similarly, burden on family was influential in only 3 cases, although this may reflect that family members were the reporters for two thirds of our cases.2,15,21

Depression and hopelessness have been suggested as causal factors in the pursuit of a hastened death7-13 because they often precede suicide attempts among patients who are not terminally ill,26 and studies of depressed patients with HIV and cancer have documented interest in PAS.9,11,27-30 Depression and hopelessness were not significant issues for our sample, although fears about future quality of life and dying may reflect hopelessness when it is understood as negative expectancies about the future and one's ability to change it. In the 3 patients with possible depression, interest in a hastened death preceded any alteration in mood, and thus, in our judgment, their possible depression did not impinge on decisional capacity. Importantly, other forms of psychological suffering motivated the patients in this study toward a hastened death. They experienced severe losses (e.g., of bodily integrity, functioning, control) as existential suffering that undermined their personal sense of who they were.31 This loss of sense of self (often described in terms of a loss of vitality, essence, or personal definition) highlights the threat that dying poses to the social construction of life's meaning.32 This may be especially salient among individuals living in a secular culture. Two minor differences between the ongoing and historical cases are noteworthy. Patients seeking a hastened death more frequently expressed their fears and described their ongoing deliberations about decisions. Family members presented a more complete story of the patients' illness. These differences are not surprising given the participants' different vantage points. Overall, however, similar issues were reported, lending support to the validity of the motivating issues we identified. The results should be viewed in the context of the study's limitations.
Our participants were a highly self-selected group: they were recruited from advocacy organizations that counsel patients interested in PAS, and they agreed to participate. Thus, these patients may not be representative of others who pursue a hastened death. In addition, depression may be underrepresented because 1) depressed patients may volunteer less often for research, 2) our indirect, informal assessment may have been insufficient, and 3) the advocacy organizations may have treated depression as grounds for declining to provide support. Several important implications for clinicians emerge from these cases. First, the dynamic and interactive nature of the motivations challenges health care providers to understand the patient's illness and dying experience holistically. These data confirm the recommendation, espoused in high-quality palliative care, that providers repeatedly assess the patient's concerns about losses and dying in order to understand and tailor end-of-life care to the patient's changing personal experience.4 Second, the motivating issues can serve as an outline of topics for talking with patients about the far-reaching effects of illness, including the quality of the dying experience. Clinicians should explore a patient's fears and how the patient sees herself in light of current and future physical decline and functional losses.33 A patient's request for assistance with a hastened death should prompt a thorough evaluation of the patient's motives and attempts to ameliorate the patient's suffering.

We especially wish to thank the participants. The Greenwall Foundation and the Walter and Elise Haas Fund provided funding for this research. The Veterans Health Administration (VHA) and the Health Services Research and Development Service of the Department of Veterans Affairs provided additional support. Kathleen Foley, Ezekiel Emanuel, and Susan Block gave valuable guidance and/or feedback on earlier drafts. Drs.
Pearlman, Back, and Koenig were Faculty Scholars in the Project on Death in America (PDIA) of the Open Society Institute. The views expressed in this article are those of the authors and do not necessarily represent the views of the funding sources, the Department of Veterans Affairs, the Project on Death in America, the University of Washington, the University of Pittsburgh, or the persons mentioned above.

References

1. Doukas DJ, Waterhouse D, Gorenflo DW, Seid J. Attitudes and behaviors on physician-assisted death: a study of Michigan oncologists. J Clin Oncol. 1995;13:1055-61.
2. Meier DE, Emmons CA, Wallenstein S, Quill T, Morrison RS, Cassel CK. A national survey of physician-assisted suicide and euthanasia in the United States. N Engl J Med. 1998;338:1193-201.
3. Emanuel EJ, Fairclough D, Clarridge BC, et al. Attitudes and practices of U.S. oncologists regarding euthanasia and physician-assisted suicide. Ann Intern Med. 2000;133:527-32.
4. Foley KM. Competent care for the dying instead of physician-assisted suicide. N Engl J Med. 1997;336:54-8.
5. Ganzini L, Nelson HD, Schmidt TA, Kraemer DF, Delorit MA, Lee MA. Physicians' experiences with the Oregon Death with Dignity Act. N Engl J Med. 2000;342:557-63.
6. Sullivan AD, Hedberg K, Fleming DW. Legalized physician-assisted suicide in Oregon -- the second year. N Engl J Med. 2000;342:598-604.
7. Quill TE, Meier DE, Block SD, Billings JA. The debate over physician-assisted suicide: empirical data and convergent views. Ann Intern Med. 1998;128:552-8.
8. Block SD, Billings JA. Patient requests to hasten death. Evaluation and management in terminal care. Arch Intern Med. 1994;154:2039-47.
9. Breitbart W, Rosenfeld B, Pessin H, et al. Depression, hopelessness, and desire for hastened death in terminally ill patients with cancer. JAMA. 2000;284:2907-11.
10. Chochinov HM, Wilson KG, Enns M, Lander S. Depression, hopelessness, and suicidal ideation in the terminally ill. Psychosomatics. 1998;39:366-70.
11. Emanuel EJ, Fairclough DL, Daniels ER, Clarridge BR. Euthanasia and physician-assisted suicide: attitudes and experiences of oncology patients, oncologists, and the public. Lancet. 1996;347:1805-10.
12. Emanuel EJ, Fairclough DL, Emanuel LL. Attitudes and desires related to euthanasia and physician-assisted suicide among terminally ill patients and their caregivers. JAMA. 2000;284:2460-8.
13. Ganzini L, Johnston WS, McFarland BH, Tolle SW, Lee MA. Attitudes of patients with amyotrophic lateral sclerosis and their care givers toward assisted suicide. N Engl J Med. 1998;339:967-73.
14. Bachman JG, Doukas DJ, Lichtenstein RL, Alcser KH. Assisted suicide and euthanasia in Michigan. N Engl J Med. 1994;331:812-3.
15. Back AL, Wallace JI, Starks HE, Pearlman RA. Physician-assisted suicide and euthanasia in Washington State. Patient requests and physician responses. JAMA. 1996;275:919-25.
16. Chin AE, Hedberg K, Higginson GK, Fleming DW. Legalized physician-assisted suicide in Oregon -- the first year's experience. N Engl J Med. 1999;340:577-83.
17. Lavery JV, Boyle J, Dickens BM, Maclean H, Singer PA. Origins of the desire for euthanasia and assisted suicide in people with HIV-1 or AIDS: a qualitative study. Lancet. 2001;358:362-7.
18. Back AL, Starks H, Hsu C, Gordon JR, Bharucha A, Pearlman RA. Clinician-patient interactions about requests for physician-assisted suicide: a patient and family view. Arch Intern Med. 2002;162:1257-65.
19. Morse J, Field PA. Qualitative Research Methods for Health Professionals. Thousand Oaks, CA: Sage Publications; 1995.
20. Bharucha A, Pearlman RA, Back AL, Gordon JR, Starks H, Hsu C. The pursuit of physician-assisted suicide: role of psychiatric factors. J Palliat Med. 2003;6:873-83.
21. van der Maas PJ, van Delden JJ, Pijnenborg L, Looman CW. Euthanasia and other medical decisions concerning the end of life. Lancet. 1991;338:669-74.
22. Ganzini L, Harvath TA, Jackson A, Goy ER, Miller LL, Delorit MA. Experiences of Oregon nurses and social workers with hospice patients who requested assistance with suicide. N Engl J Med. 2002;347:582-8.
23. Ganzini L, Dobscha SK, Heintz RT, Press N. Oregon physicians' perceptions of patients who request assisted suicide and their families. J Palliat Med. 2003;6:381-90.
24. Kohlwes RJ, Koepsell TD, Rhodes LA, Pearlman RA. Physicians' responses to patients' requests for physician-assisted suicide. Arch Intern Med. 2001;161:657-63.
25. New York State Task Force on Life and the Law. When Death Is Sought: Assisted Suicide and Euthanasia in the Medical Context. Albany, NY; 1994.
26. Beck AT, Steer RA, Kovacs M, Garrison B. Hopelessness and eventual suicide: a 10-year prospective study of patients hospitalized with suicidal ideation. Am J Psychiatry. 1985;142:559-63.
27. Chochinov HM, Wilson KG. The euthanasia debate: attitudes, practices and psychiatric considerations. Can J Psychiatry. 1995;40:593-602.
28. Breitbart W, Rosenfeld BD, Passik SD. Interest in physician-assisted suicide among ambulatory HIV-infected patients. Am J Psychiatry. 1996;153:238-42.
29. Humphry D. Final Exit: The Practicalities of Self-deliverance and Assisted Suicide for the Dying. 2nd ed. New York, NY: Dell; 1997.
30. Rosenfeld B, Breitbart W. Physician-assisted suicide and euthanasia. N Engl J Med. 2000;343:151; discussion 151-3.
31. Cassell EJ. The Nature of Suffering and the Goals of Medicine. 2nd ed. New York, NY: Oxford University Press; 2004.
32. Kaufman SR. The Ageless Self: Sources of Meaning in Late Life. Madison: University of Wisconsin Press; 1986.
33. Bascom PB, Tolle SW. Responding to requests for physician-assisted suicide: "These are uncharted waters for both of us. [...]" JAMA. 2002;288:91-8.
34. Pearlman RA, Starks H. Why do people seek physician-assisted death? In: Quill T, Battin MP, eds. Physician-Assisted Dying: The Case for Palliative Care and Patient Choice. Baltimore, MD: Johns Hopkins University Press; 2004:92-3.

Footnotes

None of the authors has any financial or other conflicts of interest with respect to this work. This manuscript was presented in part at the 24th annual meeting of the Society of General Internal Medicine, May 2001.
Journal of General Internal Medicine, Volume 20, Issue 3, Page 234, March 2005.

Authors: Robert A. Pearlman, Clarissa Hsu, Helene Starks, Anthony L. Back, Judith R. Gordon, Ashok J. Bharucha, Barbara A. Koenig, Margaret P. Battin

Key words: physician-assisted suicide; euthanasia; decision making; end-of-life issues; qualitative research

Accepted for publication August 6, 2004.

Affiliations: 1 VA Puget Sound Health Care System, Seattle Division, Seattle, WA, USA; 2 Departments of Medicine, 3 Medical History and Ethics, and 4 Health Services, University of Washington, Seattle, WA, USA; 5 National Center for Ethics in Health Care (VHA), Washington, DC, USA; 6 Departments of Anthropology and 7 Psychology, University of Washington, Seattle, WA, USA; 8 Department of Psychiatry, University of Pittsburgh, Pittsburgh, PA, USA; 9 Center for Biomedical Ethics, Stanford University, Stanford, CA, USA; 10 Department of Philosophy, University of Utah, Salt Lake City, UT, USA.

Correspondence: Address correspondence and requests for reprints to Dr. Pearlman: VA Puget Sound Health Care System, 1660 S. Columbian Way (S-182-GRECC), Seattle, WA 98108 (e-mail: Robert.Pearlman at med.va.gov).

Tables: Table 1. Definitions. Table 2. Patient Characteristics. Table 3. Motivating Issues for Pursuing a Hastened Death. Table 4. Illustrative Case Demonstrating Dynamic and Interactive Motivations for Pursuing a Hastened ...
From checker at panix.com Thu May 12 19:17:44 2005 From: checker at panix.com (Premise Checker) Date: Thu, 12 May 2005 15:17:44 -0400 (EDT) Subject: [Paleopsych] NYT: How Do Japanese Dump Trash? Let Us Count the Myriad Ways Message-ID:

How Do Japanese Dump Trash? Let Us Count the Myriad Ways http://www.nytimes.com/2005/05/12/international/asia/12garbage.html By NORIMITSU ONISHI YOKOHAMA, Japan - When this city recently doubled the number of garbage categories to 10, it handed residents a 27-page booklet on how to sort their trash. Highlights included detailed instructions on 518 items. Lipstick goes into burnables; lipstick tubes, "after the contents have been used up," into "small metals" or plastics. Take out your tape measure before tossing a kettle: under 12 inches, it goes into small metals, but over that it goes into bulky refuse. Socks? If only one, it is burnable; a pair goes into used cloth, though only if the socks "are not torn, and the left and right sock match." Throw neckties into used cloth, but only after they have been "washed and dried." "It was so hard at first," said Sumie Uchiki, 65, whose ward began wrestling with the 10 categories last October as part of an early trial. "We were just not used to it. I even needed to wear my reading glasses to sort out things correctly." To Americans struggling with sorting trash into a few categories, Japan may provide a foretaste of daily life to come. In a national drive to reduce waste and increase recycling, neighborhoods, office buildings, towns and megalopolises are raising the number of trash categories - sometimes to dizzying heights. Indeed, Yokohama, with 3.5 million people, appears slack compared with Kamikatsu, a town of 2,200 in the mountains of Shikoku, the smallest of Japan's four main islands. Not content with the 34 trash categories it defined four years ago as part of a major push to reduce waste, Kamikatsu has gradually raised the number to 44.
In Japan, the long-term push to sort and recycle aims to reduce the amount of garbage that ends up in incinerators. In land-scarce Japan, up to 80 percent of garbage is incinerated, while a similar percentage ends up in landfills in the United States. The environmentally friendlier process of sorting and recycling may be more expensive than dumping, experts say, but it is comparable in cost to incineration. "Sorting trash is not necessarily more expensive than incineration," said Hideki Kidohshi, a garbage researcher at the Center for the Strategy of Emergence at the Japan Research Institute. "In Japan, sorting and recycling will make further progress." For Yokohama, the goal is to reduce incinerated garbage by 30 percent over the next five years. But Kamikatsu's goal is even more ambitious: eliminating garbage by 2020. In the last four years, Kamikatsu has halved the amount of incinerator-bound garbage and raised its share of recycled waste to 80 percent, town officials said. Each household now has a subsidized garbage disposal unit that recycles raw garbage into compost. At the single Garbage Station, where residents must take their trash, 44 bins collect everything from tofu containers to egg cartons, plastic bottle caps to disposable chopsticks, fluorescent tubes to futons. On a recent morning, Masaharu Tokimoto, 76, drove his pick-up truck to the station and expertly put brown bottles in their proper bin, clear bottles in theirs. He looked at the labels on cans to determine whether they were aluminum or steel. Flummoxed by one item, he stood paralyzed for a minute before mumbling to himself, "This must be inside." Some 15 minutes later, Mr. Tokimoto was done. The town had gotten much cleaner with the new garbage policy, he said, though he added: "It's a bother, but I can't throw away the trash in the mountains. It would be a violation." In towns and villages where everybody knows one another, not sorting may be unthinkable.
In cities, though, not everybody complies, and perhaps more than any other act, sorting out the trash properly is regarded as proof that one is a grown-up, responsible citizen. The young, especially bachelors, are notorious for not sorting. And landlords reluctant to rent to non-Japanese will often explain that foreigners just cannot - or will not - sort their trash. In Yokohama, after a few neighborhoods started sorting last year, some residents stopped throwing away their trash at home. Garbage bins at parks and convenience stores began filling up mysteriously with unsorted trash. "So we stopped putting garbage bins in the parks," said Masaki Fujihira, who oversees the promotion of trash sorting at Yokohama City's family garbage division. Enter the garbage guardians, the army of hawk-eyed volunteers across Japan who comb offending bags for, say, a telltale gas bill, then nudge the owner onto the right path. One of the most tenacious around here is Mitsuharu Taniyama, 60, the owner of a small insurance business who drives around his ward every morning and evening, looking for missorted trash. He leaves notices at collection sites: "Mr. So-and-so, your practice of sorting out garbage is wrong. Please correct it." "I checked inside bags and took especially lousy ones back to the owners' front doors," Mr. Taniyama said. He stopped in front of one messy location where five bags were scattered about, and crows had picked out orange peels from one. "This is a typical example of bad garbage," Mr. Taniyama said, with disgust. "The problem at this location is that there is no community leader. If there is no strong leader, there is chaos." He touched base with his lieutenants in the field. On the corner of a street with large houses, where the new policy went into effect last October, Yumiko Miyano, 56, was waiting with some neighbors. Ms.
Miyano said she now had 90 percent compliance, adding that, to her surprise, those resisting tended to be "intellectuals," like a certain university professor or an official at Japan Airlines up the block. "But the husband is the problem - the wife sorts her trash properly," one neighbor said of the airlines family. Getting used to the new system was not without its embarrassing moments. Shizuka Gu, 53, said that early on, a community leader sent her a letter reprimanding her for not writing her identification number on the bag with a "thick felt-tip pen." She was chided for using a pen that was "too thin." "It was a big shock to be told that I had done something wrong," Ms. Gu said. "So I couldn't bring myself to take out the trash here and asked my husband to take it to his office. We did that for one month." At a 100-family apartment complex not too far away, Sumishi Kawai was keeping his eyes trained on the trash site before pickup. Missorting was easy to spot, given the required use of clear garbage bags with identification numbers. Compliance was perfect - almost. One young couple consistently failed to properly sort their trash. "Sorry! We'll be careful!" they would say each time Mr. Kawai knocked on their door holding evidence of their transgressions. At last, even Mr. Kawai - a small 77-year-old man with wispy white hair, an easy smile and a demeanor that can only be described as grandfatherly - could take no more. "They were renting the apartment, so I asked the owner, 'Well, would it be possible to have them move?' " Mr. Kawai said, recalling, with undisguised satisfaction, that the couple was evicted two months ago.
From checker at panix.com Thu May 12 19:17:59 2005 From: checker at panix.com (Premise Checker) Date: Thu, 12 May 2005 15:17:59 -0400 (EDT) Subject: [Paleopsych] Reason: Ronald Bailey: Trans-Human Expressway Message-ID: Ronald Bailey: Trans-Human Expressway Why libertarians will win the future http://www.reason.com/rb/rb051105.shtml May 11, 2004 [I'm sure Reason is a year behind the rest of the world.] Here's a prediction. Politics in the 21st century will cut across the traditional political left/right rift of the last two centuries. Instead, the chief ideological divide will be between transhumanists and bioconservatives/bioluddites. James Hughes, the executive director of the [25]World Transhumanist Association, explores this future political order in his remarkably interesting yet wrongheaded [26]Citizen Cyborg: Why Democratic Societies Must Respond to the Redesigned Human of the Future. Hughes, who lectures on health policy at Trinity College in Connecticut, defines transhumanism as "the idea that humans can use reason to transcend the limitation of the human condition." Specifically, transhumanists welcome the development of intimate technologies that will enable people to boost life spans, enhance intellectual capacities, augment athletic abilities, and choose their preferred emotional states. Hughes does an excellent job of describing the transformative possibilities of biotech, nanotech, information systems and cognitive research. Citizen Cyborg is not just about the wonders of technology, but also about how Hughes thinks transhumanists can best persuade their fellow citizens to welcome the changes. Hughes begins by offering a good history of the beginnings of transhumanist thinking and demonstrates that it is the legitimate heir of humanism. Humanism is the philosophy that humanity is the proper measure of all things; its practical manifestations include scientific inquiry and liberal politics.
Transhumanism argues for the freedom of people to use technology to go beyond their naturally given capacities. In the late 20th century, transhumanism was chiefly celebrated and promoted by a group of libertarian techno-optimists. Among the chief leaders of this fledgling movement were philosopher Max More and Natasha Vita-More, who founded the [27]Extropy Institute in 1992. Hughes makes it clear that he is uncomfortable with Extropian libertarianism, and his project in Citizen Cyborg is to articulate a big tent transhumanism that can attract social democrats, tech-friendly political moderates, Greens and so forth. His preferred scenario is somehow to combine social democracy with the transhumanist goal of enabling people to use technology to transform their bodies, brains and progeny in ways they deem beneficial. As a self-described man of the Left, Hughes does recognize and effectively rebut the "left-wing bioluddite error" of "fighting individuals' free use of technology instead of power relations and prejudices." Where Hughes goes wrong is in fetishizing democratic decision-making. He fails to recognize that the Enlightenment project that spawned modern liberal democracies began by trying to keep certain questions about the transcendent out of the public sphere. Questions about the ultimate meaning and destiny of humanity are private concerns. Worries about biotechnological progress must not be used as excuses to breach the Enlightenment understanding of what belongs in the private sphere and what belongs in the public. Technologies dealing with birth, death and the meaning of life need protection from meddling--even democratic meddling--by others who want to control them as a way to force their visions of right and wrong on the rest of us. Your fellow citizens shouldn't get to vote on whom you have sex with, what recreational drugs you ingest, what you read and watch on TV and so forth.
Hughes understands that democratic authoritarianism is possible, but discounts the possibility that the majority may well vote to ban the technologies that promise a better world. However, even as he extols social democracy as the best guarantor of our future biotechnological liberty, Hughes ignores that it is precisely those social democracies he praises, Germany, France, Sweden, and Britain, which now, not in the future, [28]outlaw germinal choice, genetic modification, reproductive and therapeutic cloning, and stem cell research. For example, Germany, Austria and Norway ban the creation of human embryonic stem cell lines. Britain outlaws various types of pre-implantation genetic diagnosis to enable parents to choose among embryos. (Despite worrisome [29]bioconservative agitation against this type of biotech research, in the United States, private research in these areas remains legal.) Hughes also favors not only social democracy but global governance centered on the United Nations with the "authority to tax corporations and nations," and a "permanent standing international army," and with UN agencies "expanded into a global infrastructure of technological and industrial regulation capable of controlling the health and environmental risks from new technologies." This is the same UN that just voted for an [30]ambiguous resolution calling on nations to ban all forms of human cloning which are incompatible with human dignity and the protection of human life. Fortunately, the resolution leaves some wiggle room, but the next time the UN makes one of these democratic decisions, transhumanists may not like the result. Furthermore, Hughes's analysis is largely free of economics--he simply ignores the processes by which wealth is created and gets busy redistributing the wealth through government health care and government subsidized eugenics. After reading Citizen Cyborg, you might come away thinking that Hughes believes that corporations exist primarily to oppress people.
While acknowledging that the last US government involvement in [31]eugenics--a project that involved sterilizing tens of thousands of people--was a bad idea, Hughes fails to underscore that it was democratically elected representatives, not corporations, who ordered women's tubes tied and men's testicles snipped. Although it clearly pains him, Hughes grudgingly recognizes that libertarian transhumanists still belong in his big tent. And why not? You will not find a more militantly open, tolerant bunch on the planet. Adam and Steve want to get married? We'll be the groomsmen. Joan wants to contract with Jill for surrogacy services? We'll throw a baby shower. Bill and Jane want to use ecstasy for great sex? We'll leave them alone quietly. John wants to grow a new liver through therapeutic cloning? We'll bring over the scotch to help him break in the new one. In a sense, Hughes himself has not transcended the left/right politics of the past two centuries; he hankers to graft old-fashioned left-wing social democratic ideology onto transhumanism. That isn't necessary. The creative technologies that Hughes does an excellent job of describing will so scramble conventional political and economic thinking that his ideas about government health care and government guaranteed incomes will appear quaint. The good news is that if his social democratic transhumanism flounders, Hughes will reluctantly choose biotech progress. "Even if the rich do get more enhancements in the short term, it's probably still good for the rest of us in the long term," writes Hughes. "If the wealthy stay on the bleeding edge of life extension treatments, nano-implants and cryo-suspension, the result will be cheaper, higher-quality technology." In the end, Citizen Cyborg is invaluable in sharpening the political issues that humanity will confront as the biotech revolution rolls on. ------------------------------------- Ronald Bailey is Reason's science correspondent.
His new book, Liberation Biology: A Moral and Scientific Defense of the Biotech Revolution, will be published in early 2005. References 24. mailto:rbailey at reason.com 25. http://www.transhumanism.org/index.php/WTA/index/ 26. http://www.amazon.com/exec/obidos/tg/detail/-/0813341981/reasonmagazineA 27. http://www.extropy.org/ 28. http://www.glphr.org/genetic/europe2-7.htm 29. http://brownback.senate.gov/LIStemCell.cfm 30. http://www.cbsnews.com/stories/2004/10/21/tech/main650621.shtml 31. http://www.commondreams.org/headlines/021500-02.htm From checker at panix.com Thu May 12 19:20:59 2005 From: checker at panix.com (Premise Checker) Date: Thu, 12 May 2005 15:20:59 -0400 (EDT) Subject: [Paleopsych] AFP: Single malt whisky 'can protect you from cancer', conference told Message-ID: Single malt whisky 'can protect you from cancer', conference told http://news.yahoo.com/news?tmpl=story&u=/afp/20050508/hl_afp/healthbritaincancer 5.5.8 LONDON (AFP) - Single malt whisky can beat the threat of cancer, thanks to high levels of a powerful antioxidant that kills cancer cells, a medical conference in Scotland was told. Jim Swan, an independent consultant to the global drinks industry, said that, according to research, single malt whisky contains "more ellagic acid than red wine". Swan, a doctor, told the EuroMedLab 2005 conference that ellagic acid is an effective "free radical scavenger" that "absorbs" or "eats up" rogue cells that occur in the body during eating. "The free radicals can break down the DNA structure of our existing cells, which then leads to the risk of the body making replacement rogue cancer cells," he said.
"So, whether you indulge in the odd tipple, or you are a serious connoisseur, whisky can protect you from cancer -- and science proves it." Lesley Walker of Cancer Research UK was dubious. "There is considerable data documenting the link between drinking excess alcohol and the increased risk of a number of cancers, particularly in smokers," she said. "Ellagic acid is a powerful antioxidant, but that does not mean it is necessary to hit the bottle," she said, noting that the ellagic acid can also be found in soft fruits. The EuroMedLab 2005 conference in Glasgow, hosted by the Association of Clinical Biochemists, runs until Thursday, with more than 3,000 researchers, doctors and science and technology companies expected to attend. From checker at panix.com Thu May 12 19:21:14 2005 From: checker at panix.com (Premise Checker) Date: Thu, 12 May 2005 15:21:14 -0400 (EDT) Subject: [Paleopsych] Space.com: Creation of Black Hole Detected Message-ID: Creation of Black Hole Detected http://www.space.com/scienceastronomy/050509_blackhole_birth.html 5.5.9 By [13]Robert Roy Britt Senior Science Writer Astronomers photographed a cosmic event this morning which they believe is the birth of a black hole, SPACE.com has learned. A faint visible-light flash moments after a high-energy gamma-ray burst likely heralds the [14]merger of two dense neutron stars to create a relatively low-mass black hole, said Neil Gehrels of NASA's Goddard Space Flight Center. It is the first time an optical counterpart to a very short-duration gamma-ray burst has ever been detected. Gamma rays are the most energetic form of radiation on the electromagnetic spectrum, which also includes X-rays, light and radio waves. The merger occurred 2.2 billion light-years away, so it actually took place 2.2 billion years ago and the light just reached Earth this morning. Quick global effort Gehrels said the burst occurred just after midnight East Coast time. It was detected by NASA's orbiting Swift telescope. 
Swift automatically repositioned itself within 50 seconds to image the same patch of sky in X-rays. It just barely caught an X-ray afterglow, Gehrels said in a telephone interview. The X-ray counterpart was barely detectable and only observed for a few minutes. An email was sent out to astronomers worldwide, and large observatories then tracked to the location and spotted a faint visible-light afterglow. Gamma ray bursts are [15]mysterious beasts. They come from all over the universe. Long-duration bursts, lasting a few seconds, are thought to be associated with the formation of black holes when massive stars explode and collapse. In recent years, scientists have detected X-ray and optical afterglows of these long bursts. Very short-duration bursts, like the one this morning, last only a tiny fraction of a second. Until now, no optical afterglows from these bursts have been detected. Theorists think a burst like this represents the formation of a black hole a few times the mass of the Sun, but if so, then there should be flashes of X-rays and visible light, too. The burst has been named GRB050509b. What happened Steinn Sigurdsson, a Penn State University researcher who is excited about the observations but was not involved in them, explained what theorists think happened: Over a long time period, at least a hundred million years and perhaps billions of years, the two neutron stars spiraled toward each other. Neutron stars themselves are [16]very dense objects, collapsed stellar remnants. "A fraction of a second before contact, the lower mass neutron star is disrupted and forms a neutrino driven accretion disk around the higher mass neutron star," Sigurdsson told SPACE.com. "It implodes under the weight and forms a maximally spinning low-mass black hole." Astronomers can't see black holes, because light and everything else that enters them is lost to observation.
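For a sense of scale, "a black hole a few times the mass of the Sun" can be made concrete with the Schwarzschild radius, r_s = 2GM/c^2, the radius inside which not even light escapes. A quick back-of-the-envelope sketch in Python (the formula is standard physics, not from the article; constants are rounded CODATA values):

```python
# Schwarzschild radius for a non-rotating black hole: r_s = 2GM/c^2.
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8        # speed of light, m/s
M_SUN = 1.989e30   # solar mass, kg

def schwarzschild_radius_km(solar_masses):
    """Return the Schwarzschild radius in kilometers."""
    return 2 * G * (solar_masses * M_SUN) / C**2 / 1000.0

for m in (3, 5, 10):
    print(f"{m} solar masses -> r_s = {schwarzschild_radius_km(m):.1f} km")
# 3 solar masses -> about 8.9 km; 10 -> about 29.5 km
```

A "low-mass" black hole of a few solar masses is thus only city-sized, which is why the merger itself is invisible and only the jet and afterglow can be observed.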
But just before material falls in, some high-energy process -- likely involving magnetism and speeds approaching that of light -- vents some of the material back into space. The gamma ray burst signals the formation of a superheated jet of gas being shot out from the chaotic region around the newly formed black hole at a significant fraction of light-speed, Sigurdsson said. "This really does look like a merger scenario," said Gehrels, who heads up the scientific operations for the Swift satellite. The first gamma-ray burst was detected by accident in 1967. It was found by U.S. satellites deployed to monitor possible violations of the nuclear test ban treaty. Researchers now know that one erupts roughly every day somewhere in the cosmos. Most originate many billions of light years away. Each burst can briefly outshine an entire galaxy. Gamma-ray bursts in our own galaxy are [17]very rare. Some scientists speculate that such bursts in the Milky Way's past might have [18]caused mass extinctions on Earth. References 14. http://www.space.com/scienceastronomy/neutron_stars_031203.html 15. http://www.space.com/scienceastronomy/burst_blackholes_030305.html 16. http://www.space.com/scienceastronomy/neutron_stars_031203.html 17. http://www.space.com/scienceastronomy/bright_flash_050218.html 18. http://www.space.com/scienceastronomy/astronomy/gammaray_bursts_010522-1.html From shovland at mindspring.com Fri May 13 01:13:35 2005 From: shovland at mindspring.com (Steve Hovland) Date: Thu, 12 May 2005 18:13:35 -0700 Subject: [Paleopsych] business/crime Message-ID: <01C5571E.511E9000.shovland@mindspring.com> Some of the most successful people in business are actually criminals who are smart enough to avoid taking too much.
Steve Hovland www.stevehovland.net -----Original Message----- From: Michael Christopher [SMTP:anonymous_animus at yahoo.com] Sent: Thursday, May 12, 2005 11:30 AM To: paleopsych at paleopsych.org Subject: [Paleopsych] business/crime >>The reality is that sometimes the characteristics that make someone successful in business or government can render them unpleasant personally. What's more astonishing is that those characteristics when exaggerated are the same ones often found in criminals.<< --That actually makes sense. I met a former gang leader once. He had all the qualities I've seen in successful businesspeople, specifically the ability to "map" the people around him and manipulate them using their own language. I don't think that would be as easy for someone who is fully connected to people, it requires a certain emotional distance and the ability to move people around on a mental chessboard. Gang members often use military or business metaphors as well. The mindset is not too different, only the playing field. Michael Yahoo! Mail Stay connected, organized, and protected. Take the tour: http://tour.mail.yahoo.com/mailtour.html _______________________________________________ paleopsych mailing list paleopsych at paleopsych.org http://lists.paleopsych.org/mailman/listinfo/paleopsych From shovland at mindspring.com Fri May 13 13:44:03 2005 From: shovland at mindspring.com (Steve Hovland) Date: Fri, 13 May 2005 06:44:03 -0700 Subject: [Paleopsych] Free screen saver from San Francisco Orchid Show Message-ID: <01C55787.27919160.shovland@mindspring.com> Steve Hovland www.stevehovland.net -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: image/jpeg Size: 179552 bytes Desc: not available URL: From checker at panix.com Fri May 13 15:49:54 2005 From: checker at panix.com (Premise Checker) Date: Fri, 13 May 2005 11:49:54 -0400 (EDT) Subject: [Paleopsych] NYT: Will It Be a Boy or a Girl? 
You Could Check the Receipt Message-ID: Will It Be a Boy or a Girl? You Could Check the Receipt http://www.nytimes.com/2005/05/12/nyregion/12baby.html By LINDA SASLOW When Rochelle Ludwig became pregnant last year, she and her husband, David, debated whether to find out the sex of their baby early. Knowing that a routine sonogram at 20 weeks would most likely provide that information, they ultimately resisted the urge to look. Laura and Lloyd Rosenbaum also thought it was important to be surprised. "When we thought about the excitement when the baby is born and you hear, 'It's a girl!' or 'It's a boy!' - we didn't want to give up that moment," she said. But the Ludwigs and Rosenbaums wanted someone to know, behind the counter at the baby store. Maybe it is another example of big city neuroticism. Or maybe it is the ultimate in practicality. But the Ludwigs and the Rosenbaums are among a growing number of Manhattan parents-to-be who do not learn the sex of their baby early, but still want the nursery decorated when baby arrives. So they choose two sets of furniture, clothing and bedding, then ask the store owners to call their obstetrician to find out whether to submit the order in pink or blue. "It's a New York mentality," said Dr. Ricky Friedman, an obstetrician on the Upper East Side. "With the new technology at our disposal, just about anyone who wants to know the sex of their baby can. But for about half of our patients, who want to be surprised, they still want to be fully prepared, and everything still has to be planned perfectly." Susan Johnson, co-owner of Blue Bench, in TriBeCa, has kept the secret for a dozen customers. But the service is starting to get more attention in places like New York magazine. "The first time a customer asked me to call her doctor, I was so nervous and afraid that I'd blow the surprise," she said. 
"After we had spent hours together picking out two sets of furniture, bedding and curtains, once I knew what she was having, I told her not to call me again during her pregnancy." Keeping the secret is especially hard for store owners when family members come snooping around. "There have been several occasions," said Pat Meyerson, co-owner of La Layette, on the Upper East Side, "when a sister or mother-in-law has called and asked us to share the secret, but we never tell. Once we know the sex of the baby, we write it down on a sheet of paper and put it away - so we can try to forget that we know." Ms. Johnson said one pregnant customer asked her to share the secret only with her mother. "She wanted to have the nursery painted, carpeted and decorated in time for the baby, but didn't want to know herself," she said. "So every day when she went to work, her mom came to her apartment and worked on the room, then padlocked the door before leaving. For months, she lived with a padlocked nursery." Not knowing can be excruciating, the expectant parents say. After the Rosenbaums' sonogram, the technician wrote the sex of their baby on a slip of paper, folded it into a sealed envelope and handed it to them. "We made it for one block, then ripped up the envelope and threw the pieces into a garbage pail," Ms. Rosenbaum said. But a month later, she returned for another sonogram, "and this time brought it to my doctor's office, so the stores could know." (It was a boy.) Some parents look for clues, said Pamela Scurry, owner of Wicker Garden, on the Upper East Side. One customer, she said, had ordered two sets of layettes, but came back with her mother to choose alternatives when some items were unavailable. "When the saleswoman spent a lot of time with them choosing a new pink blanket, they were smiling at each other, certain we knew it was a girl," Ms. Scurry said. 
When they chose an outfit for a bris, the circumcision ceremony, in case it was a boy, "and the saleswoman spent even longer helping them, they looked at each other again, now convinced that it was a boy. They were so busy trying to figure it out, without really wanting to know." (It was a girl.) Temptation sat in Ms. Ludwig's home for weeks. After she had ordered two sets of bedding and two gliders, one with pink fabric, the other with blue, she told the store to ship the order to the home of her husband's family. But three weeks before her due date, the glider was mistakenly shipped to her apartment. "For three long weeks, it sat in our nursery, in a huge box marked 'Do not open,' " Ms. Ludwig said. "It was torture." (The glider came in blue.) From checker at panix.com Fri May 13 15:50:10 2005 From: checker at panix.com (Premise Checker) Date: Fri, 13 May 2005 11:50:10 -0400 (EDT) Subject: [Paleopsych] NYT: Geneticists Link Modern Humans to Single Band Out of Africa Message-ID: Geneticists Link Modern Humans to Single Band Out of Africa http://www.nytimes.com/2005/05/12/science/12cnd-migrate.html By [2]NICHOLAS WADE A team of geneticists believe they have shed light on many aspects of how modern humans emigrated from Africa by analyzing the DNA of the Orang Asli, the original inhabitants of Malaysia. Because the Orang Asli appear to be directly descended from the first emigrants from Africa, they have provided valuable new clues about that momentous event in early human history. The geneticists conclude that there was only one migration of modern humans out of Africa - that it took a southern route to India, Southeast Asia and Australasia, and consisted of a single band of hunter-gatherers, probably just a few hundred people strong. 
A further inference is that because these events took place during the last Ice Age, Europe was at first too cold for human habitation and was populated only later - not directly from Africa but as an offshoot of the southern migration which trekked back through the lands that are now India and Iran to reach the Near East and Europe. The findings depend on analysis of mitochondrial DNA, a type of genetic material inherited only through the female line. They are reported in today's issue of Science by a team of geneticists led by Vincent Macaulay of the University of Glasgow. Everyone in the world can be placed on a single family tree, in terms of their mitochondrial DNA, because everyone has inherited that piece of DNA from a single female, the mitochondrial Eve, who lived some 200,000 years ago. There were, of course, many other women in that ancient population, but over the generations one mitochondrial DNA replaced all the others through the process known as genetic drift. With the help of mutations that have built up on the one surviving copy, geneticists can arrange people in lineages and estimate the time of origin of each lineage. With this approach, Dr. Macaulay's team calculates that the emigration from Africa took place about 65,000 years ago, pushed along the coastlines of India and Southeast Asia, and reached Australia by 50,000 years ago, the date of the earliest known archaeological site. The Orang Asli - meaning "original men" in Malay - are probably one of the surviving populations descended from this first migration, since they have several ancient mitochondrial DNA lineages that are found nowhere else. These lineages are between 42,000 and 63,000 years old, the geneticists say. Groups of Orang Asli like the Semang have probably been able to remain intact because they are adapted to the harsh life of living in forests, said Dr. Stephen Oppenheimer, the member of the geneticists' team who collected blood samples in Malaysia. 
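The "genetic drift" step described above -- one founding mitochondrial lineage eventually replacing all others by chance alone -- can be illustrated with a toy Wright-Fisher simulation. This is a generic population-genetics sketch, not the Macaulay team's method; the population size, seed, and function name are arbitrary choices for the example:

```python
import random

def generations_to_fixation(n_females, seed=0):
    """Simulate neutral drift of mitochondrial lineages. Each of n_females
    founding women gets a distinct lineage label; every generation, each
    daughter inherits the label of a randomly chosen mother. Returns the
    number of generations until a single lineage remains ("fixation")."""
    rng = random.Random(seed)
    lineages = list(range(n_females))
    generations = 0
    while len(set(lineages)) > 1:
        lineages = [rng.choice(lineages) for _ in range(n_females)]
        generations += 1
    return generations

# Classical theory puts expected fixation time at roughly 2N generations,
# so larger populations take proportionally longer to converge on one "Eve."
print(generations_to_fixation(100))
```

The same logic underlies the dating argument in the article: once a single lineage has fixed, the only remaining variation is the mutations that accumulate on it afterward, which is what lets geneticists build the family tree and estimate lineage ages.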
Some archaeologists believe that Europe was colonized by a second migration, which traveled north out of Africa. This fits with the earliest known modern human sites - which date to 45,000 years ago in the Levant and 40,000 years ago in Europe. But Dr. Macaulay's team says there could only have been one migration, not two, because the mitochondrial lineages of everyone outside Africa converge at the same time to the same common ancestors. Therefore, people from the southern migration, probably in India, must have struck inland to reach the Levant, and later Europe, the geneticists say. Dr. Macaulay said it was not clear why only one group had succeeded in leaving Africa. One possibility is that since the migration occurred by one population budding into another, leaving people in place at each site, the first emigrants may have blocked others from leaving. Another possibility is that the terrain was so difficult for hunter-gatherers, who must carry all their belongings with them, that only one group succeeded in the exodus. Although there is general, but not complete, agreement that modern humans emigrated from Africa in recent times, there is still a difference between geneticists and archaeologists as to the timing of this event. Archaeologists tend to view the genetic data as providing invaluable information about the interrelationship between groups of people, but they place less confidence in the dates derived from genetic family trees. There is no evidence of modern humans outside Africa earlier than 50,000 years ago, says Dr. Richard Klein, an archaeologist at Stanford University. Also, if something happened 65,000 years ago to allow people to leave Africa, as Dr. Macaulay's team suggests, there should surely be some record of this event in the archaeological record within Africa, Dr. Klein said. Yet signs of modern human behavior do not appear in Africa until the transition between the Middle and Later Stone Age, 50,000 years ago, he said. 
"If they want to push such an idea, find me a 65,000-year-old site with evidence of human occupation outside of Africa," Dr. Klein said. Geneticists counter that many of the coastline sites occupied by the first emigrants would now lie under water, since sea level has risen more than 200 feet since the last Ice Age. Dr. Klein expressed reservations about this argument, noting that rather than waiting for the rising sea levels to overwhelm them, people would build new sites further inland. Dr. Macaulay said that genetic dates have improved in recent years now that it is affordable to decode the whole ring of mitochondrial DNA, not just a small segment as before. But he said he agreed "that archaeological dates are much firmer than the genetic ones" and that it is possible his 65,000-year date for the African exodus is too old. Dr. Macaulay's team has been able to estimate the size of the population in Africa from which the founders are descended. The calculation indicates a maximum of 550 women, but the true size may have been considerably less. This points to a single group of hunter-gatherers, perhaps a couple of hundred strong, as the ancestors of all humans outside of Africa, Dr. Macaulay said. From checker at panix.com Fri May 13 15:50:23 2005 From: checker at panix.com (Premise Checker) Date: Fri, 13 May 2005 11:50:23 -0400 (EDT) Subject: [Paleopsych] CBC: A Manifesto on Biotechnology and Human Dignity Message-ID: A Manifesto on Biotechnology and Human Dignity The Center for Bioethics and Culture Network http://www.thecbc.org/redesigned/manifesto.php If you agree with this statement, [15]click here to join with Chuck Colson, James Dobson, Joni Eareckson Tada, Dr. Richard Land, and the other signatories listed above in signing on to the Biotech Manifesto. "Our children are creations, not commodities."President George W. 
Bush

"If any one age really attains, by eugenics and scientific education, the power to make its descendants what it pleases, all men who live after are the patients of that power," slaves to the "dead hand of the great planners and conditioners." --C. S. Lewis

1. The Issue

The debates over human cloning have focused our attention on the significance for the human race of what has been called "the biotech century." Biotechnology raises great hopes for technological progress; but it also raises profound moral questions, since it gives us new power over our own nature. It poses in the sharpest form the question: What does it mean to be human?

2. Biotechnology and Moral Questions

We are thankful for the hope that biotechnology offers of new treatments for some of the most dreaded diseases. But the same technology can be used for good or ill. Scientists are already working in many countries to clone human beings, either for embryo experiments or for live birth. In December 2002, the Raelians, a religious cult that believes the human race was cloned by space aliens, announced that a baby they called "Eve" was the first cloned human. But it is not just the fringe cults that are involved in cloning; that same month, Stanford University announced a project to create cloned embryos for medical experimentation. Before long, scientists will also be able to intervene in human nature by making inheritable genetic changes. Biotechnology companies are already staking claims to parts of the human body through patents on human genes, cells, and other tissues for commercial use. Genetic information about the individual may make possible advances in diagnosis and treatment of disease, but it may also make those with "weaker" genes subject to discrimination along eugenic lines.

3. The Uniqueness of Humanity and Its Dignity

These questions have led many to believe that in biotechnology we meet the moral challenge of the twenty-first century.
For the uniqueness of human nature is at stake. Human dignity is indivisible: the aged, the sick, the very young, those with genetic diseases--every human being is possessed of an equal dignity; any threat to the dignity of one is a threat to us all. This challenge is not simply for Christians. Jews, Muslims, and members of other faiths have voiced the same concerns. So, too, have millions of others who understand that humans are distinct from all other species; at every stage of life and in every condition of dependency they are intrinsically valuable and deserving of full moral respect. To argue otherwise will lead to the ultimate tyranny in which someone determines who is deemed worthy of protection and who is not.

4. Why This Must Be Addressed

As C. S. Lewis warned a half-century ago in his remarkable essay The Abolition of Man, the new capacities of biotechnology give us power over ourselves and our own nature. But such power will always tend to turn us into commodities that have been manufactured. As we develop powers to make inheritable changes in human nature, we become controllers of every future generation. It is therefore vital that we undertake a serious national conversation to ensure a thorough understanding of these questions, and their answers, so that our democratic institutions will be able to make prudent choices as public policy is shaped for the future.

5. What We Propose

We strongly favor work in biotechnology that will lead to cures for diseases and disabilities, and are excited by the promise of stem cells from adult donors and other ethical avenues of research. We see that around the world other jurisdictions have begun to develop ethical standards within which biotech can flourish. We note that Germany, which because of its Nazi past has a unique sensitivity to unethical science and medicine, has enacted laws that prohibit all cloning and other unethical biotech options.
We note that the one international bioethics treaty, the European Convention on Human Rights and Biomedicine, outlaws all inheritable genetic changes and has been amended to prohibit all cloning.

_____________________________________________________________

We therefore seek as an urgent first step a comprehensive ban on all human cloning and inheritable genetic modification. This is imperative to prevent the birth of a generation of malformed humans (animal cloning has led to grotesque failures), and the establishment of vast experimental embryo farms with millions of cloned humans. We emphasize: All human cloning must be banned. There are those who argue that cloning can be sanctioned for medical experimentation--so-called "therapeutic" purposes. No matter what promise this might hold--all of which we note is speculative--it is morally offensive since it involves creating, killing, and harvesting one human being in the service of others. No civilized state could countenance such a practice. Moreover, if cloning for experiments is allowed, how could we ensure that a cloned embryo would not be implanted in a womb? The Department of Justice has testified that such a law would be unenforceable. We also seek legislation to prohibit discrimination based on genetic information, which is private to the individual. We seek a wide-ranging review of the patent law to protect human dignity from the commercial use of human genes, cells, and other tissue. We believe that such public policy initiatives will help ensure the progress of ethical biotechnology while protecting the sanctity of human life. We welcome all medical and scientific research as long as it is firmly tethered to moral truth. History teaches that whenever the two have been separated, the consequence is disaster and great suffering for humanity.

(Signed) Carl Anderson Supreme Knight [16]Knights of Columbus Gary Bauer President [17]American Values Robert H.
Bork Senior Fellow [18]The American Enterprise Institute Nigel M. de S. Cameron, Ph.D. Dean, [19]Wilberforce Forum Director, [20]Council for Biotechnology Policy Dr. Ben Carson Neurosurgeon [21]Johns Hopkins Hospital, Dept. of Neurosurgery Samuel B. Casey Executive Director & CEO [22]Christian Legal Society Charles W. Colson Chairman [23]The Wilberforce Forum, [24]Prison Fellowship Ministries Ken Connor President [25]Family Research Council Paige Comstock Cunningham, J.D. Board Chair and former President [26]Americans United for Life Dr. James Dobson [27]Focus on the Family Dr. Maxie D. Dunnam [28]Asbury Theological Seminary C. Christopher Hook, M.D. [29]Mayo Clinic Deal W. Hudson Editor and Publisher [30]CRISIS magazine Dr. Henk Jochemsen Director [31]Lindeboom Institute Dr. D. James Kennedy Senior Pastor [32]Coral Ridge Presbyterian Church Dr. John Kilner President [33]Center for Bioethics and Human Dignity C. Everett Koop, M.D., Sc.D. [34]C. Everett Koop Institute at Dartmouth Former U.S. Surgeon General Bill Kristol Chairman, [35]Project for The New American Century Editor, [36]The Weekly Standard Jennifer Lahl Executive Director [37]The Center for Bioethics and Culture Dr. Richard D. Land President [38]The Ethics & Religious Liberty Commission of the Southern Baptist Convention Dr. C. Ben Mitchell [39]Trinity International University Editor, [40]Ethics & Medicine R. Albert Mohler, Jr. President [41]The Southern Baptist Theological Seminary Fr. Richard Neuhaus [42]Institute for Religion and Public Life David Prentice, Ph.D. Professor, Life Sciences [43]Indiana State University Sandy Rios President [44]Concerned Women for America Dr. Adrian Rogers Senior Pastor [45]Bellevue Baptist Church Dr. William Saunders Senior Fellow & Director, Center for Human Life & Bioethics [46]Family Research Council Rev. Louis P. Sheldon Chairman [47]Traditional Values Coalition David Stevens, M.D. 
Executive Director [48]Christian Medical Association Joni Eareckson Tada President [49]Joni and Friends Paul Weyrich Chairman and CEO [50]The Free Congress Foundation Ravi Zacharias President [51]Ravi Zacharias International Ministries Biotech Manifesto Signature Form If you agree with this statement, [52]click here to join with Chuck Colson, James Dobson, Joni Eareckson Tada, Dr. Richard Land, and the other signatories listed above in signing on to the Biotech Manifesto. References 15. http://www.thecbc.org/redesigned/manifesto_signer.php 16. http://www.kofc.org/ 17. http://www.ouramericanvalues.org/ 18. http://www.aei.org/ 19. http://www.wilberforce.org/ 20. http://www.biotechpolicy.org/ 21. http://www.hopkinsmedicine.org/hopkinshospital 22. http://www.clsnet.org/ 23. http://www.wilberforce.org/ 24. http://www.pfm.org/ 25. http://www.frc.org/ 26. http://www.unitedforlife.org/ 27. http://www.family.org/ 28. http://www.ats.wilmore.ky.us/ 29. http://www.mayo.edu/ 30. http://www.crisismagazine.com/ 31. http://www.lindeboominstituut.nl/ 32. http://www.crpc.org/ 33. http://www.chbd.org/ 34. http://www.dartmouth.edu/dms/koop/index.shtml 35. http://www.newamericancentury.org/ 36. http://www.weeklystandard.com/ 37. http://www.thecbc.org/ 38. http://www.erlc.com/ 39. http://www.tiu.edu/ 40. http://www.ethicsandmedicine.com/ 41. http://www.sbts.edu/ 42. http://www.firstthings.com/ 43. http://www.indstate.edu/ 44. http://www.cwfa.org/ 45. http://www.bellevue.org/ 46. http://www.frc.org/ 47. http://www.traditionalvalues.org/ 48. http://www.cmdahome.org/ 49. http://www.joniandfriends.org/ 50. http://www.freecongress.org/ 51. http://www.gospelcom.net/rzim 52. 
http://www.thecbc.org/redesigned/manifesto_signer.php From checker at panix.com Fri May 13 15:50:35 2005 From: checker at panix.com (Premise Checker) Date: Fri, 13 May 2005 11:50:35 -0400 (EDT) Subject: [Paleopsych] Science-Spirit: Testing Our Ethical Limits Message-ID: Testing Our Ethical Limits http://science-spirit.org/new_detail.php?news_id=537 Reproductive genetic testing promises to screen out fatal diseases, pinpoint donor matches, and, potentially, design our offspring. Is that really what we want? by Trey Popp. In 1995, Laurie Strongin gave birth to her first child--a son who carried with him an unexpected inheritance. Henry had Fanconi anemia, a rare, genetically determined disease that culminates in childhood death. His only real hope for reaching adolescence was a cord stem cell transplant from a sibling with the same Human Leukocyte Antigen (HLA) tissue type. The next year, Strongin and her husband, Allen Goldberg, had a second son. While Jack beat the one-in-four chance of having Fanconi anemia, he wasn't a tissue match. Those odds were slimmer. It was during that second pregnancy that doctors presented Strongin and Goldberg with an enticing possibility. It might be possible, they said, to augment the standard process of in vitro fertilization (IVF) with a new technique called preimplantation genetic diagnosis (PGD). This would allow them to screen embryos in a search for one that didn't have Fanconi anemia and was a donor match. If the resulting pregnancy was successful, blood from the newborn's umbilical cord, which contains the hematopoietic stem cells that are more successfully grafted into a recipient than bone marrow cells, would be used to change Henry's fate. "The decision was easy for us," Strongin says. "It felt like it would save two lives at once."
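The odds in play can be worked out from basic Mendelian genetics. A rough sketch, assuming (as is standard for this disease) autosomal recessive inheritance from two carrier parents, and that the HLA locus segregates independently of the disease gene:

```python
from fractions import Fraction

# Each embryo of two carrier parents has a 1-in-4 chance of inheriting
# both disease alleles; independently, a 1-in-4 chance of inheriting
# the same HLA haplotypes as an existing sibling.
p_unaffected = Fraction(3, 4)
p_hla_match = Fraction(1, 4)

# Probability a given embryo is both disease-free and a donor match:
p_usable = p_unaffected * p_hla_match
print(p_usable)  # 3/16, a little under 19%

def p_no_match(n):
    """Chance that none of n screened embryos qualifies."""
    return float((1 - p_usable) ** n)

print(round(p_no_match(8), 2))  # ~0.19 even with 8 embryos in a cycle
```

Since a hormone cycle yields only a handful of eggs, several cycles can easily pass without a single usable embryo, which is what happened here.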
Statistically, it would not be easy to produce an embryo that had the right set of cell-surface antigens because not only were the odds long to begin with, but every round of hormone treatment would produce only a handful of eggs. The first three cycles didn't yield the right combination, but Strongin and Goldberg pressed on. The next three cycles went the same way. "It was a new science," Strongin recalls, "and I think that a lot of times, new things don't work the first time." And sometimes, new things don't work at all until it's too late. As Strongin was going through her ninth cycle, Henry died. Since that time, at least 2,000 babies have been born with the aid of PGD. Costs are decreasing, now hovering around $15,000-- of which IVF constitutes the vast bulk of the expense. Parents have used the technology to prevent the birth of babies with Tay-Sachs, cystic fibrosis, Down syndrome, and a variety of other conditions that arise from single-gene mutations or chromosomal abnormalities. It also has been used successfully to the same end that eluded Laurie Strongin. And Henry. Reproductive genetic testing technologies like PGD are becoming increasingly common, but, for the most part, are familiar only to couples burdened either with infertility or unfortunate genes. As with many new technologies, regulation is haphazard and standard practices have yet to evolve--which is, in its own right, cause for concern. But the debate over the ethics of reproductive genetics is gathering momentum on other levels as well. "One foundational ethical concern for some of our members is the notion that screening requires discarding those embryos that don't meet the prerequired standard," says Carter Snead, general counsel for The President's Council on Bioethics. "The destruction of embryos is a source of great ethical disquiet for some members of our council. 
"There are also worries from some members of the council that the more genetic control you're exercising over your child's conception, the more we move in the direction of manufacture, where children are not regarded as random gifts, but are regarded as the object of the will of the parents," Snead continues, adding concerns about the potential psychological harm done to children selected for a specific trait to the list of worries. "And this becomes even more amplified when you're talking about nontherapeutic or nonmedical traits." Awareness of these issues is beginning to spread to the general public. The Genetics and Public Policy Center at Johns Hopkins University recently unveiled what it bills as the most comprehensive measurement of public opinion about reproductive genetics to date. "There was a vacuum in the whole area of reproductive genetic policy because it's so contentious and so divisive that nobody was really occupying that niche except in an advocacy posture," says Kathy Hudson, director of the center. "There were the technology enthusiasts on one end, who are portrayed as having never met a technology they wouldn't like to try and never met a regulation that they would support. And at the other end are conservative religious voices who are very wary of the hand of science intervening in human reproduction." What emerged from Hudson's work was a picture with considerably more nuance--especially in comparison to the rigidly polarized abortion debate that occupies similar ground. Two-thirds of Americans seem to be comfortable with the idea of using PGD to prevent fatal childhood diseases or to produce a tissue match in a case like the one Strongin and Goldberg faced--even though the technology involves the creation of some embryos that will be destroyed. Self-described fundamentalist and evangelical Christians tend to be more wary, but slightly more than half of them support the use of PGD when searching for an HLA tissue match.
Approval levels do fall across all groups, however, when it comes to reducing the likelihood of adult-onset diseases, choosing a child's gender, or selecting traits like intelligence or strength (which is currently impossible). Unsurprisingly, survey participants of all social and ethnic backgrounds voiced fears that advances in reproductive genetic technologies may lead to a new era of eugenics. Yet even those who advocate some sort of ethics-based regulation are reluctant to limit their own choices. "I think that people are able to hold, at the same time, views and values that appear to conflict," Hudson says. "They are concerned about eugenic applications, but they want to make sure that individual, private reproductive decisions are protected. So on the one hand, they're worried. On the other hand, they're worried about what you would do to address that first worry." Aside from the general laws governing medical laboratories and the doctor-patient relationship, there are few rules that specifically address genetic testing. "It's incredible to me that there is no regulation whatsoever," says Arthur Caplan, director of the University of Pennsylvania Center for Bioethics. "There isn't even regulation of what tests can be offered by whom--forget about whether people should pick it. If I open up a clinic and say I'm going to do genetic testing on embryos, I don't even have to satisfy any kind of standard licensure requirement." As a matter of principle, Caplan does not oppose using PGD to weed out gene mutations or even to select for desirable traits, were such a thing to become possible. But, he cautions, focusing simply on reproductive genetic technologies may have the indirect effect of diverting our attention from more pressing problems. "If there really was a demand for taller, faster, stronger kids, I would worry that that might distort the ability of people who are truly sick and truly disabled to get resources.
They don't get them now, frequently, and there would be even less chance if a lot of our resources went toward designing our descendants. So I see it more as a justice issue." The question of equitable distribution of healthcare resources surfaces often in bioethics circles. Glenn McGee, editor of the American Journal of Bioethics, has written extensively about reproductive genetic testing. He observes that evaluating PGD in economic terms can be a powerful argument in its favor. "PGD framed as simple medical endeavor is a no-brainer. Take cystic fibrosis: Who wouldn't say we should have PGD for free for any couple that's at risk? It costs $2 million, $3 million, $4 million in average lifetime expenditure, at least half of which is covered by state insurance. Plus, you've got the social cost of people going bankrupt trying to care for their child. So why not cover it?" But not everyone is so certain that doctors, genetic counselors, or academics ought to be making these kinds of decisions. "Professionals have stupid ideas about life with disability," says Adrienne Asch, who teaches ethics at Wellesley College in Massachusetts. "Uninformed, uneducated, narrow notions. And their curricula and media images and bioethics have not helped them adequately." The idea that cystic fibrosis qualifies as a condition to be eradicated by PGD infuriates Asch. "The mean life expectancy is over thirty years for cystic fibrosis. You can live longer than that," she says. "You can go to school, you can grow up, you can play with your playmates, you can have friends, you can get married, you can have a job. Yes, you're going to die, and you're going to die sooner than if you don't have cystic fibrosis, but you can have a very interesting and complicated and rich life before you die." Asch is not alone in arguing against the notion that life with a disability is to be avoided at all costs.
She cites the National Down Syndrome Congress, the Spina Bifida Association, and similar groups as allies. "People claim that the only ones who talk about this are right-wing, religious zealots," Asch says. "They're wrong. Lots of people talk about what we want out of life." So what do we want out of life? For better or worse, technologies like PGD are allowing parents to avoid that question by asking another: What do we want out of our children? In a free and unregulated market, services will spring up to offer all kinds of answers. For a little more than $18,000, a clinic currently operating in California offers parents PGD solely for the purpose of selecting the gender of their offspring. Business is good. "Whatever people say when you poll them, all that really matters is, are there clients?" says McGee, the bioethicist. "And there just are. There are clients that want PGD, and the more services that are offered, the more people line up. It's quite clear from my research that the more these services become available, the more people will avail themselves of them." Looking to the future, McGee worries about the psychological impact high-tech pregnancies may have on the children resulting from them. "The birds and the bees are gone," he laments. "With multiple people involved, you've got the ants and the termites. The children's story for a child who's made with PGD is just different. There's no fate. There's no accident. It's all about planned parenthood. And in some ways, that's a good thing. It's not that it's bad--it's that kids need to have some sense that they're not a product." There is little question that now is the time to start thinking about where reproductive genetic technologies may take us, and who should have a say in plotting that course. But there is also good cause not to get carried away. The age of ants and termites has not yet dawned, and may never. 
Mark Hughes, the doctor who directed Strongin's PGD almost a decade ago, cautions that our ability to manipulate reproductive outcomes is severely limited not just by our lack of knowledge, but by biology itself. Single-gene disorders are one thing, he says, but most diseases and traits are governed by a variety of genes working in concert and mediated by environmental factors. The odds against the appearance of a desired trait are only lengthened by the limitations of our reproductive systems. "Biology is going to put up a wall," Hughes says, insisting PGD is simply not a sensible tool for customizing offspring. Consider Laurie Strongin, who was overwhelmed by the odds when just two gene markers were in play. "It's dangerous to simplify this into something about parental preference and ease of use," she warns. "Since we went through all this, I've heard people trying to compare PGD to the idea of designer babies, worrying about parents taking a willy-nilly approach to their reproductive choices. There's nothing willy-nilly about choosing life or death." From checker at panix.com Fri May 13 15:51:04 2005 From: checker at panix.com (Premise Checker) Date: Fri, 13 May 2005 11:51:04 -0400 (EDT) Subject: [Paleopsych] LRC: Sabine Barnhart: Germany's Population Decline Message-ID: Sabine Barnhart: Germany's Population Decline http://www.lewrockwell.com/barnhart/barnhart31.html 5.4.29 Contradictions occur when truth clashes with thinking that purports to be rational. It results in behaviors whose consequences are the opposite of what a policy or thought process originally intended. This is also known as the "law of unintended consequences." The policy maker or planner often compromises with society's misguided ideas by endorsing behaviors that are actually damaging to society. Societies will pressure their elected representatives into legitimizing lifestyles that often have an adverse effect on the nation's productivity.
People then feel entitled to receive the cake the politicians have baked, yet they bemoan the system when there is no delivery, complain about its taste, and keep sending it back to the kitchen for try after try. Their short-term vision cannot see the future failures of their philosophy. Governments, which are placed on their thrones by the will of the people, enforce these irrational policies on their citizens. A closer look at statistics shows that government policies, in cooperation with the majority voices of society, create more poverty and hubris with their legislation. The data are a visible sign of an alarming economic and social crisis in these countries. Germany is currently finding itself in such a crisis. With a birth rate of 8.45 per 1,000 people in 2004 (down from 9.35 in 2000), its population of 82.5 million is aging quickly. With deaths outnumbering births, Germany saw a net population drop of 143,000 people in 2004. Even if Germany intends to fill the gap with an influx of immigration from the East, drawn by its attractive welfare system, it cannot escape the reality that people are not reproducing at a rate that would keep the social security system from breaking apart. Although attempts are made to import highly skilled workers, Germany currently finds itself in an immigration mess that has placed its foreign minister, Joschka Fischer, under pressure over lax visa regulations that allowed thousands of immigrants from Ukraine to enter the country from 2000 through 2003. His political opposition claims an increase in illegal workers and a rise in criminal activity. Unemployment, which stood at 10.4% in February 2005, presents an additional obstacle for people who wish to start a family. Job creation is restricted by government interference that places too many demands and high taxes on businesses.
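As a rough consistency check, the demographic figures cited for 2004 hang together. A sketch using only the numbers quoted in the article (the implied death rate treats the net drop as births minus deaths, ignoring migration, which is a simplifying assumption):

```python
# Figures from the article: crude birth rate 8.45 per 1,000 (2004),
# population 82.5 million, net population drop of 143,000.
population = 82.5e6
birth_rate = 8.45 / 1000.0
net_drop = 143_000

# Implied annual births:
births = birth_rate * population
print(round(births))  # ~697,000 -- matching the ~700,000 births per year cited

# Implied deaths (ignoring migration) and crude death rate:
deaths = births + net_drop
print(round(deaths / population * 1000, 1))  # ~10.2 per 1,000
```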
Social welfare has to be supported, as does the motivation to increase the population. And yet the businesses and industries that could provide employment are punished and restricted by labor laws and labor unions, forcing them to move to countries in the East where taxes are lower. Capital that is essential for economic growth is diverted to support policies that in the long run will drain it further and suffocate any chance of a substantially growing economy, endangering the incentives for production and damaging the chances of those stuck on the bottom rung of the economic ladder. The uncertainty of this shaky policy instills in every second German a fear of losing his job, along with an increase in mental depression in the population. The German government has already spent over 150 million euros on family issues in an attempt to raise the birth rate. Since 1964, annual births have fallen from 1.4 million to 700,000, a consistent 40-year pattern that the government has not been able to break with policies. Its incentive of financially rewarding average-income parents with two children by adding approximately 2,000 euros each year to their paycheck is not enough to instill the desire to marry and have children. Another consistent pattern appears in a 1998 study showing that approximately a fourth of all children born in EU countries are born out of wedlock, up from only 10% of all births in 1980. Among the countries surveyed, Italy stood at 8.3%, while Sweden (3.6%) showed the lowest desire to marry. Even Germany shows this trend increasing each year: almost every fourth child is now born outside the union of marriage. Renate Schmidt, Bundesfamilienministerin (Federal Minister of Family Affairs), also finds this trend very alarming. A recent Unicef study concluded that 40% of all German children in single-parent households live in poverty.
Unicef defines poverty as living in a household with less than half the average income. Her Social Democratic Party (SPD) intends to solve the problem with stronger financial support, increasing the monthly child allowance to 140 euros for single-parent households. Every tenth child in Germany is now classified by government standards as living in poverty and has become a burden on society. Society is willingly removing the responsibility from those individuals who have engaged in behaviors that it freely endorses. The consequence falls on the wallet of the state, which rewards the behavior of its citizens through financial aid--money paid by the taxpayers. Pity is not a policy that can change behavior when its vicious cycle burdens the productivity of society. The left-wing voices of society in most Western nations are now urging the Church to lower its standards by embracing behavior that is contrary to a healthy society. The recent election of the new Pope, Benedict XVI, has met such opposition in his native Germany because of his orthodox stance on church dogma. However, evaluating the statistics above, one can see that these loud, irrational voices do not produce a healthy outcome for their society. A healthy society consists of man and woman and their children within the sanctity of marriage, providing a nurturing family environment that raises productive and healthy children. It is in this union that genuine care is extended. Children see that they were wanted for the sake of love and not for the sake of being an integral part of the social security program supporting a retired populace. Their existence becomes a personal and loving choice and not a financial one, since government desires offspring only for additional taxation. It is in the union of marriage that children learn to love and receive the foundation of moral teachings.
Endorsing any other behavior outside this context of truth will result in consequences that amount to a social disaster and a continued decline of living standards. Frau Renate Schmidt should see that family consists of values, and that values can be instilled only by parents and by adhering to moral codes whose transgression cannot be rewarded. The transgression of a law demands retribution, which courts of law are to determine. The Church can offer the spiritual rehabilitation that no secular policy can provide. Contradictions will cease when truth is accepted as reality. That is when the scales will fall from people's eyes and they will recognize that they have been blinded. Germany knows the basic principles of that reality, because its small communities were once founded on moral behavior. Commitment to improving Germany's situation does not start from the top down; it starts from within, with a change of thinking in its people. The new Pope may have his hands full re-introducing this concept to his parishioners. Government cannot replace what God and family can offer. If Germans can shed the false sense of security offered by their government and its socialistic policies, Germany can rejuvenate itself into the thriving culture and healthier society that are still strongly present in its traditions. All it needs is freedom from its irrational policies, so that people will want to marry and have children who can be provided for by the fruits of their own labor. Sabine Barnhart [[9]send her mail] moved to the US in 1980 and lives in Fort Worth, TX with her three children. For the past 15 years she has been working for an international service company. [10]Sabine Barnhart Archives References 9. mailto:SabineMaria2003 at aol.com 10.
http://www.lewrockwell.com/barnhart/barnhart-arch.html From checker at panix.com Fri May 13 18:28:46 2005 From: checker at panix.com (Premise Checker) Date: Fri, 13 May 2005 14:28:46 -0400 (EDT) Subject: [Paleopsych] JLE: Carneiro, Heckman, and Masterov: Labor Market Discrimination and Racial Differences in Premarket Factors Message-ID: Labor Market Discrimination and Racial Differences in Premarket Factors The Journal of Law and Economics, vol. XLVIII (April 2005) PEDRO CARNEIRO University College London JAMES J. HECKMAN University of Chicago DIMITRIY V. MASTEROV University of Chicago ABSTRACT This paper investigates the relative significance of differences in cognitive skills and discrimination in explaining racial/ethnic wage gaps. We show that cognitive test scores for exams taken prior to entering the labor market are influenced by schooling. Adjusting the scores for racial/ethnic differences in education at the time the test is taken reduces their role in accounting for the wage gaps. We also consider evidence on parental and child expectations about education and on stereotype threat effects. We find both factors to be implausible alternative explanations for the gaps we observe. We argue that policies need to address the sources of early skill gaps and to seek to influence the more malleable behavioral abilities in addition to their cognitive counterparts. Such policies are far more likely to be effective in promoting racial and ethnic equality for most groups than are additional civil rights and affirmative action policies targeted at the workplace. * This research was supported by a grant from the American Bar Foundation and National Institutes of Health grant R01-HD043411. We thank an anonymous referee for helpful comments. Carneiro was supported by Fundação Ciência e Tecnologia and Fundação Calouste Gulbenkian.
We thank Derek Neal and Rodrigo Soares for helpful comments and Maria Isabel Larenas, Maria Victoria Rodriguez, and Xing Zhong for excellent research assistance. I. INTRODUCTION It is well documented that civil rights policy directed toward the South raised black economic status in the 1960s and 1970s.1 Yet substantial gaps remain in the market wages of African-American males and females compared with those of white males and females.2 There are sizable wage gaps for Hispanics as well. Columns 1 of Table 1 report, for various ages, the mean hourly log wage gaps for a cohort of young black and Hispanic males and females.3 The reported gaps are not adjusted for differences in schooling, ability, or other market productivity traits. The table shows that, on average, black males earned wages that were 25 percent lower than those of white males in 1990. Hispanic males earned wages that were 17.4 percent lower in the same year. The gaps increase for males as the cohort ages. For women, there are smaller gaps for blacks and virtually no gap at all for Hispanics, and the gaps for women show no clear trend with age.4 Joseph Altonji and Rebecca Blank5 report similar patterns using data from the March supplement to the Current Population Survey for 1968-98. TABLE 1 Change in the Black-White Log Wage Gap Induced by Controlling for Age-Corrected AFQT Scores, 1990-2000 These gaps are consistent with claims of pervasive labor market discrimination against minorities. Minority workers with the same ability and training as white workers may be receiving lower wages. There is, however, another equally plausible explanation consistent with the same evidence. Minorities may bring less skill and ability to the market. Although there may be discrimination or disparity in the development of these valuable skills, the skills may be rewarded equally across all demographic groups in the labor market.
Clearly, a variety of intermediate explanations that combine both hypotheses are consistent with the data just presented. The two polar interpretations of market wage gaps have profoundly different policy implications. If persons of identical skill are treated differently in the labor market on the basis of race or ethnicity, a more vigorous enforcement of civil rights and affirmative action in the marketplace would appear to be warranted. On the other hand, if the gaps are solely due to unmeasured abilities and skills that people bring to the labor market, then a redirection of policy toward fostering skills should be emphasized as opposed to a policy of ferreting out discrimination in the workplace. Derek Neal and William Johnson6 shed light on the relative empirical importance of market discrimination and skill disparity in accounting for wage gaps by race. Controlling for scholastic ability measured in the mid-teenage years, they substantially reduce but do not fully eliminate wage gaps for black males in 1990-91 data. They more than eliminate the gaps for black females. Columns 2 in Table 1 show our version of the estimates reported in the Neal-Johnson study, expanded to cover additional years.7 For black males, controlling for an early measure of ability cuts the black-white wage gap in 1990 by 76 percent. For Hispanic males, controlling for ability essentially eliminates the wage gap with whites. For women, the results are even more striking. Wage gaps are actually reversed, and controlling for ability produces higher wages for minority females. This evidence suggests that the endowments people bring to the labor market play a substantial role in accounting for minority wage gaps. This paper critically examines the Neal-Johnson argument and brings fresh evidence to bear on it.
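The logic of Table 1's two columns (a raw race gap versus a gap controlling for an early ability measure) can be illustrated with a small simulation. This is a hypothetical sketch on synthetic data, not the NLSY79; the coefficients, group labels, and sample size are all invented for illustration:

```python
# Simulated illustration (NOT NLSY79 data) of the Neal-Johnson exercise:
# wages here depend only on skill, but mean skill differs across groups,
# so the raw log wage gap is sizable while the ability-adjusted gap is
# approximately zero.
import numpy as np

rng = np.random.default_rng(3)
n = 50_000
minority = rng.integers(0, 2, n).astype(float)     # toy group indicator
skill = rng.normal(-0.8 * minority, 1.0)           # lower mean skill endowment
logw = 2.0 + 0.25 * skill + rng.normal(0, 0.3, n)  # no market discrimination

# Column-1 analogue: regress log wage on the group dummy alone.
X_raw = np.column_stack([np.ones(n), minority])
raw_gap = np.linalg.lstsq(X_raw, logw, rcond=None)[0][1]

# Column-2 analogue: add the ability score as a control.
X_adj = np.column_stack([np.ones(n), minority, skill])
adj_gap = np.linalg.lstsq(X_adj, logw, rcond=None)[0][1]
```

Here raw_gap comes out near -0.2 log points while adj_gap is near zero; in the paper's actual data, of course, conditioning on ability shrinks but does not fully eliminate the gap for black males.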
With some important qualifications, our analysis supports their conclusion that factors determined outside of the market play the major role in accounting for minority-majority wage differentials in modern labor markets. In producing the wage gaps shown in Table 1, we follow a practice suggested by Neal and Johnson and do not adjust for the effects of racial and ethnic differences in schooling, occupational choice, or work experience on wages. Racial and ethnic differences in these factors may reflect responses to labor market discrimination and should not be controlled for in regressions that estimate the "full effect" of race on wages through all channels, since doing so may spuriously reduce estimated wage gaps by introducing a proxy for discrimination into the control variables. While the motivation for their procedure is clear, their qualitative claim (that conditioning on schooling would spuriously shrink the measured gaps) is false: including schooling in a wage regression raises the estimated wage gaps and produces more evidence of racial disparity. Gaps when schooling is fixed and when it is not fixed are both of interest and answer different questions. Gaps in measured ability by ethnicity and race are substantial. Figure 1 plots the ability distribution as measured by age-corrected Armed Forces Qualification Test (AFQT) scores8 for males and females in the National Longitudinal Survey of Youth of 1979 (NLSY79).9 As noted by Richard Herrnstein and Charles Murray,10 ability gaps are a major factor in accounting for a variety of racial and ethnic disparities in socioeconomic outcomes. Stephen Cameron and James Heckman11 show that controlling for ability, blacks and Hispanics are more likely to enter college than are whites.12 FIGURE 1. Density of age-corrected AFQT scores for NLSY79 males born after 1961 Neal and Johnson13 argue that ability measured in the teenage years is a "premarket" factor, meaning that it is not affected by expectations or actual experiences of discrimination in the labor market.
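The age correction described in note 8 (the standardized residual from regressing the raw AFQT score on dummy variables for age at the test date) reduces, in the saturated-dummy case, to removing age-group means and rescaling. A minimal sketch, with hypothetical function and variable names:

```python
# Sketch of the note-8 age correction: regressing a score on a full set
# of age dummies is equivalent to subtracting each age group's mean, so
# the standardized residual can be computed directly.
import numpy as np

def age_corrected_score(score, age):
    """Standardized residual of `score` after removing age-group means."""
    score = np.asarray(score, dtype=float)
    age = np.asarray(age)
    resid = np.empty_like(score)
    for a in np.unique(age):
        mask = age == a
        resid[mask] = score[mask] - score[mask].mean()  # residual from age dummies
    return resid / resid.std()  # rescale to mean 0, variance 1

# Toy data: raw scores that rise mechanically with age at the test date.
rng = np.random.default_rng(0)
ages = rng.integers(16, 24, size=1_000)
raw = 40.0 + 2.0 * ages + rng.normal(0, 10, size=1_000)
z = age_corrected_score(raw, ages)   # comparable across test-date ages
```

After the correction, the mean of z within every test-date age group is zero, so differences in age at the test no longer mechanically shift the score.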
They offer no explicit criterion for determining which factors are premarket and which are not. Schooling affects test scores,14 and levels of minority schooling are lower than white schooling levels, both generally and in the samples used by Neal and Johnson. Their test score is contaminated by schooling attainment at the date of the test. When their test scores are adjusted for this factor, adjusted wage gaps increase. The gaps in ability evident in Figure 1 stem in part from lower levels of schooling by minorities at the time of the test and may also arise from lowered academic effort in anticipation of future discrimination in the labor market. If skills are not rewarded fairly, the incentive to acquire them is diminished for those subject to prejudicial treatment. Discrimination in the labor market might sap the incentives of children and young adults to acquire skills and abilities but may also influence the efforts they exert in raising their own offspring. This means that even after adjusting their test scores for schooling, measured ability may not be a true premarket factor. Neal and Johnson15 mention this qualification in their original paper, and their critics have subsequently reiterated it. The gaps in ability may also be a consequence of adverse environments. Even if all wage gaps are due to ability, uncontaminated by expectations of market discrimination, the appropriate policy for eliminating ability gaps is not apparent from Table 1. Should policies focus on early ages through enriched Head Start programs or on improving schooling quality and reducing school dropout and repetition rates that plague minority children at later ages? This paper demonstrates that ability gaps open up very early. Minorities enter school with substantially lower measured ability than whites. 
The black-white ability gap widens as the children get older and obtain more schooling, but the contribution of formal education to the widening of the gap is small when compared to the size of the initial gap. There is a much smaller widening of the Hispanic-white ability gap with schooling. Our evidence and that of James Heckman, Maria Isabel Larenas, and Sergio Urzua16 suggest that school-based policies are unlikely to have substantial effects on eliminating minority ability gaps. Factors that operate early in the life cycle of the child are likely to have the greatest impact on ability. The early emergence of ability gaps indicates that child expectations play only a limited role in accounting for such gaps since very young children are unlikely to have formed expectations about labor market discrimination and to make decisions based on those expectations. However, parental expectations of future discrimination may still play a role in shaping children's outcomes. The early emergence of measured ability differentials also casts doubt on the empirical importance of the "stereotype threat"17 as a major factor contributing to black-white test score differentials. The literature on this topic claims that black college students at selective colleges perform worse on tests when they are told that the tests may be used to confirm stereotypes about black-white ability differentials. The empirical importance of this effect is in dispute in the psychology literature.18 The children in our data are tested at very young ages and are unlikely to be aware of stereotypes about minority inferiority or be affected by the stereotype threat that has been empirically established only for students at elite colleges. In addition, large gaps in test scores are also evident for Hispanics, a group for whom the stereotype threat has not been documented. The stereotype threat literature claims that measured test scores for minorities understate their true ability. 
Unless the effect is uniform across ability levels, incremental ability should be rewarded differently between blacks and whites. We find no evidence of such an effect. Adjusting for the schooling attainment of minorities at the time that they take tests provides an empirically important qualification to the Neal-Johnson study.19 An extra year of schooling has a greater impact on test scores for whites and Hispanics than for blacks. Adjusting the test score for schooling disparity at the date of the test leaves more room for interpreting wage gaps as arising from labor market discrimination. This finding does not necessarily overturn the conclusions of the Neal-Johnson analysis. At issue is the source of the gap in schooling attainment at the date of the test. The Neal-Johnson premarket factors are a composite of ability and schooling and are likely to reflect both the life cycle experiences and the expectations of the child. To the extent that they reflect expectations of discrimination as embodied in schooling that affects test scores, the scores are contaminated by market discrimination and are not truly premarket factors. An open question is how much of the gap in schooling is due to expectations about future discrimination. The evidence from data on parents' and children's expectations tells a mixed story. Minority child and parent expectations about the children's schooling prospects are as optimistic at ages 16-17 as those of their white counterparts, although actual schooling outcomes of whites and minorities are dramatically different. Differential expectations at these ages cannot explain the gaps in ability evident in Figure 1. For children 14 and younger, parent and child expectations about schooling are much lower for blacks than for whites, although only slightly lower for Hispanics than for whites. All groups are still rather optimistic in light of subsequent schooling attendance and performance.
At these ages, differences in expectations across groups may lead to differential investments in skill formation. While lower expectations may be a consequence of perceived labor market discrimination, they may also reflect child and parental perception of the lower endowments possessed by minorities, so this evidence is not decisive. A focus on cognitive skill gaps, while traditional,20 misses important noncognitive components of social and economic success. We show that noncognitive (behavioral) gaps also open up early. Previous work shows that they play an important role in accounting for market wages. Policies that focus solely on improving cognitive skills miss an important and promising determinant of socioeconomic success and disparity that can be affected by policy.21 The rest of the paper proceeds in the following way. Section II presents evidence on the evolution of test score gaps over the life cycle of the child. Section III discusses the evidence on stereotype threat. Section IV presents our evidence on how adjusting for schooling at the date of the test affects the conclusions of the Neal-Johnson analysis and how schooling affects test scores differentially for minorities. Section V discusses our evidence on child and parental expectations. Section VI presents evidence on noncognitive skills that parallels the analysis of Section II. Section VII concludes. 1 John J. Donohue & James J. Heckman, Continuous versus Episodic Change: The Impact of Civil Rights Policy on the Economic Status of Blacks, 29 J. Econ. Literature 1603 (1991). 2 The literature on African-American economic progress in the twentieth century is surveyed in James J. Heckman & Petra Todd, Understanding the Contribution of Legislation, Social Activism, Markets and Choice to the Economic Progress of African Americans in the Twentieth Century (unpublished manuscript, Am. Bar Found. 2001). 
3 These gaps are for a cohort of young persons aged 26-28 in 1990 from the National Longitudinal Survey of Youth of 1979 (NLSY79). They are followed for 10 years until they reach ages 36-38 in 2000. 4 However, the magnitudes (but not the direction) of the female gaps are less reliably determined, at least for black women. Derek Neal, The Measured Black-White Wage Gap among Women Is Too Small, 112 J. Pol. Econ. S1 (2004), shows that racial wage gaps for black women are underestimated by these types of regressions since they do not control for selective labor force participation. This same line of reasoning is likely to hold for Hispanic women. 5 Joseph Altonji & Rebecca Blank, Gender and Race in the Labor Markets, in 3C Handbook of Labor Economics 3143 (Orley Ashenfelter & David Card eds. 1999). 6 Derek Neal & William Johnson, The Role of Premarket Factors in Black-White Wage Differences, 104 J. Pol. Econ. 869 (1996). 7 We use a sample very similar to the one used in their study. It includes individuals born only in 1962-64. This restriction is designed to alleviate the effects of differential schooling at the test date on test performance and to ensure that the AFQT is taken before the individuals enter the labor market, so that it is more likely to be a premarket factor. 8 Age-corrected AFQT is the standardized residual from the regression of the AFQT score on dummy variables for age at the time of the test. AFQT is a subset of four out of 10 Armed Services Vocational Aptitude Battery (ASVAB) tests used by the military for enlistment screening and job assignment. It is the summed score from the word knowledge, paragraph comprehension, mathematics knowledge, and arithmetic reasoning ASVAB tests. 9 In our Web appendix (http://jenni.uchicago.edu/JLE/FILES/JLE_Appendix_2004-07-20_dvm.pdf), we show that the same patterns emerge when we divide the sample by gender. 10 Richard Herrnstein & Charles Murray, The Bell Curve (1994). 11 Stephen V. Cameron & James J.
Heckman, The Dynamics of Educational Attainment for Black, Hispanic, and White Males, 109 J. Pol. Econ. 455 (2001). 12 Sergio Urzua, The Educational White-Black Gap: Evidence on Years of Schooling (Working paper, Univ. Chicago, Dep't Econ. 2003), shows that this effect arises from greater minority enrollment in 2-year colleges. Controlling for ability, whites are more likely to attend and graduate from 4-year colleges. Using the Current Population Survey, Sandra E. Black & Amir Sufi, Who Goes to College? Differential Enrollment by Race and Family Background (Working Paper No. w9310, Nat'l Bur. Econ. Res. 2002), finds that equating the family background of blacks and whites eliminates the black-white gap in schooling only at the bottom of the family background distribution. Furthermore, the gaps are eliminated in the 1980s but not in the 1990s. 13 Neal & Johnson, supra note 6. 14 See Karsten Hansen, James J. Heckman, & Kathleen Mullen, The Effect of Schooling and Ability on Achievement Test Scores, 121 J. Econometrics 39 (2004). 15 Neal & Johnson, supra note 6. 16 James Heckman, Maria Isabel Larenas, & Sergio Urzua, Accounting for the Effect of Schooling and Abilities in the Analysis of Racial and Ethnic Disparities in Achievement Test Scores (Working paper, Univ. Chicago, Dep't Econ. 2004). 17 See Claude Steele & Joshua Aronson, Stereotype Threat and the Test Performance of Academically Successful African Americans, in The Black-White Test Score Gap 401 (Christopher Jencks & Meredith Phillips eds. 1998). 18 See Paul Sackett, Chaitra Hardison, & Michael Cullen, On Interpreting Stereotype Threat as Accounting for African American-White Differences in Cognitive Tests, 59 Am. Psychologist 7 (2004). 19 Neal & Johnson, supra note 6. 20 See, for example, Christopher Jencks & Meredith Phillips, The Black-White Test Score Gap (1998). 21 See Pedro Carneiro & James J. Heckman, Human Capital Policy, in Inequality in America: What Role for Human Capital Policies?
77 (James Heckman & Alan Krueger eds. 2003). II. MINORITY-WHITE DIFFERENCES IN EARLY TEST SCORES AND EARLY ENVIRONMENTS This section summarizes evidence from the literature and presents original empirical work that demonstrates that minority-white cognitive skill gaps emerge early and persist through childhood and the adolescent years. Christopher Jencks and Meredith Phillips22 and Greg Duncan and Jeanne Brooks-Gunn23 document that the black-white test score gap is large for 3- and 4-year-old children. Using the Children of the NLSY79 (CNLSY) survey, a sample of children of the mothers in the 1979 National Longitudinal Survey of Youth data, a variety of studies show that even after controlling for many variables such as individual, family, and neighborhood characteristics, the black-white test score gap is still sizable.24,25 These studies also document that there are large black-white differences in family environments. Ronald Ferguson26 summarizes this literature and presents evidence that black children come from much poorer and less educated families than white children, and they are also more likely to grow up in single-parent households. Studies summarized by Ferguson27 find that the achievement gap is high even for blacks and whites attending high-quality suburban schools.28 The common finding across these studies is that the black-white gap in test scores is large and that it persists even after one controls for family background variables. Children of different racial and ethnic groups grow up in strikingly different environments.29 Even after accounting for these environmental factors in a correlational sense, substantial test score gaps remain. Furthermore, these gaps tend to widen with age and schooling: black children show slower measured ability growth with schooling or age than do white children. This paper presents additional evidence from the children of the persons interviewed in the CNLSY. 
We have also examined the Early Childhood Longitudinal Survey (ECLS) analyzed by Ferguson30 and Roland Fryer and Steven Levitt31 as well as the Children of the Panel Study of Income Dynamics (CPSID) and find similar patterns. We broaden previous analyses to include Hispanic-white differentials. Figure 2 shows the average percentile Peabody Individual Achievement Test (PIAT) Math32 scores for males in different age groups by race. (Results for females show the same patterns and are available in our Web appendix.33 For brevity, in this paper we focus only on the male results.) Racial and ethnic test score gaps are found as early as ages 5-6 (the earliest ages at which we can measure math scores in CNLSY data).34 On average, black 5- and 6-year-old boys score almost 18 percentile points below white 5- and 6-year-old boys (that is, if the average white is at the 50th percentile of the test score distribution, the average black is at the 32nd percentile of this distribution). The gap is a bit smaller (16 percentile points) but still substantial for Hispanics. These findings are duplicated for many other test scores and in other data sets and are not altered if we use median test scores instead of means. Furthermore, as shown in Figure 3, even when we use a test taken at younger ages, racial gaps in test scores can be found at ages 1-2.35 In general, test score gaps emerge early and persist through adulthood. FIGURE 2. Percentile PIAT Math score by race and age group for CNLSY79 males FIGURE 3. Average percentile Parts of the Body Test score by race and age for CNLSY79 males. For brevity, we focus on means and medians in this paper. However, Figures 1 and 4 illustrate that there is considerable overlap in the distribution of test scores across groups in recent generations. Many black and Hispanic children at ages 5-6 score higher on a math test than the average white child. Statements that we make about medians or means do not apply to all persons in these distributions.
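The percentile scores plotted in these figures are rank-based, computed separately within each sex-and-age cell (see the test descriptions in the notes). A minimal sketch of that conversion; the function name and tie-handling are illustrative assumptions:

```python
# Convert raw scores in one sex-by-age cell to percentile ranks:
# a child's percentile is the share of children in the cell scoring
# at or below that child (ties broken arbitrarily here).
import numpy as np

def percentile_ranks(scores):
    """Percentile rank (0-100] of each score within one sex-by-age cell."""
    scores = np.asarray(scores, dtype=float)
    ranks = scores.argsort().argsort()       # 0-based rank within the cell
    return 100.0 * (ranks + 1) / len(scores)

cell = [3, 1, 4, 1, 5]           # raw scores for one hypothetical cell
pct = percentile_ranks(cell)     # pct.tolist() == [60.0, 20.0, 80.0, 40.0, 100.0]
```

Because ranks carry no scale information, statements like "18 percentile points below" compare positions in the distribution, not raw score units.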
FIGURE 4. Density of percentile PIAT Math scores at ages 5-6 for CNLSY79 males Figure 2 shows that the black-white percentile PIAT Math score gap widens with age. By ages 13-14, the average black is ranked more than 22 percentiles below the average white. In fact, these gaps persist through adulthood. At 13-14, Hispanic boys are almost 16 points below the average white. When blacks and Hispanics enter the labor market, on average they have a much poorer set of cognitive skills than do whites. Thus, it is not surprising that their average labor market outcomes are so much worse. Furthermore, these skill gaps emerge very early in the life cycle, persist, and, if anything, widen for some groups. Initial conditions (early test scores) are very important since skill begets skill.36 The research surveyed by Pedro Carneiro and James Heckman37 suggests that enhanced cognitive stimulation at early ages is likely to produce lasting gains in achievement test scores in children from disadvantaged environments. If the interventions are early enough, they also appear to raise IQ scores, at least for girls.38 Home and family environments at early ages, and even the mother's behavior during pregnancy, play crucial roles in the child's development, and black children grow up in environments that are significantly more disadvantaged than those of white children. Figure 5 shows the distributions of long-term or "permanent" family income for blacks, whites, and Hispanics.39 Minority children are much more likely to grow up in low-income families than are white children. In our Web appendix,40 we show that there are also large differences in the level of education and cognitive ability (as measured by the AFQT) of mothers in different ethnic and racial groups (see also Figure 1).
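The "permanent income" measure described in note 39 (the average of nonmissing annual family income over child ages 0-18, each year discounted back to age 0 at 10 percent) is simple arithmetic; a sketch with an invented income profile:

```python
# Note-39-style permanent income: discount each observed annual family
# income at the child's age a by 1.1**a, then average over the
# nonmissing years (None marks a missing survey year).
def permanent_income(income_by_age, rate=0.10):
    """Mean of nonmissing annual incomes, each discounted to child age 0."""
    discounted = [y / (1.0 + rate) ** age
                  for age, y in enumerate(income_by_age)
                  if y is not None]
    return sum(discounted) / len(discounted)

incomes = [30_000.0] * 19        # ages 0-18
incomes[5] = incomes[11] = None  # two missing survey years
perm = permanent_income(incomes)
```

Because later years are discounted, perm falls below the 30,000 annual figure even with a flat income path; the measure smooths transitory income shocks across the child's entire upbringing.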
Maternal AFQT score is a major predictor of children's test scores.41 Figure 6 documents that white mothers are much more likely to read to their children at young ages than are minority mothers, and we obtain similar results at other ages.42 Using this reading variable and other variables in CNLSY such as the number of books, magazines, toys, and musical recordings, family activities (eating together, outings), methods of discipline and parenting, learning at home, television-watching habits, parental expectations for the child (chores, time use), and home cleanliness and safety, we can construct an index of cognitive and emotional stimulation: the home score. This index is always higher for whites than for minorities.43 The Web appendix also shows that blacks are more likely than whites to grow up in single-parent homes. Hispanics are less likely than blacks to grow up in a single-parent home, although they are much more likely to do so than are whites. FIGURE 5. Density of log permanent income for CNLSY79 males and females FIGURE 6. How often mother reads to child at age 2 by race and sex for CNLSY79 males and females. Each bar represents the number of people who report falling in a particular reading frequency cell divided by the total number of people in their race and sex group. Even after controlling for numerous environmental and family background factors, racial and ethnic test score gaps remain at ages 3-4 for most tests and for virtually all the tests at later ages. Figure 7 shows that, even after adjusting for measures of family background,44 the black-white gap in percentile PIAT Math scores at ages 5-6 is almost 8 percentile points and at ages 13-14 is close to 11 percentile points. Hispanic-white differentials are reduced more by such adjustments, falling to 7 points at ages 5-6 and to 4 points at ages 13-14.
For some tests, differentials frequently are positive or statistically insignificant.45 Measured home and family environments play an important role in the formation of these skills, although they are not the whole story.46 FIGURE 7. Adjusted percentile PIAT Math score by race and age group for CNLSY79 males. Early test scores for blacks and Hispanics are similar, although Hispanics often perform slightly better. Figure 2 shows that for the PIAT Math score, the Hispanic-black gap is about 2 percentile points.47 This is much smaller than either the black-white or the Hispanic-white gap. For the PIAT Math, the black-white gap widens dramatically, especially at later ages, but the Hispanic-white gap does not change substantially with age. For other tests, even when there is some widening of the Hispanic-white gap with age, it tends to be smaller than the widening in the black-white gap in test scores. In particular, when we look at the AFQT scores displayed in Figure 1, which are measured using individuals at ages 16-23, Hispanics clearly have higher scores than do blacks. In contrast, Figure 4 shows a strong similarity between the math scores of blacks and Hispanics at ages 5-6, although there are other tests at which, even at these early ages, Hispanics perform substantially better than blacks. When we control for the effects of home and family environments on test scores, the Hispanic-white test score gap either decreases or is constant over time, while the black-white test score gap tends to widen with age. 22 Jencks & Phillips, supra note 20. 23 Greg Duncan & Jeanne Brooks-Gunn, Consequences of Growing up Poor (1997). 24 In a similar study based on the Early Childhood Longitudinal Survey (ECLS), Roland Fryer & Steven Levitt, Understanding the Black-White Test Score Gap in the First Two Years of School, 86 Rev. Econ. Stat.
447 (2004), eliminates the black-white test score gap in math and reading for children at the time they are entering kindergarten, although not in subsequent years. However, the raw test score gaps at ages 3-4 are much smaller in ECLS than in CNLSY and other data sets that have been used to study this issue, so their results are anomalous in the context of a larger literature. 25 For a description of CNLSY and NLSY79, see Bureau of Labor Statistics, NLS Handbook, 2001 (2001). 26 Ronald Ferguson, Why America's Black-White School Achievement Gap Persists (unpublished manuscript, Harvard Univ. 2002). 27 Ronald Ferguson, What Doesn't Meet the Eye: Understanding and Addressing Racial Disparities in High Achieving Suburban Schools (Special Ed., Policy Issues Rep. 2002). 28 This is commonly referred to as the "Shaker Heights study," although it analyzed many other similar neighborhoods. 29 See also the discussion in David J. Armor, Maximizing Intelligence (2003). 30 Ferguson, supra note 26. 31 Fryer & Levitt, supra note 24. 32 Peabody Individual Achievement Test in Mathematics (PIAT Math) measures the child's attainment in mathematics as taught in mainstream education. It consists of 84 multiple-choice questions of increasing difficulty, beginning with recognizing numerals and progressing to geometry and trigonometry. The percentile score was calculated separately for each sex at each age. 33 Note 9 supra. 34 Instead of using raw scores or standardized scores, we choose to use ranks, or percentiles, since test score scales have no intrinsic meaning. Our results are not sensitive to this procedure. 35 This is not always the case for women, as shown in our Web appendix (supra note 9). The Parts of the Body Test attempts to measure the young child's receptive vocabulary knowledge of orally presented words as a means of estimating intellectual development. The interviewer names each of 10 body parts and asks the child to point to that part of the body.
The score is computed by summing the number of correct responses. The percentile score was calculated separately for each sex at each age. 36 See James J. Heckman, Policies to Foster Human Capital, 54 Res. Econ. 3 (2000). 37 Carneiro & Heckman, supra note 21. 38 See Frances Campbell et al., Early Childhood Education: Young Adult Outcomes from the Abecedarian Project, 6 Applied Developmental Sci. 42 (2002). 39 Values of permanent income are constructed by taking the average of all nonmissing values of annual family income at ages 0-18 discounted to child's age 0 using a 10 percent discount rate. 40 Note 9 supra. 41 For example, the correlation between percentile PIAT math score and age-corrected maternal AFQT is .4. 42 See the results for all ages in our Web appendix, supra note 9. 43 See our Web appendix (id.), where we document that both cognitive and emotional stimulation indexes are always higher for whites than for blacks at all ages. 44 Scores are adjusted by permanent family income, mother's education, and age-corrected AFQT and home scores. "Adjusted" indicates that we equalized the family background characteristics across all race groups by setting them at the mean to purge the effect of disparities in family environments. 45 In our Web appendix (supra note 9), tables 1A and 1B report that even after controlling for different measures of home environment and child stimulation, the black-white test score gap persists, even though it drops considerably. Results for other tests and other samples can be found in our Web appendix. Even though for some test scores early black-white test score gaps can be eliminated once we control for a large number of characteristics, it is harder to eliminate them at later ages. In the analysis presented here, the most important variable in reducing the test score gap is the mother's cognitive ability, as measured by the AFQT.
46 However, the home score includes variables such as the number of books, which are clearly choice variables and likely to cause problems in this regression. The variables with the largest effect on the minority-white test score gap are maternal AFQT and raw home score. 47 The test score is measured in percentile rank. The black-white gap is slightly below 18, while the Hispanic-white gap is slightly below 16. This means that the black-Hispanic gap should be around 2. III. THE STEREOTYPE THREAT The fact that substantial racial and ethnic test score gaps open up early in the life cycle of children casts doubt on the empirical importance of the "stereotype threat." It is now fashionable in some circles to attribute gaps in black test scores to racial consciousness on the part of black test takers stemming from the way test scores are used in public discourse to describe minorities.48 The claim is that blacks perform below their true abilities on standardized tests when a stereotype threat is present. The empirical importance of the stereotype threat in accounting for test score differentials has been greatly overstated in the popular literature.49 No serious empirical scholar assigns any quantitative importance to stereotype threat effects as a major determinant of test score gaps. Stereotype threats could not have been important when blacks took the first IQ tests at the beginning of the twentieth century, which documented the racial differentials that gave rise to the stereotype. Yet racial IQ gaps are comparable across time.50 Young children, like the ones studied in this paper, are unlikely to have the heightened racial consciousness about tests and their social significance of the sort claimed to be found by Claude Steele and Joshua Aronson51 in college students at a few elite universities. Moreover, sizable gaps are found for young Hispanic males, a group for which the stereotype threat remains to be investigated.
Additional evidence on the unimportance of stereotype threat is presented in Table 2.52 According to the stereotype threat literature, minority test scores understate true ability. If stereotyping affects the test score gap differently across ability levels, the effect of a unit of ability on wages for a black should be different than it is for a white. If the understatement is uniform across all ability levels, the coefficient on a dummy variable for race is overstated in a log wage regression (that is, measured discrimination is understated). If the stereotype threat operates when minorities take the AFQT, their scores should have a different incremental effect on wages than majority AFQT scores.53 We test this hypothesis using the empirical model in Table 2. We estimate the effect of black and Hispanic AFQT scores relative to the effect of white AFQT scores on log wages as extracted from the NLSY79. This amounts to testing for racial AFQT interactions in a log wage equation. While there is some (weak) evidence that black scores have a larger effect on log wages than white scores, the black-AFQT interaction coefficients are small in magnitude and imprecisely determined. For Hispanics, the estimated AFQT interaction coefficients are negative and, again, not precisely determined. In our Web appendix, we also graph the mean log wage by AFQT decile by race. There is no particular pattern of convergence or divergence across ability levels when evaluated over common supports. TABLE 2 Pooled Log Wage Regressions for NLSY Males, 1990–2000 The stereotype literature substitutes wishful thinking for substantial evidence. There is no evidence that it accounts for an important fraction of minority-white test score gaps or that test scores are not good measures of productivity.54 48 Steele & Aronson, supra note 17. 49 See the analysis in Sackett, Hardison, & Cullen, supra note 18. 
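The interaction test described above amounts to regressing log wages on the test score, race dummies, and race × AFQT terms and checking whether the interaction coefficients differ from zero. A minimal sketch on simulated data; all coefficients, shares, and sample sizes are illustrative, not the NLSY79 estimates:

```python
import numpy as np

# Simulated data in which the return to ability is the SAME for all
# groups (parameter values are made up for illustration).
rng = np.random.default_rng(0)
n = 5000
black = rng.binomial(1, 0.3, n)
hispanic = (1 - black) * rng.binomial(1, 0.3, n)
afqt = rng.normal(0.0, 1.0, n)                  # standardized test score
log_wage = (2.0 + 0.20 * afqt - 0.10 * black - 0.05 * hispanic
            + rng.normal(0.0, 0.3, n))

# OLS of log wages on AFQT, race dummies, and race x AFQT interactions.
X = np.column_stack([np.ones(n), afqt, black, hispanic,
                     black * afqt, hispanic * afqt])
beta, *_ = np.linalg.lstsq(X, log_wage, rcond=None)
# beta[4] and beta[5] are the race x AFQT interaction coefficients;
# under a common return to ability they should be close to zero.
```

Small, imprecisely estimated interaction terms, as in Table 2, are what a common return to ability across groups would produce.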
50 Charles Murray, The Secular Increase in IQ and Longitudinal Changes in the Magnitude of the Black-White Difference: Evidence from the NLSY (paper presented at the Behavior Genetics Association Meeting, Vancouver 1999), reviews the evidence on the evolution of the black-white IQ gap. In the 1920s, a time when such tests were much more unreliable and black educational attainment much lower, the mean black-white difference was .86 standard deviations. The largest black-white difference appears in the 1960s, with a mean black-white difference of 1.28 standard deviations. The difference ranges from a low of .82 standard deviations in the 1930s to 1.12 standard deviations in the 1970s. However, none of the samples prior to 1960 are nationally representative, and the samples were often chosen so as to effectively bias the black mean upward. 51 Steele & Aronson, supra note 17. 52 See our Web appendix (supra note 9) for evidence on females. 53 Let Y = β0 + β1T + ε, where E(ε | T) = 0. The same equation governs black and white outcomes. The term T is the true test score, and T* is the test score under stereotype threat: T* = λ0 + λ1T + U, with λ0 < 0 (scores are understated). Suppose Cov(ε, U) = 0. Our Web appendix, supra note 9, shows that under random sampling, the coefficient on the test score for whites is β1 and for blacks is b = β1λ1σ²T/(λ1²σ²T + σ²U). Intercepts are β0 for whites and β0 + β1E(T) − b[λ0 + λ1E(T)] for blacks, where E(T) is the mean of T, σ²T is the variance of T, and σ²U is the variance of U. Thus, the intercepts for blacks are upward biased. The slope for blacks in general may be greater than or less than β1, depending on whether the gap widens with T or shrinks. When σ²U = 0 (U = 0) and λ1 = 1, the slopes are the same for blacks and whites, but the intercepts for blacks are upward biased. This method underestimates the amount of discrimination. 54 A circular version of the stereotype threat argument would claim that minorities also underperform at the workplace because of stereotype threat there, so using measured wages to capture productivity understates true black productivity. 
This form of the stereotype threat argument is irrefutable. All measures are contaminated. IV. THE DIFFERENTIAL EFFECT OF SCHOOLING ON TEST SCORES We have established that cognitive test scores are correlated with home and family environments and that test score gaps increase with age and schooling. The research of Karsten Hansen, James Heckman, and Kathleen Mullen55 and Heckman, Larenas, and Urzua56 shows that the AFQT scores used by Neal and Johnson57 are affected by the schooling attainment of individuals at the time they take the test. Therefore, one reason for the divergence of black and white test scores over time may be differential schooling attainments. Figure 8 shows the schooling completed at the test date for the six demographic groups in the age ranges of the NLSY used by Neal and Johnson. Blacks have completed (slightly) less schooling at the test date than whites but substantially more than Hispanics. FIGURE 8. Highest category of schooling completed at the test date by race, sex, and age for NLSY79 males and females born after 1961. Each bar represents the number of people who report falling in a particular schooling category divided by the total number of people in their race and sex group. Table 3 presents estimates of the effect of schooling at test date on AFQT scores for individuals in different demographic groups in the NLSY, using a version of the nonparametric method developed by Hansen, Heckman, and Mullen.58 Their method isolates the causal effect of schooling attained at the test date on test scores controlling for unobserved factors that lead to selective differences in schooling attainment. This table shows that the effect of schooling on test scores is much larger for whites and Hispanics than it is for blacks over most ranges of schooling. As a result, even though Hispanics have fewer years of completed schooling than blacks at the time they take the AFQT, on average Hispanics score better on the AFQT than do blacks. 
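The measurement model sketched in note 53 can be simulated to verify its central prediction: a uniform understatement of minority scores (in our notation, an observed score T* = λ0 + λ1·T + U with λ0 < 0, λ1 = 1, and U = 0) leaves the slope on the test score unchanged but biases the intercept upward, so measured discrimination is understated. A sketch with illustrative parameter values:

```python
import numpy as np

# Simulation of the note 53 measurement model (our notation and values).
rng = np.random.default_rng(1)
n = 200_000
b0, b1 = 1.0, 0.5                   # outcome equation Y = b0 + b1*T + e
lam0, lam1 = -0.4, 1.0              # uniform understatement, lam1 = 1
T = rng.normal(0.0, 1.0, n)         # true test score
Y = b0 + b1 * T + rng.normal(0.0, 0.2, n)
T_star = lam0 + lam1 * T            # U = 0: no extra noise under threat

# Regress Y on the observed (understated) score.
X = np.column_stack([np.ones(n), T_star])
(a_hat, b_hat), *_ = np.linalg.lstsq(X, Y, rcond=None)
# Slope is unchanged (b_hat ~ b1), but the intercept is biased upward:
# a_hat ~ b0 - b1*lam0 = 1.2 > b0, so the race intercept gap shrinks
# and measured discrimination is understated.
```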
TABLE 3 Effect of Years of Schooling on AFQT Scores for Individuals in NLSY79 There are different explanations for these findings. Carneiro and Heckman, Cunha and Heckman, and Cunha and colleagues59 suggest that one important feature of the learning process is complementarity and self-productivity between initial endowments of human capital and subsequent learning.60 Higher levels of human capital raise the productivity of learning.61 Since minorities and whites start school with very different initial conditions, their learning paths can diverge dramatically over time. A related explanation may be that blacks and nonblacks learn at different rates because blacks attend lower quality schools than whites.62 Janet Currie and Duncan Thomas63 show that test score gains of participants in the Head Start program tend to fade completely for blacks but not for whites. They suggest that one reason may be that blacks attend worse schools than whites, and therefore blacks are not able to maintain initial test score gains. Both early advantages and disadvantages as well as school quality are likely to be important factors in the human capital accumulation process. In light of the greater growth in Hispanic test scores, which parallels that of whites, explanations based on schooling quality are not entirely compelling. Hispanics start from similar initial disadvantages in family environments and face school and neighborhood environments similar to those faced by blacks.64 They also have early levels of test scores similar to those found in the black population.65 To analyze the consequences of correcting for different levels of schooling at the test date, we reanalyze the Neal-Johnson66 data using AFQT scores corrected for the race- or ethnicity-specific effect of schooling while equalizing the years of schooling attained at the date of the test across all racial/ethnic groups. The results of this adjustment are presented in Table 4. 
This adjustment is equivalent to replacing each individual's AFQT score by the score we would measure if he or she had stopped formal education after eighth grade.67 In other words, we use eighth-grade-adjusted AFQT scores for everyone. Since the effect of schooling on test scores is higher for whites than for blacks, and whites have more schooling than blacks at the date of the test, this adjustment reduces the test scores of whites much more than those of blacks. The black-white male wage gap is cut only in half (as opposed to 76 percent) when we use this new measure of skill, and a substantial unexplained residual remains. The adjustment has little effect on the Hispanic-white wage gap, but a wage gap for black women emerges when using the schooling-adjusted measure that did not appear in the original Neal-Johnson study. TABLE 4 Change in the Black-White Log Wage Gap Induced by Controlling for Schooling-Corrected AFQT Scores, 1990–2000 Adjusting for schooling at the date of the test reduces the test score gap. This evidence raises the larger question of what a premarket factor is. Neal and Johnson do not condition on schooling in explaining black-white wage gaps, arguing that schooling is affected by expectations of adverse market opportunities facing minorities and that conditioning on such a contaminated variable would spuriously reduce the estimated wage gap. We present direct evidence on this claim below. Their reasoning is not entirely coherent. If expectations of discrimination affect schooling, the very logic of their premarket argument suggests that they should control for the impact of schooling on test scores before using test scores to measure premarket factors. Neal and Johnson68 assume that schooling at the time the test is taken is not affected by expectations of discrimination in the market, while later schooling is. This distinction is arbitrary. 
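As a stylized stand-in for the nonparametric Hansen-Heckman-Mullen correction, the eighth-grade adjustment can be illustrated with a linear, group-specific schooling effect. The per-grade effects below are hypothetical, not the Table 3 estimates:

```python
import numpy as np

def schooling_adjusted_afqt(afqt, schooling, effect_by_group, group):
    """Strip from each score the group-specific effect of schooling
    beyond grade 8. A linear stand-in for the nonparametric correction
    of Hansen, Heckman, and Mullen; all names here are illustrative."""
    gamma = np.asarray([effect_by_group[g] for g in group], dtype=float)
    return np.asarray(afqt, dtype=float) - gamma * (np.asarray(schooling) - 8)

# Hypothetical per-grade effects on the percentile score (NOT the
# paper's estimates): larger for whites and Hispanics than for blacks.
effects = {"white": 2.5, "black": 1.5, "hispanic": 2.4}
adj = schooling_adjusted_afqt([60, 40], [12, 12], effects, ["white", "black"])
# The white score falls by 2.5*4 = 10 points, the black score by only
# 1.5*4 = 6, so the adjustment narrows the measured white advantage.
```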
A deeper investigation of the expectation formation process and feedback is required. One practical conclusion with important implications for interpretation of the evidence is that the magnitude of the wage gap one can eliminate by performing a Neal-Johnson analysis depends on the age at the time the test is taken. We find that the earlier the test is taken, the smaller the unadjusted test score gap, and the larger the fraction of the wage gap that is unexplained by the residual. Figure 9 shows how adjusting measured ability for schooling at the time of the test at different levels of attained schooling affects the adjusted wage gap for black males. For example, the log wage gap that we obtain when using eleventh-grade test scores corresponds to that using an AFQT correction equal to 11. The later the grade at which we adjust the test score, the lower the estimated gap. This is because a test score gap opens up at later schooling levels, and hence adjustment reduces the gap by a larger amount at later schooling levels.69 FIGURE 9. Residual black-white log wage gap in 1991 by grade at which we evaluate schooling-corrected AFQT scores for NLSY79 males. Finally, we show that adjusting for "expectations-contaminated" completed schooling by entering it as a direct regressor in a log wage equation does not operate in the fashion conjectured by Neal and Johnson. Table 5 shows that when we adjust wage differences for completed schooling as well as schooling-adjusted AFQT, wage gaps widen relative to the simple adjustment. This runs contrary to the simple intuition that schooling embodies expectations of market discrimination, so conditioning on it will eliminate wage gaps.70 The deeper issue, not resolved in this paper or the literature, is what productivity factors to condition on in measuring discrimination. Schooling and measured ability are both valid candidate productivity variables. 
Conditioning on them singly or jointly and eliminating spurious endogeneity effects produces conceptually different measures of the wage gap, all of which answer distinct but economically interesting questions. Both variables may be affected by discrimination. Looking only at outcome equations, one cannot settle which variables are legitimate productivity characteristics and which are contaminated.71,72 Deleting potentially contaminated variables does not, in general, produce the conceptually desired measure of discrimination. TABLE 5 Change in the Black-White Log Wage Gap Induced by Controlling for Schooling-Corrected AFQT Scores and Highest Grade Completed, 1990–2000 Ours is a worst-case analysis for the Neal-Johnson study.73 If we assign all racial and ethnic schooling differences to expectations of discrimination in the labor market, the results for blacks are less sharp than Neal and Johnson claim. Yet even in the worst-case scenario, adjusting for ability corrected for schooling and schooling as a direct effect on wages substantially reduces minority-majority wage gaps over the unadjusted case. The evidence presented in Section II about the early emergence of ability differentials is reinforced by the early emergence of differential grade repetition gaps for minorities documented by Cameron and Heckman.74 Most of the schooling gap at the date of the test emerges in the early years at ages when child expectations about future discrimination are unlikely to be operative. One might argue that these early schooling and ability gaps are due to parental expectations of poor labor markets for minority children. We next examine data on child and parental expectations. 55 Hansen, Heckman, & Mullen, supra note 14. 56 Heckman, Larenas, & Urzua, supra note 16. 57 Neal & Johnson, supra note 6. 58 Hansen, Heckman, & Mullen, supra note 14. 
Heckman, Larenas, & Urzua, supra note 16, presents a more refined analysis of the racial/ethnic wage gap using the analysis of Hansen, Heckman, & Mullen that supports all of our main conclusions. See also the note to Table 3. 59 Carneiro & Heckman, supra note 21; Flavio Cunha & James Heckman, The Technology of Skill Formation (unpublished manuscript, Univ. Chicago 2004); Flavio Cunha et al., Interpreting the Evidence on Life Cycle Skill Formation, in Handbook of Education Economics (Finis Welch & Eric Hanushek eds., forthcoming 2005). 60 For example, see the model in Yoram Ben-Porath, The Production of Human Capital and the Life Cycle of Earnings, 75 J. Pol. Econ. 352 (1967). See also Cunha et al., supra note 59. 61 See the evidence in James Heckman, Lance Lochner, & Christopher Taber, Explaining Rising Wage Inequality: Explorations with a Dynamic General Equilibrium Model of Labor Earnings with Heterogeneous Agents, 1 Rev. Econ. Dynamics 1 (1998). 62 Cunha & Heckman, supra note 59, shows that complementarity implies that early human capital increases the productivity of later investments in human capital and that early investments that are not followed up by later investments in human capital are not productive. 63 Janet Currie & Duncan Thomas, School Quality and the Longer-Term Effects of Head Start, 35 J. Hum. Resources 755 (2000). 64 The evidence for the CNLSY is presented in our Web appendix (supra note 9). 65 Heckman, Larenas, & Urzua, supra note 16, presents a more formal analysis of the effect of schooling quality on test scores, showing that schooling inputs explain little of the differential growth in test scores among blacks, whites, and Hispanics. 66 Neal & Johnson, supra note 6. 67 However, the score is affected by attendance in kindergarten, 8 further years of schooling, and any school quality differentials in those years. 68 Neal & Johnson, supra note 6. 
69 The figure omits the results for the 16-and-over category because the low number of minorities makes correction of test scores to that level much less reliable than correction to the other schooling levels. The unadjusted line shows the black-white log wage gap we observe if we do not adjust the test score; it does not depend on the grade to which we are correcting the test score. The adjusted line shows the black-white log wage gap after we adjust for the AFQT scores corrected to different grades. In our Web appendix (supra note 9), we present the same analysis for females and Hispanics. 70 The simple intuition, however, can easily be shown to be wrong, so the evidence in these tables is not decisive on the presence of discrimination in the labor market. The basic idea is that if both schooling and the test score are correlated with an unmeasured discrimination component in the error term, the bias for the race dummy may be either positive or negative depending on the strength of the correlation among the contaminated variables and their correlation with the error term. See the discussion in our Web appendix (id.), where we show that if both schooling and test score are correlated with factors leading to discrimination in earnings, the estimated discrimination effect may be upward or downward biased by adding schooling as a regressor. 71 See Robert Bornholz & James J. Heckman, Measuring Disparate Impacts and Extending Disparate Impact Doctrine to Organ Transplantation, 48 Persp. Biology & Med. S95 (2005). 72 As pointed out to us by an anonymous referee, another reason for excluding years of schooling from the log wage equation is that schooling overstates the amount of human capital black children receive relative to white children, say because of differential schooling quality. If this effect is strong enough, including years of schooling will overstate the racial wage differential. 
Table 3 shows that years of schooling for black children have less effect on human capital (the test score) than years of schooling for white children. However, Heckman, Larenas, & Urzua, supra note 16, shows that measured schooling quality accounts for little of the gap or the growth in the gap between blacks and whites. 73 Neal & Johnson, supra note 6. 74 Cameron & Heckman, supra note 11. V. THE ROLE OF EXPECTATIONS The argument that minority children perform worse on tests because they expect to be less well rewarded in the labor market than whites for the same test score or schooling level is implausible because expectations of labor market rewards are unlikely to affect the behavior of children as early as ages 3 or 4, when test score gaps are substantial across different ethnic and racial groups. The argument that minorities invest less in skills because both minority children and minority parents have low expectations about their performance in school and in the labor market has mixed empirical backing. Data on expectations are hard to find, and when they are available they are often difficult to interpret. For example, in the NLSY97, black 17- and 18-year-olds report that the probability of dying next year is 22 percent, while whites report a probability of dying of 16 percent.75 Both numbers are absurdly high. Minorities usually report higher expectations than whites of committing a crime, being incarcerated, and being dead next year, and these adverse expectations may reduce their investment in human capital. Expectations reported by parents and children for the adolescent years for a variety of outcomes are given in our Web appendix.76 Schooling expectations measured in the late teenage years are very similar for minorities and whites. They are slightly lower for Hispanics. Table 6 reports the mean expected probability of being enrolled in school next year for black, white, and Hispanic 17- and 18-year-old males. 
Among those individuals enrolled in 1997, on average whites expect to be enrolled next year with 95.7 percent probability. Blacks expect that they will be enrolled next year with a 93.6 percent probability. Hispanics expect to be enrolled with a 91.5 percent probability. If expectations about the labor market are adverse for minorities, they should translate into adverse expectations for the child's education. Yet these data do not reveal this. Moreover, all groups substantially overestimate actual enrollment probabilities. The difference in expectations between blacks and whites is very small and is less than half the difference in actual (realized) enrollment probabilities (81.9 percent for whites versus 76.4 percent for blacks). The gap is wider for Hispanics. Table 7 reports parental schooling expectations for white, black, and Hispanic males for the same individuals used to compute the numbers in Table 6. It shows that, conditional on being enrolled in 1997 (the year the expectation question is asked), black parents expect their sons to be enrolled next year with a 90.9 percent probability, while for whites this expectation is 95.4 percent. For Hispanics, this number is lower (88.5 percent) but still substantial. Parents overestimate enrollment probabilities for their sons, but black parents have lower expectations than white parents. For females, the racial and ethnic differences in parental expectations are smaller than those for males.77 TABLE 6 Juvenile Expectations about School Enrollment in 1998: NLSY97 Males TABLE 7 Parental Expectations about Youth School Enrollment in 1998: NLSY97 Males For expectations measured at earlier ages the story is dramatically different. 
Figures 10 and 11 show that, for the CNLSY group, both black and Hispanic children and their parents have more pessimistic expectations about schooling than do white children, and more pessimistic expectations may lead to lower investments in skills, less effort in schooling, and lower levels of ability. These patterns are also found in the CPSID and ECLS groups.78 FIGURE 10. Child's own expected educational level at age 10 by race and sex for CNLSY79 males and females. Each bar represents the number of people who report falling in a particular educational level cell divided by the total number of people in their race and sex group. FIGURE 11. Mother's expected educational level for the child at age 6 by race and sex for CNLSY79 males and females. Each bar represents the number of people who report falling in a particular educational level divided by the total number of people in their race and sex group. If the more pessimistic expectations of minorities are a result of perceived market discrimination, then lower levels of investment in children that translate into lower levels of ability and skill at later ages are attributable to market discrimination. Ability would not be a premarket factor. However, lower expectations for minorities may not be a result of discrimination but just a rational response to the fact that minorities do not do as well in school as whites. This may be due to environmental factors unrelated to expectations of discrimination in the labor market. Whether this phenomenon itself is a result of discrimination is an open question. Expectation formation models are very complex and often lead to multiple equilibria and therefore are difficult to test empirically. However, the evidence reported here does not provide much support for the claim that the ability measure used by Neal and Johnson79 is substantially contaminated by expectational effects. 
75 See our Web appendix, table 3, for evidence on expectations from NLSY97 (supra note 9). 76 Id. 77 Id. 78 For CNLSY teenagers, expectations across racial groups seem to converge at later ages. See our Web appendix (id.). 79 Neal & Johnson, supra note 6. VI. THE EVIDENCE ON NONCOGNITIVE SKILLS Controlling for scholastic ability in accounting for minority-majority wage gaps captures only part of the endowment differences between groups but receives most of the emphasis in the literature on black-white gaps in wages. An emerging body of evidence, summarized by Samuel Bowles, Herbert Gintis, and Melissa Osborne,80 Carneiro and Heckman,81 and Heckman, Stixrud, and Urzua,82 documents that noncognitive skills (motivation, self-control, time preference, and social skills) are important in explaining socioeconomic success.83 The CNLSY has life cycle measures of noncognitive skills. Mothers are asked age-specific questions about the antisocial behavior of their children such as aggressiveness or violent behavior, cheating or lying, disobedience, peer conflicts, and social withdrawal. The answers to these questions are grouped in different indices.84 Figure 12 shows that there are important racial and ethnic gaps in the antisocial behavior index that emerge in early childhood. The higher the score, the worse the behavior. By ages 5–6, the average black is roughly 10 percentile points above the average white in the distribution of this score.85 Figure 13, where we adjust the gaps by permanent family income, mother's education, and age-corrected AFQT and home scores, shows that these adjustments produce large reductions in the gaps.86 FIGURE 12. Average percentile antisocial behavior score by race and age group for CNLSY79 males. FIGURE 13. Adjusted percentile antisocial behavior score by race and age group for CNLSY79 males. Section II documents that minority and white children face substantial differences in family and home environments while growing up. 
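The percentile scoring used for these behavior indices (raw problem counts ranked separately within each sex-age cell, as note 84 describes) can be sketched as follows; the column names and toy data are ours, not the CNLSY layout:

```python
import pandas as pd

def percentile_score(df, raw_col="raw", by=("sex", "age")):
    """Percentile rank (0-100) of a raw problem count, computed
    separately for each sex at each age; higher means more reported
    behavior problems. Names and data are illustrative."""
    return df.groupby(list(by))[raw_col].rank(pct=True) * 100

toy = pd.DataFrame({
    "sex": ["m", "m", "m", "f", "f"],
    "age": [5, 5, 5, 5, 5],
    "raw": [0, 3, 6, 2, 4],   # sums of dichotomized problem items
})
toy["pct"] = percentile_score(toy)
```

Ranking within sex-age cells is what makes scores comparable across groups at a given age, as in the cognitive-score construction of note 35.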
The evidence presented in this section shows that these early environmental differences account (in a correlational sense) for most of the minority-white gap in noncognitive skills, as measured in the CNLSY. Carneiro and Heckman87 document that noncognitive skills are more malleable than cognitive skills and are more easily shaped by interventions. More motivated children achieve more and have higher measured achievement test scores than less motivated children of the same ability. The largest effects of interventions in childhood and adolescence are on noncognitive skills that promote learning and integration into the larger society. Improvements in these skills produce better labor market outcomes and less engagement in criminal activities and other risky behavior. Promotion of noncognitive skill is an avenue for policy that warrants much greater attention. 80 Samuel Bowles, Herbert Gintis, & Melissa Osborne, The Determinants of Earnings: A Behavioral Approach, 39 J. Econ. Literature 1137 (2001). 81 Carneiro & Heckman, supra note 21. 82 James Heckman, Jora Stixrud, & Sergio Urzua, Evidence on the Importance of Cognitive and Noncognitive Skills on Social and Economic Outcomes (unpublished manuscript, Univ. Chicago 2004). 83 Some of the best evidence for the importance of noncognitive skills in the labor market is from the General Education Development (GED) program. This program examines high school dropouts to certify that they are equivalent to high school graduates. In its own terms, the GED program is successful. James J. Heckman & Yona Rubinstein, The Importance of Noncognitive Skills: Lessons from the GED Testing Program, 91 Am. Econ. Rev. 145 (2001), shows that GED recipients and ordinary high school graduates who do not go on to college have the same distribution of AFQT scores (the test graphed in Figure 1). Yet GED recipients earn the wages of high school dropouts with the same number of years of completed schooling. 
They are more likely to quit their jobs, engage in fighting or petty crime, or be discharged from the military than are high school graduates who do not go on to college or other high school dropouts. Intelligence alone is not sufficient for socioeconomic success. Minority-white gaps in noncognitive skills open up early and widen over the life cycle. 84 The children's mothers were asked 28 age-specific questions about frequency, range, and type of specific behavior problems that children ages 4 and over may have exhibited in the previous 3 months. Factor analysis was used to determine six clusters of questions. The responses for each cluster were then dichotomized and summed to produce a raw score. The percentile score was then calculated separately for each sex at each age from the raw score. A higher percentile score indicated a higher incidence of problems. The antisocial behavior index we use in this paper consists of measures of cheating and telling lies, bullying and cruelty to others, not feeling sorry for misbehaving, breaking things deliberately (if age is less than 12), disobedience at school (if age is greater than 5), and trouble getting along with teachers (if age is greater than 5). 85 In our Web appendix (supra note 9), we show that these differences are statistically strong. Once we control for family and home environments, gaps in most behavioral indices disappear. 86 See our Web appendix, tables 2A and 2B, for the effect of adjusting for other environmental characteristics on the antisocial behavior score (id.). 87 Carneiro & Heckman, supra note 21. VII. SUMMARY AND CONCLUSION This paper discusses the sources of wage gaps between minorities and whites. For all minorities but black males, adjusting for the ability that minorities bring to the market eliminates wage gaps. The major source of economic disparity by race and ethnicity in U.S. labor markets is in endowments, not in payments to endowments. 
This evidence suggests that strengthened civil rights and affirmative action policies targeted at the labor market are unlikely to have much effect on racial and ethnic wage gaps, except possibly for those specifically targeted toward black males.88 Policies that foster endowments have much greater promise. On the other hand, this paper does not provide any empirical evidence on whether the existing edifice of civil rights and affirmative action legislation should be abolished. All of our evidence on wages is for an environment in which affirmative action laws and regulations are in place. Minority deficits in cognitive and noncognitive skills emerge early and then widen. Unequal schooling, neighborhoods, and peers may account for this differential growth in skills, but the main story in the data is not about growth rates but rather about the size of early deficits. Hispanic children start with cognitive and noncognitive deficits similar to those of black children. They also grow up in similarly disadvantaged environments and are likely to attend schools of similar quality. Hispanics complete much less schooling than blacks. Nevertheless, the ability growth by years of schooling is much higher for Hispanics than for blacks. By the time they reach adulthood, Hispanics have significantly higher test scores than do blacks. Conditional on test scores, there is no evidence of an important Hispanic-white wage gap. Our analysis of the Hispanic data illuminates the traditional study of black-white differences and casts doubt on many conventional explanations of these differences since they do not apply to Hispanics, who also suffer from many of the same disadvantages. The failure of the Hispanic-white gap to widen with schooling or age casts doubt on the claim that poor schools and bad neighborhoods are the reasons for the slow growth rate of black test scores. 
Deficits in noncognitive skills can be explained (in a statistical sense) by adverse early environments; deficits in cognitive skills are less easily eliminated by the same factors. We have reexamined the Neal-Johnson89 finding that endowments acquired before people enter the labor market explain most of the minority-majority wage gap. Neal and Johnson use an ability test taken in the teenage years as a measure of endowment unaffected by discrimination. They omit schooling in adjusting for racial and ethnic wage gaps, arguing that schooling choices are potentially contaminated by expectations of labor market discrimination. Yet they do not adjust their measure of ability by the schooling attained at the date of the test, which would be the appropriate correction if their argument were correct. Adjusting wage gaps by both completed schooling and the schooling-adjusted test widens the wage gaps for all groups. This adjustment effect is especially strong for blacks. Nonetheless, half of the black-white male wage gap is still explained by the adjusted score. At issue is how much of the majority-minority difference in schooling at the date of the test is due to expectations of labor market discrimination and how much is due to adverse early environments. While this paper does not settle this question definitively, test score gaps emerge early and are more plausibly linked to adverse early environments. The lion's share of the ability gaps at the date of the test emerges very early, before children can have clear expectations about their labor market prospects. The analysis of Sackett, Hardison, and Cullen90 and the emergence of test score gaps in young children cast serious doubt on the importance of stereotype threats in accounting for poorer black test scores. It is implausible that young minority test takers have the social consciousness assumed in the stereotype literature. 
If true, black skills are understated by the tests, and the market return to ability should differ between blacks and whites. We find no evidence of such an effect. Gaps in test scores of the magnitude found in recent studies appeared in the earliest tests developed at the beginning of the twentieth century, before the results of testing were disseminated and a stereotype threat could have been "in the air." The recent emphasis on the stereotype threat as a basis for black-white test score differences ignores the evidence that tests are predictive of schooling attainment and market wages. It diverts attention away from the emergence of important skill gaps at early ages, which should be a target of public policy.

Effective social policy designed to eliminate racial and ethnic inequality for most minorities should focus on eliminating skill gaps, not on discrimination in the workplace of the early twenty-first century. Interventions targeted at adults are much less effective and do not compensate for early deficits. Early interventions aimed at young children hold much greater promise than strengthened legal activism in the workplace.

88 However, even for black males, a substantial fraction of the racial wage gap can be attributed to differences in skill.
89 Neal & Johnson, supra note 6.
90 Sackett, Hardison, & Cullen, supra note 18.

BIBLIOGRAPHY

Altonji, Joseph, and Blank, Rebecca. "Gender and Race in the Labor Market." In Handbook of Labor Economics, edited by Orley Ashenfelter and David E. Card, vol. 3C, pp. 3143–3259. New York: Elsevier Science Press, 1999.
Armor, David J. Maximizing Intelligence. New Brunswick, N.J.: Transaction Publishers, 2003.
Ben-Porath, Yoram. "The Production of Human Capital and the Life Cycle of Earnings." Journal of Political Economy 75 (1967): 352–65.
Black, Sandra E., and Sufi, Amir. "Who Goes to College? Differential Enrollment by Race and Family Background." Working Paper No. w9310. 
Cambridge, Mass.: National Bureau of Economic Research, 2002.
Bornholz, Robert, and Heckman, James J. "Measuring Disparate Impacts and Extending Disparate-Impact Doctrine to Organ Transplantation." Perspectives in Biology and Medicine 48 (2005): S95–S122.
Bowles, Samuel; Gintis, Herbert; and Osborne, Melissa. "The Determinants of Earnings: A Behavioral Approach." Journal of Economic Literature 39 (2001): 1137–76.
Bureau of Labor Statistics. NLS Handbook 2001. Washington, D.C.: U.S. Department of Labor, 2001.
Cameron, Stephen V., and Heckman, James J. "The Dynamics of Educational Attainment for Black, Hispanic, and White Males." Journal of Political Economy 109 (2001): 455–99.
Campbell, Frances; Ramey, Craig; Pungello, Elizabeth; Sparling, Joseph; and Miller-Johnson, Shari. "Early Childhood Education: Young Adult Outcomes from the Abecedarian Project." Applied Developmental Science 6 (2002): 42–57.
Carneiro, Pedro, and Heckman, James J. "Human Capital Policy." In Inequality in America: What Role for Human Capital Policies? edited by James J. Heckman and Alan B. Krueger, pp. 77–240. Cambridge, Mass.: MIT Press, 2003.
Cunha, Flavio, and Heckman, James J. "The Technology of Skill Formation." Paper presented at the Society of Economic Dynamics and Control, Florence, 2004, and at the American Economic Association annual meeting, San Diego, 2004.
Cunha, Flavio; Heckman, James J.; Lochner, Lance; and Masterov, Dimitriy V. "Interpreting the Evidence on Life Cycle Skill Formation." In Handbook of Education Economics, edited by Finis Welch and Eric Hanushek. New York: North Holland, forthcoming 2005.
Currie, Janet, and Thomas, Duncan. "School Quality and the Longer-Term Effects of Head Start." Journal of Human Resources 35 (2000): 755–74.
Donohue, John J., and Heckman, James J. "Continuous versus Episodic Change: The Impact of Civil Rights Policy on the Economic Status of Blacks." Journal of Economic Literature 29 (1991): 1603–43.
Duncan, Greg, and Brooks-Gunn, Jeanne. Consequences of Growing up Poor. New York: Russell Sage, 1997.
Ferguson, Ronald. "What Doesn't Meet the Eye: Understanding and Addressing Racial Disparities in High Achieving Suburban Schools." Special Edition, Policy Issues Report. Naperville, Ill.: North Central Regional Educational Laboratory, 2002.
Ferguson, Ronald. "Why America's Black-White School Achievement Gap Persists." Unpublished manuscript. Cambridge, Mass.: Harvard University, 2002.
Fryer, Roland, and Levitt, Steven. "Understanding the Black-White Test Score Gap in the First Two Years of School." Review of Economics and Statistics 86 (2004): 447–64.
Hansen, Karsten; Heckman, James J.; and Mullen, Kathleen. "The Effect of Schooling and Ability on Achievement Test Scores." Journal of Econometrics 121 (2004): 39–98.
Heckman, James J. "Policies to Foster Human Capital." Research in Economics 54 (2000): 3–56.
Heckman, James J.; Larenas, Maria Isabel; and Urzua, Sergio. "Accounting for the Effect of Schooling and Abilities in the Analysis of Racial and Ethnic Disparities in Achievement Test Scores." Working paper. Chicago: University of Chicago, Department of Economics, 2004.
Heckman, James J.; Lochner, Lance; and Taber, Christopher. "Explaining Rising Wage Inequality: Explorations with a Dynamic General Equilibrium Model of Labor Earnings with Heterogeneous Agents." Review of Economic Dynamics 1 (1998): 1–58.
Heckman, James J., and Rubinstein, Yona. "The Importance of Noncognitive Skills: Lessons from the GED Testing Program." American Economic Review 91 (2001): 145–49.
Heckman, James J.; Stixrud, Jora; and Urzua, Sergio. "Evidence on the Importance of Cognitive and Noncognitive Skills on Social and Economic Outcomes." Unpublished manuscript. Chicago: University of Chicago, 2004.
Heckman, James J., and Todd, Petra. 
"Understanding the Contribution of Legislation, Social Activism, Markets and Choice to the Economic Progress of African Americans in the Twentieth Century." Unpublished manuscript. Chicago: American Bar Foundation, 2001.
Herrnstein, Richard, and Murray, Charles. The Bell Curve: Intelligence and Class Structure in American Life. New York: Free Press, 1994.
Jencks, Christopher, and Phillips, Meredith. The Black-White Test Score Gap. Washington, D.C.: Brookings Institution, 1998.
Murray, Charles. "The Secular Increase in IQ and Longitudinal Changes in the Magnitude of the Black-White Difference: Evidence from the NLSY." Paper presented at the Behavior Genetics Association meeting, Vancouver, 1999.
Neal, Derek. "The Measured Black-White Wage Gap among Women Is Too Small." Journal of Political Economy 112 (2004): S1–S28.
Neal, Derek, and Johnson, William. "The Role of Premarket Factors in Black-White Wage Differences." Journal of Political Economy 104 (1996): 869–95.
Sackett, Paul; Hardison, Chaitra; and Cullen, Michael. "On Interpreting Stereotype Threat as Accounting for African American–White Differences in Cognitive Tests." American Psychologist 59 (2004): 7–13.
Steele, Claude, and Aronson, Joshua. "Stereotype Threat and the Test Performance of Academically Successful African Americans." In The Black-White Test Score Gap, edited by Christopher Jencks and Meredith Phillips, pp. 401–27. Washington, D.C.: Brookings Institution, 1998.
Urzua, Sergio. "The Educational White-Black Gap: Evidence on Years of Schooling." Unpublished manuscript. Chicago: University of Chicago, Department of Economics, 2003. 
From checker at panix.com Fri May 13 18:28:58 2005 From: checker at panix.com (Premise Checker) Date: Fri, 13 May 2005 14:28:58 -0400 (EDT) Subject: [Paleopsych] eBay: Certificate of Ownership for the Entire Universe Message-ID: Certificate of Ownership for the Entire Universe eBay item 6530828170 (Ends May-17-05 08:29:31 PDT) http://cgi.ebay.com/ws/eBayISAPI.dll?ViewItem&category=4174&item=6530828170 Current bid: US $102.50 Time left: 4 days 1 hour 10-day listing, Ends May-17-05 08:29:31 PDT Start time: May-07-05 08:29:31 PDT History: 15 bids (US $1.00 starting bid) High bidder: tcieri (2) Item location: Randolph, New Jersey United States Ships to: Worldwide Shipping costs: Check item description and payment instructions or contact seller for details The item you are bidding on is a Certificate of Ownership for the Entire Universe. Apparently, my previous auction for the entire universe was in violation of eBay's regulations. This was quite frustrating, as it had approximately 1500 viewers in less than 24 hours and over 31 bids with over 8 days to go. I also had numerous comments which were often quite hilarious and which made the previous 24 hours some of the most interesting hours of my life. Since I was not perfectly clear last time around, I am re-listing my item with some slight tweaks. What you will receive is a beautiful 8 x 10 certificate with gold-leaf borders that I created in the privacy of my own studio. There is only one such Certificate in existence, and it will come to you signed and dated by myself, Paul N Grech. I am an artist from NJ, and this auction is part of a 9-day interactive art project where I decided two days ago that people really need to start accepting accountability for their lives. Since nobody as yet has claimed ownership of this wonderful universe, I came to the realization that it might as well be YOU. 
The certificate is my artistic decree that the universe is nothing but a cascade of ideas shared by billions of human beings, including yourself. With this certificate, you might be entitled to billions of stars and planets, a number of black holes, a bunch of elusive dark matter, and the collective unconscious of our unique yet infantile species. Seeing as how the human mind observes the universe, the mind is also responsible for defining and essentially creating reality. It is quite possible that there is no universe outside of your own consciousness. It is equally possible that there are zillions of universes occurring simultaneously in and around us as we speak. Since there is no way to know for sure, the winning bidder will receive only the certificate. How you wish to interpret your ownership from here on is entirely up to you. Upon receipt of the certificate, you may shape and hone the universe as you see fit. However, I encourage you to think and act carefully; the tone of your every thought, emotion, action, and lack of action will have consequences that affect everything around you. If you find that the universe is engaged in petulance and/or is mistreating you, then I highly recommend you shift your attitude toward one that is conducive to the well-being of everyone. We are all counting on you. Shipping of the certificate is $5.00. I can provide pictures later as I wanted to re-list as quickly as possible so as not to disappoint anyone who had previously bid. The bidding will start at one dollar because I really want you to take charge. Disclaimer: This certificate shall NOT interfere with any contracts, agreements, or conditions of material, intellectual, or spiritual ownership as they currently exist on planet earth. _________________________________________________________________ On May-09-05 at 11:06:48 PDT, seller added the following information: Somebody who recently purchased a star was threatening to sue me! 
Show All Questions http://contact.ebay.com/ws/eBayISAPI.dll?ShowAllQuestions&requested=the.murr&iid=6530828170&frm=284&redirect=0&ShowASQAlways=1&SSPageName=PageAskSellerQuestion_VI Question & Answer Answered On Q: Okay--so the universe exists only in our individual minds. Do you mean to tell me you don't look both ways before crossing a highway? Do you just imagine the vehicles out of existence because they interfere with your reality? May-13-05 A: Not sure I understand your question. You should also know that, just because I made this silly claim about the universe being an intellectual construct does not mean that it is true or that I buy into it. Q: I am concerned about the price. The certificate may have artistic value, but what it represents is of questionable value. Given that the universe is expanding indefinitely, it is for all practical purposes infinite, and given that the average cubic yard contains a few atoms of hydrogen gas, which is worthless, this means that the value of the entire universe is zero, when you multiply zero by infinity. Your product is overpriced. May-13-05 A: Well... the "matter" comprising your body is mostly empty space, but that wouldn't make you worthless, would it? The whole is always greater than the sum of its parts. As for overpricing... the net energy of the universe is zero, and so if you interpret value as a measure of total energy then yes, this auction is a total rip-off! Q: OK, I'm interested, but I need to know exactly how this item works -- I need a basic run-down of all the physics (i.e., the grand unifying theory). Plus what happens behind an event horizon. Are there any other forms of life on it? I mean I don't want it completely covered in mould. May-13-05 A: I am not a physicist, but I don't believe there is an accepted grand unified theory at this time. As for what happens BEHIND an event horizon... next time I'm in the neighborhood I will check for you. 
If I had to guess, though, I'd speculate that the term "behind" is entirely relative, because it would require directionalities (such as front/back, now/then, here/there), none of which is relevant in the singularity of a black hole. Maybe there's even a portal to another universe where lawyers and taxes don't exist. Q: Have you lost your mind? May-13-05 A: More or less... but probably more "less" than "more". Q: Hi. I'm wondering though--if the universe(s) exist as part of the imaginative or world-building or world-observing-and-integrative faculty of one's (and others', assuming there are others) minds, wouldn't my ownership of the universe in the first case already be effective, and in the second, involve ownership of (part of--but a very important part of) another human being? Isn't this contrary to something--some amendment to the constitution, or general social morality, etc.? So aren't I either buying something I already "own" (or a certification of ownership entitling me to ownership of it)--or something I'm not allowed to own? And therefore wasn't ebay completely justified in taking down this odious slavery-inducing or self-selling type of auction??? Hunh? thanks, G May-11-05 A: First off, I don't claim to have the answer(s); I merely offer possibilities. But can you honestly tell me that YOU know what's going on, or that you have the answers? As for the latter portion of your question, the answer is no: the auction will NOT violate anyone's rights or privileges because: a) if you are referring to rights guaranteed by either the US Constitution or the Declaration of Independence, both of these documents were conceived and written on earth, and I specifically stated that my auction shall not interfere with any contracts or agreements as they currently exist on this planet. Furthermore, these are American documents, and so the portion of the planet that enjoys the rights afforded by these documents is in the minority. 
b) in order for someones rights to be violated I would have to assume that someone else exists, and that would preclude one of the aspects of the argument which suggests the universe is entirely a fabrication. As for whether you alre Q: What exactly did your previous auction violate? May-09-05 A: The previous auction was in violation because it did not list an actual item, bu rather, a set of ideas. This time around I'm auctioning off a certificate which is a physical object. Q: I was woundering how much it would cost to ship to "crunchie" or the larger galexy in what you call: M55 I'm in one of the outer arms Thanks ^^^^^ May-09-05 A: "Crunchie", "The Milky Way"...is everytyhing out there named after some type of candy bar? As for shipping..it would probably be cheaper if you just relocated. From wtroytucker at yahoo.com Fri May 13 21:20:02 2005 From: wtroytucker at yahoo.com (W. Troy Tucker) Date: Fri, 13 May 2005 14:20:02 -0700 (PDT) Subject: [Paleopsych] Featuring Dave: The Bell Curve â Latest Addition to the Meta-Analysis In-Reply-To: Message-ID: <20050513212002.42810.qmail@web40610.mail.yahoo.com> Nonsense. --- Premise Checker wrote: > The Bell Curve ? Latest Addition to the > Meta-Analysis > http://www.featuringdave.com/logicalmeme/2005/04/bell-curve-latest-addition-to-meta.html > Tuesday, April 26, 2005 > > [4]Marginal Revolution reports on some > politically incorrect research Yahoo! Mail Stay connected, organized, and protected. Take the tour: http://tour.mail.yahoo.com/mailtour.html From anonymous_animus at yahoo.com Sat May 14 21:10:07 2005 From: anonymous_animus at yahoo.com (Michael Christopher) Date: Sat, 14 May 2005 14:10:07 -0700 (PDT) Subject: [Paleopsych] race and income In-Reply-To: <200505141800.j4EI0KR29390@tick.javien.com> Message-ID: <20050514211007.41608.qmail@web30812.mail.mud.yahoo.com> Regarding race, testing (IQ or other) and income... why are we regarding human beings purely as numers in the FIRST place? 
Isn't that a bit reductionist, considering the trend of science is toward context and integration? I'm sure there are uses for correlating race, income and IQ. I'm not against compiling statistics. But I'd hate to see that way of thinking about human beings replace the actual human reality. If it turned out that one racial group was simply not as good at testing as another, what would the civilized response be? I think a good response would be to evolve beyond the concept of slotting people by numbers and earning potential, and toward companies and communities where everyone is valued. Not just as a number, but as a many-faceted person who contributes non-quantifiable value. Improving test-taking skills and coming up with new exercises to improve IQ (video games, maybe) is a good idea, but I think the underlying paradigm that people are numbers is the real problem. On the race/IQ issue, one group is saying "Look at this information, we believe it's accurate, and it's political correctness to avoid looking at it". Another says "That's all explained by stress from income disparity, life span, etc." I tend to go with that explanation. Howard has mentioned in his books how being at the bottom of the pecking order stresses the body and brain and affects mood and intellect. Some feel strongly that it has to do with innate ability, not poverty or social exclusion. But *either way* you have to ask, "Why are we classifying people by IQ or income in the first place?" Modern thinkers are aware of systems theory. In a system with many variables, it's simpler to focus on one (it favors tidy explanations that fit on bumper stickers), but then you can lose the magic of a multivariable system where emergent patterns are possible. Think of the emergent patterns we could grow in schools, corporations and neighborhoods if we thought not about one variable like income or IQ, but about the full complexity of human experience and learning, individually and in groups. 
Would that bring up fears of socialism? No reason that can't be resolved with self-owned businesses where each person gets a share according to his value to the group. Michael From checker at panix.com Sun May 15 00:48:37 2005 From: checker at panix.com (Premise Checker) Date: Sat, 14 May 2005 20:48:37 -0400 (EDT) Subject: [Paleopsych] NYTBR: 'Freakonomics': Everything He Always Wanted to Know Message-ID: 'Freakonomics': Everything He Always Wanted to Know http://www.nytimes.com/2005/05/15/books/review/15HOLTL.html [First chapter appended.] By JIM HOLT FREAKONOMICS A Rogue Economist Explores the Hidden Side of Everything. By Steven D. Levitt and Stephen J. Dubner. 242 pp. William Morrow. $25.95. A FEW years ago, a young economist named Steven D. Levitt became briefly notorious for collaborating on a research paper that contained a strikingly novel thesis: abortion curbs crime. What Levitt and his co-author claimed, specifically, was that the sharp drop in the United States crime rate during the 1990's -- commonly attributed to factors like better policing, stiffer gun laws and an aging population -- was in fact largely due to the Roe v. Wade decision two decades earlier. The logic was simple: unwanted children are more likely to grow up to become criminals; legalized abortion leads to less unwantedness; therefore, abortion leads to less crime. This conclusion managed to offend nearly everyone. Conservatives were outraged that abortion was seemingly being promoted as a solution to crime. Liberals detected a whiff of racist eugenics. Besides, what business did this callow economist have trespassing on the territory of the criminologist? Economics is supposed to be about price elasticities and interest rates and diminishing marginal utilities, not abortion and crime. 
That is what makes it so useful to undergraduates seeking relief from insomnia. Levitt has strayed far from the customary paddock of the dismal science in search of interesting problems. How do parents of different races and classes choose names for their children? What sort of contestants on the TV show ''The Weakest Link'' are most likely to be discriminated against by their fellow contestants? If crack dealers make so much money, why do they live with their moms? Such everyday riddles are fair game for the economist, Levitt contends, because their solution involves understanding how people react to incentives. His peers seem to agree. In 2003, Levitt was awarded the John Bates Clark Medal, bestowed every two years on the most accomplished American economist under 40. ''Freakonomics,'' written with the help of the journalist Stephen J. Dubner, is an odd book. For one thing, it proudly boasts that it has no unifying theme. For another, each chapter begins with a quotation from the under-author (Dubner) telling us how great the over-author (Levitt) is: a ''master of the simple, clever solution,'' a ''noetic butterfly'' (!), ''genial, low-key and unflappable,'' etc. Yet a little self-indulgence can be tolerated in a book as instructive and entertaining as this one. (''Freakonomics'' grew out of a profile Dubner wrote about Levitt in The New York Times Magazine, where I am also a contributor, but we've never met.) The trivia alone is worth the cover price. Did you know that Ku Klux Klan members affixed a ''kl'' to many words (thus two Klansmen would hold a ''klonversation'' in the local ''klavern'') or that the secret Klan handshake was ''a left-handed, limp-wristed fish wiggle''? In the mid-1940's, a Klan infiltrator began to feed such intelligence to writers for the radio show ''The Adventures of Superman,'' who incorporated it into the plotline, thereby making the Klan look ridiculous in the eyes of the public and driving down its membership. 
Levitt uses the rise and fall of the K.K.K. to illustrate the power of hoarded information. He finds a parallel in the world of real estate, where brokers employ code words in advertisements to let potential buyers know that an apartment can be bought for less than its listing price. ''Spacious'' and ''great neighborhood'' are associated with a low closing price, whereas ''state of the art'' and ''maple'' are associated with a high price. Sometimes Levitt seeks out his raw material, and sometimes -- as with a stack of spiral notebooks kept by a Chicago crack gang -- it falls into his lap. These notebooks, obtained by a graduate student, Sudhir Venkatesh, who spent a scary period all but living with the gang, contained sales figures, wages, dues, even death benefits paid to families of murdered members over a four-year period, at the peak of the crack boom. By analyzing them, Levitt and Venkatesh were able to work out the organization of the crack business, which turned out to be rather like that of McDonald's. The leader of the gang did fairly well, making around $100,000 a year (tax free). But the gang's ''foot soldiers,'' who sold the crack on the streets, cleared only $3.30 an hour -- less than the minimum wage. For this pittance they ran a one-in-four risk of being killed during the period in question, worse than the odds for a Texas death-row inmate. Why would anyone take such a job? Like other ''glamour professions,'' the crack trade is best viewed as a tournament, Levitt observes. You have to start out at the bottom to have a shot at the top job. Levitt is happiest grappling with questions that have the potential to overturn the ''conventional wisdom.'' ''Where did all the criminals go?'' proved to be the perfect instance of such a question. The sudden and precipitous crime drop in the 1990's took everyone by surprise. Plenty of plausible-sounding hypotheses were put forward to explain it. 
But when Levitt turned an economist's eye to the data, he found that most of the supposed causes -- innovative policing strategies, stricter gun control, a strong economy, the aging of the population -- had a negligible effect. Others could be shown to play a limited role: increased imprisonment seemed to account for a third of the crime drop; the crash of the crack market for 15 percent; the hiring of more cops for another 10 percent. And the balance? Here is where Levitt and his collaborator, John Donohue of Stanford Law School, showed unsettling originality. Since abortion was legalized in 1973, around a million and a half women a year have ended unwanted pregnancies. Many of the women taking advantage of Roe v. Wade have been unmarried, poor and in their teens. Childhood poverty and a single-parent household are two of the strongest predictors of future criminality. As it happens, the crime rate started to drop in the early 1990's, just as children in the first post-Roe cohort were hitting their late teens, the criminal's prime. Hence Levitt and Donohue's audacious claim: the crime drop was, in economists' parlance, an ''unintended benefit'' of legalized abortion. A controlled experiment to test the truth of this theory is obviously out of the question. In ''Freakonomics,'' however, Levitt does the next best thing, teasing out subtle correlations that render the abortion-crime link more probable. (States like New York and California that legalized abortion before Roe v. Wade, for example, showed the earliest drops in crime.) In the social sciences, that is about as close as you can get to demonstrating causation. To insulate himself from the charge that he is advocating abortion as the cure for crime, Levitt does a little cost-benefit calculation. Suppose, for the sake of argument, we say a fetus is worth one one-hundredth of a person. Even then, he shows, the number of averted murders would not justify the number of abortions. This is clever but disingenuous. 
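The review's decomposition of the crime drop can be tallied directly: with roughly a third attributed to increased imprisonment, 15 percent to the crash of the crack market, and 10 percent to extra police, the unexplained balance assigned to legalized abortion comes to about 42 percent. A toy tally of those reported shares:

```python
# Shares of the 1990s crime drop attributed to each measured cause,
# as reported in the review (imprisonment ~1/3, crack crash 15%, more cops 10%).
explained = {
    "increased imprisonment": 1 / 3,
    "crash of the crack market": 0.15,
    "hiring of more cops": 0.10,
}

explained_total = sum(explained.values())
balance = 1.0 - explained_total
print(f"explained by measured causes: {explained_total:.0%}")  # 58%
print(f"balance left for Roe v. Wade: {balance:.0%}")          # 42%
```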
Anti-abortion groups do not hesitate to cite undesirable consequences of abortion. Why shouldn't abortion rights advocates get to cite its desirable consequences, like a drop in crime resulting from fewer unwanted children? Economists can seem a little arrogant at times. They have a set of techniques and habits of thought that they regard as more ''rigorous'' than those of other social scientists. When they are successful -- one thinks of Amartya Sen's important work on the causes of famines, or Gary Becker's theory of marriage and rational behavior -- the result gets called economics. It might appear presumptuous of Steven Levitt to see himself as an all-purpose intellectual detective, fit to take on whatever puzzle of human behavior grabs his fancy. But on the evidence of ''Freakonomics,'' the presumption is earned. Jim Holt reviews books for The New Yorker and The New York Review of Books, among other publications. ----------------- http://www.nytimes.com/2005/05/15/books/chapters/0515-1st-levitt.html?pagewanted=print First chapter of 'Freakonomics' By STEVEN D. LEVITT and STEPHEN J. DUBNER Imagine for a moment that you are the manager of a day-care center. You have a clearly stated policy that children are supposed to be picked up by 4 p.m. But very often parents are late. The result: at day's end, you have some anxious children and at least one teacher who must wait around for the parents to arrive. What to do? A pair of economists who heard of this dilemma - it turned out to be a rather common one - offered a solution: fine the tardy parents. Why, after all, should the day-care center take care of these kids for free? The economists decided to test their solution by conducting a study of ten day-care centers in Haifa, Israel. The study lasted twenty weeks, but the fine was not introduced immediately. 
For the first four weeks, the economists simply kept track of the number of parents who came late; there were, on average, eight late pickups per week per day-care center. In the fifth week, the fine was enacted. It was announced that any parent arriving more than ten minutes late would pay $3 per child for each incident. The fee would be added to the parents' monthly bill, which was roughly $380. After the fine was enacted, the number of late pickups promptly went ... up. Before long there were twenty late pickups per week, more than double the original average. The incentive had plainly backfired. Economics is, at root, the study of incentives: how people get what they want, or need, especially when other people want or need the same thing. Economists love incentives. They love to dream them up and enact them, study them and tinker with them. The typical economist believes the world has not yet invented a problem that he cannot fix if given a free hand to design the proper incentive scheme. His solution may not always be pretty - it may involve coercion or exorbitant penalties or the violation of civil liberties - but the original problem, rest assured, will be fixed. An incentive is a bullet, a lever, a key: an often tiny object with astonishing power to change a situation. We all learn to respond to incentives, negative and positive, from the outset of life. If you toddle over to the hot stove and touch it, you burn a finger. But if you bring home straight A's from school, you get a new bike. If you are spotted picking your nose in class, you get ridiculed. But if you make the basketball team, you move up the social ladder. If you break curfew, you get grounded. But if you ace your SATs, you get to go to a good college. If you flunk out of law school, you have to go to work at your father's insurance company. But if you perform so well that a rival company comes calling, you become a vice president and no longer have to work for your father. 
If you become so excited about your new vice president job that you drive home at eighty mph, you get pulled over by the police and fined $100. But if you hit your sales projections and collect a year-end bonus, you not only aren't worried about the $100 ticket but can also afford to buy that Viking range you've always wanted - and on which your toddler can now burn her own finger. An incentive is simply a means of urging people to do more of a good thing and less of a bad thing. But most incentives don't come about organically. Someone - an economist or a politician or a parent - has to invent them. Your three-year-old eats all her vegetables for a week? She wins a trip to the toy store. A big steelmaker belches too much smoke into the air? The company is fined for each cubic foot of pollutants over the legal limit. Too many Americans aren't paying their share of income tax? It was the economist Milton Friedman who helped come up with a solution to this one: automatic tax withholding from employees' paychecks. There are three basic flavors of incentive: economic, social, and moral. Very often a single incentive scheme will include all three varieties. Think about the anti-smoking campaign of recent years. The addition of a $3-per-pack "sin tax" is a strong economic incentive against buying cigarettes. The banning of cigarettes in restaurants and bars is a powerful social incentive. And when the U.S. government asserts that terrorists raise money by selling black-market cigarettes, that acts as a rather jarring moral incentive. Some of the most compelling incentives yet invented have been put in place to deter crime. Considering this fact, it might be worthwhile to take a familiar question - why is there so much crime in modern society? - and stand it on its head: why isn't there a lot more crime? After all, every one of us regularly passes up opportunities to maim, steal, and defraud. 
The chance of going to jail - thereby losing your job, your house, and your freedom, all of which are essentially economic penalties - is certainly a strong incentive. But when it comes to crime, people also respond to moral incentives (they don't want to do something they consider wrong) and social incentives (they don't want to be seen by others as doing something wrong). For certain types of misbehavior, social incentives are terribly powerful. In an echo of Hester Prynne's scarlet letter, many American cities now fight prostitution with a "shaming" offensive, posting pictures of convicted johns (and prostitutes) on websites or on local-access television. Which is a more horrifying deterrent: a $500 fine for soliciting a prostitute or the thought of your friends and family ogling you on www.HookersAndJohns.com . . .

From checker at panix.com Sun May 15 00:48:48 2005
From: checker at panix.com (Premise Checker)
Date: Sat, 14 May 2005 20:48:48 -0400 (EDT)
Subject: [Paleopsych] NYTBR: 'Translation Nation': Spanglish
Message-ID:

'Translation Nation': Spanglish
http://www.nytimes.com/2005/05/15/books/review/15ERICKSO.html
By STEVE ERICKSON

TRANSLATION NATION Defining a New American Identity in the Spanish-Speaking United States. By Héctor Tobar. 307 pp. Riverhead Books. $24.95.

EARLY in this chronicle of an emerging Latino United States, someone in Tijuana trying to make his way north remarks to Héctor Tobar, ''I think that the border will disappear before we lose the desire to cross.'' ''Translation Nation: Defining a New American Identity in the Spanish-Speaking United States'' crosswires de Tocqueville's ''Democracy in America'' with Che Guevara's ''Motorcycle Diaries'' as Tobar makes the journey from West Coast to East, from America's future into its past, from Hollywood's ''seamier half'' -- is there another half?
-- to Texas, Florida and New York, with stops in the heartland of Nebraska and Idaho, where a Hispanic America proves as enduring as it would seem unlikely. ''Today,'' Tobar notes, ''Los Angeles and California are quietly exporting their people and their way of life eastward across the continent.'' Tobar's book is a triumph of observation. In one account after another, from that of the couple who work in a Tyson chicken plant in Alabama, where the author goes ''undercover'' as a factory worker, ''hoping to see America through the innocent eyes of the wandering migrant,'' to the story of the marine from Guatemala who dies in the Persian Gulf, Tobar vividly and movingly captures the conflict between the immigrant ideal to which America has always aspired and the presiding white culture's deep ambivalence about the immigrant presence. While Tobar is an impressive reporter -- the former national Latino affairs correspondent for The Los Angeles Times, he is now the paper's Buenos Aires bureau chief -- ''Translation Nation'' is often most compelling when it's telling his own story, which begins in Los Angeles, ''my tierra,'' he calls it, ''my homeland.'' Los Angeles -- ''the place to which I will always return'' -- is the nerve center for Tobar's quest. For Latinos, it's the ultimate American city, a city of immigrants in a country of them, trafficking in identity, reinvention and the opportunity for people to recast themselves in the image of their hopes. When Tobar was a young boy, the image of his hopes bore more resemblance to the Lakers basketball star Jerry West than to the revered Che, for Tobar's leftist parents and other Latinos a Jesus-like martyr whose hold on their imaginations was mythic. But as Los Angeles grew increasingly Hispanic over the next 40 years, Tobar was increasingly pulled back to the roots of his Guatemalan family. 
Che became a more complicated symbol, both of an oppressive immigrant past and of a utopian future that America at once promised and betrayed. Growing up, Tobar wrestled with notions of identity, and he still does. When he returns to Alabama, ''liberated of all disguises,'' four years after his reporting trip, he goes to the new Roman Catholic church the Latinos have built, and decides to take communion. Should he surrender to his ''middle-class squeamishness'' and receive the wafer in his hand, or open his mouth to the priest's offering like ''a real hombre''? In a moment he decides: ''I opened my mouth.'' Tobar's life, and the life of a Spanish-speaking United States, contain the paradoxical story of the American dream -- pockmarked by the realities of racism and economic exploitation -- that transforms its aspirants and is in turn transformed by them. If ''Translation Nation'' is haunted by how America's border will disappear before the desire to cross it does, Tobar is more preoccupied with the border than with the desire. Therein lies what is most conspicuously absent from his otherwise fine book. ''What it means to be an American citizen,'' Tobar writes, ''and what makes you a citizen, has been a fluid concept throughout this country's history. . . . Throughout the 17th and 18th centuries, the time when most modern American political institutions were being founded, the dominant strain in political philosophy had it that only property owners were qualified to vote or hold office.'' True enough, but as those institutions have evolved so has the idea of citizenship. It is typical of ''Translation Nation'' that Tobar defines being an American not in terms of what it means but in terms of what it doesn't. Too many stories here, including that of Tobar's own parents, duck the question: Why is the desire to cross the border stronger than the border itself? 
''Even if they catch us 100 times,'' that man in Tijuana goes on to say, ''we're going to get in one day.'' After finishing ''Translation Nation,'' the reader remains uncertain whether the determination to try 101 times is something that Tobar somehow finds too incomprehensible or unimportant to talk about, or whether he considers it an intangible too troublesome to contemplate for everything it insinuates. For all the ways that America routinely fails its promise, it's also, uniquely, a country defined by an idea rather than by common territory or tradition. It wouldn't undercut Tobar's eloquent complaint about the injustices of the nation that so many Latinos sacrifice so much to adopt as their own -- if anything, it would inform it -- to acknowledge that, in a world of countries that build borders to keep people in, America has felt compelled to build borders to keep out the millions and millions of people for whom its allure involves more than just a better job.

Steve Erickson is the author of ''Our Ecstatic Days'' and editor of the literary magazine Black Clock.

From checker at panix.com Sun May 15 00:49:05 2005
From: checker at panix.com (Premise Checker)
Date: Sat, 14 May 2005 20:49:05 -0400 (EDT)
Subject: [Paleopsych] Wolfram: Three Years of A New Kind of Science
Message-ID:

Three Years of A New Kind of Science
From: Stephen Wolfram
Date: Sat, 14 May 2005 09:44:38 -0500
To: eugen at leitl.org
Subject: Three Years of A New Kind of Science

Today it is three years since I published my book A New Kind of Science. It seems like a lot longer than that--so much has happened in the intervening time. What started as a book is steadily emerging as a major intellectual movement with its own structure and community. The first year after the book came out was dominated by a certain amount of "paradigm shift turbulence." But by the second year, many serious projects were starting, and indicators like the publication rate of NKS-based papers began to climb.
Now, in the third year, a recurring theme has been the emergence of a growing group of exceptional individuals who are planning to base their careers on NKS. There are scores of NKS-based Ph.D. theses underway, and all sorts of NKS-based corporate ventures--as well as our own growing NKS R&D operation in Boston. Later this year, the first full-length independent book based on NKS will be published, and the first independent NKS conference will be held. In late June, we will be holding our third annual NKS Summer School--for which there were a record number of exceptional applicants. We are planning to have our next major NKS conference in spring 2006; we'll be announcing the details shortly. There will also be an NKS mini-course at our Wolfram Technology Conference this October. This year I myself have mostly been in a tool-building phase, working on major new Mathematica technology that, among other things, will be very important for NKS research--and which I can't wait to use. There's a lot more in the pipeline too. We're developing plans for a new kind of publishing medium for NKS (partly based on the Complex Systems journal that I've been publishing since 1986). We're also planning later this year to start regular "live experiments," in which I'll be leading public web-conferenced explorations into the computational universe. Also in the next few months we're planning to release a rather unexpected consumer-oriented application of NKS, which I expect we'll all be hearing quite a bit about. As we begin the fourth year of NKS, I feel more optimistic than ever before about its promise--and its significance in science, technology, the arts, and beyond. It will be fascinating to see where the most important NKS-based breakthroughs come from, and what they will be. I hope you'll have the opportunity to take part in the excitement of the upcoming years of early NKS growth. 
--
Stephen Wolfram
http://www.wolframscience.com

From checker at panix.com Sun May 15 00:49:21 2005
From: checker at panix.com (Premise Checker)
Date: Sat, 14 May 2005 20:49:21 -0400 (EDT)
Subject: [Paleopsych] Nextbook; Why are American psychologists wary of transforming your soul?
Message-ID:

Why are American psychologists wary of transforming your soul? Andrew Heinze makes explicit an unspoken connection.
http://www.nextbook.org/cultural/print.html?id=129
A gateway to Jewish literature, culture and ideas
5.2.11
INTERVIEW BY Kristin Eliasberg

From Freud to Ann Landers, Jewish psychologists and advice columnists have been instrumental in shaping the collective American psyche, but often took pains to downplay their background. And as a rule, historians have avoided examining how the shared heritage of these popular thinkers has affected the modern understanding of the self. In Jews and the American Soul, Andrew Heinze argues that, whether or not they were conscious of it, they share a moral sensibility grounded in the Hebrew Bible.

Why write a book specifically about Jewish psychologists in America? To show that Jews in the 20th century were central actors in the development of American ideas of the psyche, the soul, human nature, and so forth. People have been aware that there are a lot of famous Jews--that's not surprising--but even among historians, there's no sense that anything distinctively Jewish was conveyed into mainstream American thought in such an important area as popular psychology.

Joseph Jastrow, Alfred Adler, Sigmund Freud, Joyce Brothers, Erich Fromm

You talk about many Jewish figures in psychological thought since the 1890s. What ties them all together? The Jews involved in popularizing psychological ideas tended to say things that they had in common but were not shared with non-Jewish counterparts.
For example, in the early 1900s they reacted much more to mainstream ideas about intelligence and the degree to which different ethnic groups might have different capacities and personality traits. So many were immigrants and were fighting the nativists who were using these psychological arguments to shut down immigration. The Jewish thinkers seemed to almost have a consensus against these hereditarian ideas. And what they said about the most basic questions of human nature changed the American conversation. Is there one thing they all have in common? I would say a Jewish moral perspective--a specific Jewish moral sensibility marching into the mainstream culture. How would you characterize that perspective? In a lot of ways it would overlap with a Christian moral perspective: both read the Book of Proverbs, for example, which has moral instruction. But it also exists in tension with a Christian moral perspective. In the early years of psychology, there were tendencies to almost euphoric views about how one can totally transform oneself. The Jews didn't jump on that bandwagon; they pulled back and followed a more guarded optimism. They didn't subscribe to those notions partly because of the Jewish rationalist tradition, but also because they had no model like the Christian conversion experience. Especially for Protestants, there's a model of complete transformation--the Holy Ghost can enter you and purge you of sin. There really wasn't anything like that in mainstream Judaism. The Jewish approach comes more from the tradition of musar, to keep working at moral improvement. It's hopeful, but emphasizes self-discipline and emotional self-control. The people I talk about emphasized evil, took a slightly darker view about human nature. I ascribe that to the basic Jewish historic sensibility--the Inquisition, pogroms, the whole history of persecution. How do self-proclaimed secularists--[1]Freud, [2]Adler, even [3]Dr. Joyce Brothers--fit this mold? 
All of these people grew up in a world which was Jewish in important ways. Many of them were immigrants or came from immigrant families; some grew up in religious households. It's only after you introduce those biographical details that it makes sense: These people were raised as Jews. They may have become secular, but it's not like you forget your parents or that you went to a Jewish school or studied the Bible. What was really interesting with Freud and Adler was that they were both fascinated and inspired by the Bible stories that they learned in Vienna. [4]Joseph Jastrow was one of the first to really write for a mass audience--was one of the first radio psychologists, had his own newspaper column. Jastrow is a great example of someone projecting his own perspective into the public. His father was a Talmudic scholar and wrote a [5]dictionary of Talmudic terms that is still in use. He was also the brother-in-law to [6]Henrietta Szold. In the secular guise of popular psychology Jastrow carried on what rabbis used to do traditionally--correspondence with Jews who had questions about how to apply Jewish laws, how should you live, the right way to live. You've said you had to be very careful not to overreach. Why? Well, I could have decided to just write really speculatively, if I had just gone out and said, "This is a Jewish idea, that is Jewish," maybe I could have written a best seller. But it was important to make a dent in the way that people teach and write about 20th century American history. Academics especially can be very wary about any ascription of Jewishness to any of these ideas. Even with [7]Erich Fromm, who was a yeshiva bocher, Orthodox until he was 20, you couldn't make more of a case about how completely Jewishly saturated his whole perspective was. Yet somehow no one had ever really emphasized that before. Believe me, I have gotten into trouble in academic circles for trying to isolate this as a Jewish experience. 
Some people were really bugged about the fact that I was singling out Jews. Why? [8]David Hollinger pointed out that a persistent inhibition, based on the legitimate desire to avoid ethnic stereotyping, has kept scholars from investigating the ways in which Jewishness might have figured into what intellectuals chose to write and talk about. Put into normal language, there's one real good reason to be wary about saying this is a Jewish idea and that's a Jewish idea: that was the kind of thing the Nazis did. They posited that there is a German or Aryan consciousness or mind or soul or psyche and a Jewish one and that they are essentially different. Among Jews there is a hesitancy to talk too much about Jewish influence. And this is even more true for historians who are not themselves Jewish. Those are tricky waters. Does this inhibition impede scholarship, or is it a good thing? It can be good. For example, when I submitted some portions of the book as an essay to The Journal of American History it was intriguing to them, because no one had done this before in this way. But they also raised serious issues. I was forced to actually prove what I thought. But in another instance, the [9]NEH refused my second grant application. From the readers' comments, I got the sense some of them weren't evaluating the credentials of the project but were going on this off-the-cuff feeling that there's nothing Jewish here, why are you singling out these Jews? You organized a panel at the Scholars' Conference in American Jewish History on the "crisis of relevance." What is that crisis? Jews will generally come up in courses and texts when it's time to talk about the big immigration between the 1870s and the 1920s, the same time you talk about the Italians and the Slavs. 
What I was saying is that in several areas of American history, and I single out the economy and the growth of American culture and society, Jews have been important in shaping the American experience, way out of proportion to their numbers. But we know almost nothing about that.

Why do you think that is? The burden of responsibility falls on scholars that are dealing with the history of American Jews to make the case--not just talking within Jewish intramural circles, but making the case to people who do U.S. history in general. The people who do this work need to impress others within the field, so that the ways in which Jews were involved in the reshaping of American popular culture and intellectual life become part of the larger American story, not just something that gets recycled every year in the same Jewish venues.

What's your next project? I switched gears completely and decided to move into fiction. I wrote a coming-of-age story set in a New Jersey boarding school in the 1970s. The protagonist is a Jewish boy from a lower-middle-class family who attends this elite school on a scholarship and becomes involved with boys, and a few girls, from backgrounds very different from his own.

Kristin Eliasberg has written about the law for The New York Times, the Chicago Tribune, and the Boston Globe.

References
2. http://www.alfredadler-ny.org/alfred_adler.htm
3. http://www.stayhealthy.com/drjoyce/
4. http://psych.wisc.edu/jastrow.html
5. http://www.yucommentator.com/main.cfm/include/detail/storyid/474743
6. http://www.jafi.org.il/education/100/people/bios/soldz.html
7. http://www.nypl.org/research/chss/spe/rbk/faids/Fromm/frommpart2.html#Bio
8. http://history.berkeley.edu/faculty/Hollinger/
9.
http://www.neh.fed.us/

From checker at panix.com Sun May 15 00:49:33 2005
From: checker at panix.com (Premise Checker)
Date: Sat, 14 May 2005 20:49:33 -0400 (EDT)
Subject: [Paleopsych] Pierre van den Berghe reviews Frank Salter's book On Genetic Interests
Message-ID:

Pierre van den Berghe reviews Frank Salter's book _On Genetic Interests_ (Peter Lang, 2003, 388 pp., £23.00) in _Nations and Nationalism_ (Blackwell) 2005, vol. 11 (1), 163-165.

This is the kind of book which social scientists should read if they ever hope to become literate about human biology and its implications for our social behaviour. For many, if not most, social scientists, human sociobiology (or evolutionary psychology, or behavioural ecology, or ethology, or whatever label you want to give to the biology of behaviour) is simply anathema, on both theoretical and ideological grounds. However, increasing minorities of anthropologists, psychologists, economists, political scientists and sociologists are beginning to absorb the social implications of human evolution and genetics. All ideological trends, by the way, are represented among these 'revisionists'.

Salter's book is divided into three parts. First, he expands W. D. Hamilton's 'inclusive fitness' theory to ethnies. Then he draws the policy implications of ethnic nepotism. Finally, he concludes with the ethics thereof. No summary can do justice to a work so rich and novel in content, but let me try.

In the first 75 pages, Salter essentially extends Hamiltonian kin selection from family to ethny, as I suggested should be done a quarter of a century ago. But he goes an important step beyond my simple formulation of ethnic nepotism as an extended and attenuated form of family nepotism. Salter persuasively argues that one should not only take into account genes shared by common descent, as in the classical Hamiltonian coefficient of relatedness, but all shared genes, whether by common descent or not, as in R. A. Fisher's coefficient of kinship.
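The coefficients the review is trading in can be made concrete with a small sketch. This is a hedged illustration, not anything from Salter's book: the relatedness values below are the standard textbook figures for genes shared by common descent (each meiotic link halves r, and multiple lines of descent are summed), and `hamilton_favors` is a hypothetical helper, not an established API.

```python
# Standard coefficients of relatedness r (genes identical by common descent).
RELATEDNESS = {
    "identical twin": 1.0,
    "parent/child": 0.5,
    "full sibling": 0.5,
    "half sibling": 0.25,
    "grandparent/grandchild": 0.25,
    "aunt or uncle/nephew or niece": 0.25,
    "first cousin": 0.125,
}

def hamilton_favors(r: float, benefit: float, cost: float) -> bool:
    """Hamilton's rule: an altruistic act is selected for when r * b > c."""
    return r * benefit > cost

# Helping a half sibling (r = 0.25) at a cost of 1 fitness unit pays off
# only when the benefit to the recipient exceeds 4 units:
print(hamilton_favors(RELATEDNESS["half sibling"], benefit=5, cost=1))  # True
print(hamilton_favors(RELATEDNESS["half sibling"], benefit=3, cost=1))  # False
```

On these figures, the Harpending estimate cited below amounts to saying that, measured against the wider population, a random co-ethnic in a long-endogamous group stands to an individual roughly where a half sibling stands in this table.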
According to H. Harpending's calculations, the average coefficient of kinship in many ethnies that have practiced a large degree of endogamy for generations (the very definition of an ethny) is about as high as between half-siblings, aunt and nephew, or grand-parent and grand-child. Thus, ethnic nepotism is not a weak form of family nepotism, but virtually a proxy for it. I was even more right than I knew! Furthermore, as we have many more co-ethnics than relatives, the aggregate mass of genes shared with the former dwarfs that shared with the latter. The liberal, 'multicultural' counter to this is that, since all humans share something like 99.9 per cent of their genes with one another, we are all virtually identical twins under the skin, and should extend our fraternal embrace to humanity as a whole. We are also over 98 per cent chimp, by the way. Relatedness is always relative to others with whom we compete for scarce resources. Therefore, it is those extra shared genes within the family or ethny which increase our fitness compared to our less related competitors, and which make us uniquely different from them. Vive la différence is the royal road to inclusive fitness maximisation. Show me a society where parents routinely think their neighbours' children are 99.9 per cent as good as their own, and allocate their bounty according to this fine principle. A few tried (Hutterites, Kibbutzim) but failed.

The second and much longer part of the book (some 190 pages) applies this model of ethnic nepotism to policies or strategies of allocation of resources within and between states and ethnies controlling territories with finite carrying capacities. It is impossible to summarise adequately the countless implications of this model to immigration policies, citizenship law, affirmative action, multiculturalism, etc., except to say that Salter sees the universalisation of the genuine nation-state, i.e.
the mono-ethnic one, as the best 'stable evolutionary strategy' for our species. Finally, Salter concludes with a shorter section (some 40 pages) on ethics. If we consider it eminently moral for parents to care preferentially for their children so long as they do not harm their neighbours' offspring, why should we stigmatise preference for our own ethnic kind who are, after all, extended kin? He thus advocates an ethic of 'adaptive utilitarianism' in which 'the ultimate form of liberty is the freedom to defend one's genetic interests' (p. 283), restrained only by the equal right of others to do likewise, and a prohibition to harm others. No doubt, Salter's position will raise many accusations of fascism or racism, but let me state here that he is not at all the conventional reactionary! He advocates a liberal, democratic nation-state; he is acutely aware of the problem of elite parasitism; he expresses horror at the orgies of genocide and war provoked by nationalism in the nineteenth and twentieth centuries; he is very critical of globalisation, which he sees as another form of elite parasitism. However, his position also coincides with that of the conventional right in opposing open immigration, multiculturalism, affirmative action and other sacred cows of the politically correct left. As a sociobiologist who is also a political anarchist, I applaud Salter's extension of inclusive fitness to ethnic nepotism, because it helps us understand so much of human affairs; but I am sceptical of his faith in the liberal ethnic state. Salter, to be sure, is well aware of its openness to elite parasitism, but he is nevertheless overly sanguine about its prospects. First, his assumption that the nation-state has an affinity for liberal democracy is contradicted by much historical evidence. Nation-states have come in all political colours, from genocidal fascism, to 'Herrenvolk democracy', to parliamentary, bourgeois democracy, to murderous communism. 
The nineteenth and twentieth centuries, the Golden Age of nation-states, have been graveyards of lethal industrialised warfare and routinised genocides. As for elite parasitism, it is not simply a danger to which 'liberal democracies' are exposed: it is the very essence of any state. States are essentially killing machines run by the few to steal from the many. I, for one, do not mourn the current passing of the nation-state, nor the devolution of many of its powers. The track record of the past two centuries does not deserve its eulogy.

Finally, there is the question Salter raises early in the book (pp. 77-85): do contemporary humans living in urbanised mass societies care about their fitness, and do they continue to behave accordingly? The answer he gives is, I think, correct: yes, but less so than they did when humans lived in their environment of evolutionary adaptation, the savannas of Pleistocene Africa. The real question thus becomes: how much less? I believe our species has recently (in the last century) taken a maladaptive leap by subverting proximate means of fitness maximisation to serve purely hedonistic rather than evolutionary ends. This is blatantly the case with sexual behaviour, which has been almost entirely de-coupled from reproduction. (To be sure, much like Bonobo chimps, we have long been 'copulatorily redundant', and exchanged sex for food, protection, investment in offspring, companionship and fun, but not in the rational, fail-safe way made possible by modern technology.) The current epidemics of obesity and drug addictions are other symptoms of this run-away hedonism rooted in biological predispositions that were once adaptive, but have become acutely maladaptive. How long can this go on? Probably not very long, but who cares, compared to the instant rewards of hedonism? Perhaps we are simply too smart for our own good.
Perhaps, as a by-product of our encephalisation, which brought acutely painful self-consciousness and modern bio-technology, we no longer want to play our genes' game of reproducing themselves. In a pique of supreme biological hubris, we proclaim ourselves more important than the sum of our genes. The future demise of our entire species pales by comparison with our own individual demise. Après moi le déluge.

From waluk at earthlink.net Sun May 15 02:39:56 2005
From: waluk at earthlink.net (G. Reinhart-Waller)
Date: Sat, 14 May 2005 19:39:56 -0700
Subject: [Paleopsych] race
In-Reply-To: <20050514211007.41608.qmail@web30812.mail.mud.yahoo.com>
References: <20050514211007.41608.qmail@web30812.mail.mud.yahoo.com>
Message-ID: <4286B67C.6090600@earthlink.net>

Thanks for your cogent reply to the question of race. This term is certainly controversial, isn't it? I think that the word "race" is no different from terms like "class", "ethnicity", "wealth", "social status", or "I.Q." If one examines a cluster of people (a group), one sees similarity; if one then examines individuals, one sees the differences between them. I prefer examining the above vocabulary in terms of the "yes there is; no there is not" definitions that accompany these words, or, to use a more moronic expression, "both," when one looks at the group as well as the individual. Taking an international profile, all people fall within three basic races: white, black, Asian. Yet these races no longer map neatly onto geography: whites no longer live only in Europe or America, blacks are not the only inhabitants of Africa, and Asians live in Asia but have also found homelands in Europe, Russia, America, et al. Because our world has become so mobile within the past century, determining a person's race has become very difficult. Our world is melding into a giant pot, and so have our individual ethnicities. Whether or not this is a positive circumstance continues to be debatable.
Many of us are favorably disposed to a multi-ethnic blend for the new millennium; others are not.

Regards,
Gerry Reinhart-Waller

Michael Christopher wrote:
> Regarding race, testing (IQ or other) and income... why are we regarding human beings purely as numbers in the FIRST place? Isn't that a bit reductionist, considering the trend of science is toward context and integration?
>
> I'm sure there are uses for correlating race, income and IQ. I'm not against compiling statistics. But I'd hate to see that way of thinking about human beings replace the actual human reality. If it turned out that one racial group was simply not as good at testing as another, what would the civilized response be?
>
> I think a good response would be to evolve beyond the concept of slotting people by numbers and earning potential, and toward companies and communities where everyone is valued. Not just as a number, but as a many-faceted person who contributes non-quantifiable value. Improving test-taking skills and coming up with new exercises to improve IQ (video games, maybe) is a good idea, but I think the underlying paradigm that people are numbers is the real problem.
>
> On the race/IQ issue, one group is saying "Look at this information, we believe it's accurate, and it's political correctness to avoid looking at it". Another says "That's all explained by stress from income disparity, life span, etc." I tend to go with that explanation. Howard has mentioned in his books how being on the bottom of the pecking order stresses the body and brain and affects mood and intellect. Some feel strongly that it has to do with innate ability, not poverty or social exclusion. But *either way* you have to ask, "Why are we classifying people by IQ or income in the first place?"
>
> Modern thinkers are aware of systems theory.
> In a system with many variables, it's simpler to focus on one (favors tidy explanations that fit on bumperstickers), but then you can lose the magic of a multivariable system where emergent patterns are possible. Think of the emergent patterns we could grow in schools, corporations and neighborhoods if we thought about not one variable like income or IQ, but about the full complexity of human experience and learning, individually and in groups? Would that bring up fears of socialism? No reason that can't be resolved with self-owned businesses where each person gets a share according to his value to the group.
>
> Michael
>
> __________________________________
> Yahoo! Mail Mobile
> Take Yahoo! Mail with you! Check email on your mobile phone.
> http://mobile.yahoo.com/learn/mail
> _______________________________________________
> paleopsych mailing list
> paleopsych at paleopsych.org
> http://lists.paleopsych.org/mailman/listinfo/paleopsych

From shovland at mindspring.com Sun May 15 14:42:47 2005
From: shovland at mindspring.com (Steve Hovland)
Date: Sun, 15 May 2005 07:42:47 -0700
Subject: [Paleopsych] The new Biology
Message-ID: <01C55921.B0AABEE0.shovland@mindspring.com>

more at: http://www.brucelipton.com/newbiology.php

© 2001-2005 Bruce H. Lipton, Ph.D.

Recent advances in cellular science are heralding an important evolutionary turning point. For almost fifty years we have held the illusion that our health and fate were preprogrammed in our genes, a concept referred to as genetic determinacy. Though mass consciousness is currently imbued with the belief that the character of one's life is genetically predetermined, a radically new understanding is unfolding at the leading edge of science. Cellular biologists now recognize that the environment (external universe and internal physiology), and more importantly, our perception of the environment, directly controls the activity of our genes.
The lecture will broadly review the molecular mechanisms by which environmental awareness interfaces with genetic regulation and guides organismal evolution. The quantum physics behind these mechanisms provides insight into the communication channels that link the mind-body duality. An awareness of how vibrational signatures and resonance impact molecular communication constitutes a master key that unlocks a mechanism by which our thoughts, attitudes and beliefs create the conditions of our body and the external world. This knowledge can be employed to actively redefine our physical and emotional well-being. From shovland at mindspring.com Sun May 15 14:44:41 2005 From: shovland at mindspring.com (Steve Hovland) Date: Sun, 15 May 2005 07:44:41 -0700 Subject: [Paleopsych] Insight into Cellular "Consciousness" Message-ID: <01C55921.F5234790.shovland@mindspring.com> more at: http://www.brucelipton.com/cellular.php Dr. Bruce H. Lipton, Ph.D. © 2001 Reprinted from Bridges, 2001, Vol. 12(1):5, ISSEEM, (303) 425-4625 Though a human is composed of over fifty trillion cells, there are no physiologic functions in our bodies that were not already pre-existing in the biology of the single, nucleated (eukaryotic) cell. Single-celled organisms, such as the amoeba or paramecium, possess the cytological equivalents of a digestive system, an excretory system, a respiratory system, a musculoskeletal system, an immune system, a reproductive system and a cardiovascular system, among others. In humans, these physiologic functions are associated with the activity of specific organs. These same physiologic processes are carried out in cells by diminutive organ systems called organelles. Cellular life is sustained by tightly regulating the functions of the cell's physiologic systems. The expression of predictable behavioral repertoires implies the existence of a cellular "nervous system." This system reacts to environmental stimuli by eliciting appropriate behavioral responses. 
The organelle that coordinates the adjustments and reactions of a cell to its internal and external environments would represent the cytoplasmic equivalent of the "brain." From shovland at mindspring.com Sun May 15 14:49:22 2005 From: shovland at mindspring.com (Steve Hovland) Date: Sun, 15 May 2005 07:49:22 -0700 Subject: [Paleopsych] The Human Genome Project: A Cosmic Joke Message-ID: <01C55922.9C7C7750.shovland@mindspring.com> http://www.brucelipton.com/genome.php In considering the minimal number of genes needed to make a human, we would start with a base number of over 70,000 genes, one for each of the over 70,000 proteins found in a human. Then we include the number of regulatory genes needed to provide for the complexity of patterns expressed in our anatomy, physiology and behavior. Let's round off the number of human genes to a total of an even 100,000, by including a minimalist number of 30,000 regulatory genes. Ready for the Cosmic Joke? The results of the Genome project reveal that there are only about 34,000 genes in the human genome. Two-thirds of the anticipated genes do not exist! How can we account for the complexity of a genetically controlled human when there are not even enough genes to code just for the proteins? From waluk at earthlink.net Sun May 15 16:14:03 2005 From: waluk at earthlink.net (G. Reinhart-Waller) Date: Sun, 15 May 2005 09:14:03 -0700 Subject: [Paleopsych] status In-Reply-To: <20050515144554.71786.qmail@web30812.mail.mud.yahoo.com> References: <20050515144554.71786.qmail@web30812.mail.mud.yahoo.com> Message-ID: <4287754B.9050204@earthlink.net> Most people in our parents' or grandparents' generations didn't have positive experiences with other cultures, especially those which foster different religions, employment status, food preparations and treats, economic status, or even household arrangements, unless they were tuned into acceptance of those cultural characteristics most resembling their own. 
Many ethnic groups were viewed with disapproval, especially if they were deemed inferior due to their lower status. In a small community like the one in which I was raised, status in terms of everything that might be used as a point of comparison was identified and boldly placed on the table for self-evaluation. I stacked up well in comparison to my neighbors because my family chose to remain in the "old neighborhood," refusing a move to suburbia where most of my father's co-workers lived. My father's job, when measured by those my neighbors' dads held, was top-notch and didn't much pale when I stacked him up against those jobs held by my college friends' fathers. But then, my college was a state university and not a private one. Likely my world would have been viewed through different lenses if my university life had been of a different kind. Regards, Gerry Reinhart-Waller Michael Christopher wrote: >--Probably depends on a person's early experiences >with other cultures. If those experiences are >positive, multiculturalism appears nonthreatening. If >those experiences are embarrassing or painful, it can >be hard to undo that early experience as an adult, and >some adults find it easier to crusade against the >mixing of cultures as a way of avoiding having to >adapt. But mixing is inevitable. It would have been >impossible for some authority to say "Blues and >country music shall never blend!" But it wouldn't have >held back country-blues. Genes recombine, and so do >cultures. It's how nature works. > > From shovland at mindspring.com Sun May 15 16:26:09 2005 From: shovland at mindspring.com (Steve Hovland) Date: Sun, 15 May 2005 09:26:09 -0700 Subject: [Paleopsych] Oscillatory activity in the magnetoencephalogram during auditory processing and action preparation Message-ID: <01C55930.22130020.shovland@mindspring.com> http://www.mp.uni-tuebingen.de/mp/index.php?id=131 Prof. Dr. Jochen Kaiser 1 Prof. Dr. Werner Lutzenberger 2 Dipl.-Psych. 
Susanne Leiberg 2, 3 1 Institute of Medical Psychology, University of Frankfurt 2 Institute of Medical Psychology and Behavioral Neurobiology, University of Tübingen 3 Graduate School of Neural and Behavioral Sciences, University of Tübingen The topography and time course of high-frequency oscillatory activity in the magnetoencephalogram (MEG) are studied by means of a statistical probability mapping (for details see Lutzenberger et al., 2002). Our research has two main foci: Gamma band activity (GBA) during auditory processing We investigate the role of GBA in bottom-up and top-down driven processing of auditory information. Studying working memory for spectral and spatial features of auditory stimuli, we have found gamma band activity over the putative auditory ventral and dorsal processing streams and over prefrontal areas. These results suggest that the maintenance of auditory information in working memory relies on the synchronisation of the firing activity in neural networks belonging to the putative auditory processing streams and prefrontal, supposedly executive regions. In upcoming studies we want to gain further information about the role of GBA in top-down processes like working memory and selective attention. We also want to study the alleged relationship between the BOLD signal in fMRI and GBA in MEG using the same experiments and the same subjects. The topographical convergence between our GBA findings in auditory working memory studies and the BOLD findings in fMRI studies using similar paradigms supports the results of recent studies which point to an existing relationship between gamma oscillations and changes in blood oxygen level as measured with fMRI. 
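[Editor's aside: the "statistical probability mapping" the abstract above refers to is, in broad strokes, a permutation-based test of condition differences corrected across sensors. The sketch below is a generic max-statistic permutation test on paired gamma-band power values; it is NOT the actual Lutzenberger et al. (2002) procedure, and the function name and data layout are illustrative assumptions.]

```python
import random

def probability_map(cond_a, cond_b, n_perm=1000, seed=0):
    """Generic max-statistic permutation test (illustrative sketch).

    cond_a, cond_b: lists of per-subject measurements, one list of
    gamma-band power values (one per MEG sensor) for each subject.
    Returns a family-wise-error-corrected p-value per sensor, using
    the permutation distribution of the maximum absolute mean of
    sign-flipped paired differences across all sensors.
    """
    rng = random.Random(seed)
    n_subj, n_sens = len(cond_a), len(cond_a[0])
    # Paired per-subject differences between the two conditions.
    diffs = [[a - b for a, b in zip(sa, sb)] for sa, sb in zip(cond_a, cond_b)]
    # Observed per-sensor effect: mean difference across subjects.
    observed = [sum(d[s] for d in diffs) / n_subj for s in range(n_sens)]
    # Null distribution: randomly flip each subject's sign, take the
    # maximum absolute mean over sensors (this is what corrects for
    # testing many sensors at once).
    max_null = []
    for _ in range(n_perm):
        signs = [rng.choice((-1, 1)) for _ in range(n_subj)]
        means = [sum(signs[i] * diffs[i][s] for i in range(n_subj)) / n_subj
                 for s in range(n_sens)]
        max_null.append(max(abs(m) for m in means))
    # Corrected p-value: fraction of permutations whose max statistic
    # meets or exceeds the observed per-sensor statistic.
    return [sum(m >= abs(o) for m in max_null) / n_perm for o in observed]
```

Because the correction uses the maximum over all sensors, a sensor is declared significant only if its effect exceeds what the strongest-looking sensor shows under the null, which controls false positives over the whole sensor array.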
From shovland at mindspring.com Sun May 15 22:11:08 2005 From: shovland at mindspring.com (Steve Hovland) Date: Sun, 15 May 2005 15:11:08 -0700 Subject: [Paleopsych] EPIGENETIC CONTROL OF GENE EXPRESSION Message-ID: <01C55960.52FC3500.shovland@mindspring.com> http://www.isrvma.org/article/56_2_8.htm Epigenetics refers to modifications in gene expression that are controlled by heritable but potentially reversible changes in DNA methylation and/or chromatin structure. DNA methylation is a post-replication process by which cytosine residues in CpG sequences are methylated, forming gene-specific methylation patterns. Housekeeping genes possess CpG-rich islands at the promoter region that are unmethylated in all cell types, whereas tissue-specific genes are methylated in all tissues except the tissue where the gene is expressed. These methylation patterns obviously correlate with gene expression. Further direct experiments proved that one of the most efficient gene-silencing mechanisms involves DNA methylation. Methylation patterns are established in the embryo by erasure of the gametic methylation patterns in the preimplantation embryo followed by global de novo methylation at the pregastrula stage, leaving CpG islands unmethylated. Finally, specific demethylation shapes the adult gene-specific methylation patterns. Once a methylation pattern is established, it is clonally inherited using a maintenance methylase that copies the methylation pattern on the parental DNA strand to the newly replicating strand. About 1% of the genes do not obey Mendel's genetic rules, being expressed monoallelically in a parent-of-origin fashion. This phenomenon was called genomic imprinting, and this subset of genes is imprinted by an epigenetic mechanism. The imprint must be established during gametogenesis, maintained during embryo development and erased in the primordial germ cells to set the stage for establishing a new imprint according to the gender of the embryo. 
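[Editor's aside: the abstract above leans on the notion of "CpG-rich islands." As general background (not something from the article itself), CpG richness is conventionally quantified by the observed/expected CpG ratio of Gardiner-Garden and Frommer; the function name, the ~0.6 cutoff, and the example sequences below are illustrative.]

```python
def cpg_observed_expected(seq: str) -> float:
    """Observed/expected CpG ratio for a DNA sequence:
    (count of CG dinucleotides * sequence length) / (count C * count G).
    Values near or above ~0.6, together with high G+C content, are the
    conventional mark of a candidate CpG island; bulk genomic DNA is
    CpG-depleted and scores well below that.
    """
    seq = seq.upper()
    c, g = seq.count("C"), seq.count("G")
    if c == 0 or g == 0:
        return 0.0  # no C or no G means no CpG dinucleotides possible
    # Count overlapping CG dinucleotides.
    cg = sum(1 for i in range(len(seq) - 1) if seq[i:i + 2] == "CG")
    return cg * len(seq) / (c * g)

# A CpG-dense, promoter-like stretch scores high ...
print(cpg_observed_expected("CGCGCGCGATCGCG"))
# ... while a sequence rich in C and G but with no CG dinucleotides scores 0.
print(cpg_observed_expected("CATGCATGCATGCATG"))
```

The ratio normalizes CpG counts by base composition, which is why it separates true islands (where CpG survives because it is unmethylated) from ordinary DNA, where methylated CpG is depleted by mutation over evolutionary time.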
From checker at panix.com Mon May 16 20:19:31 2005 From: checker at panix.com (Premise Checker) Date: Mon, 16 May 2005 16:19:31 -0400 (EDT) Subject: [Paleopsych] NYT: Life at the Top in America Isn't Just Better, It's Longer Message-ID: If I didn't send the first part yesterday, please let me know. Life at the Top in America Isn't Just Better, It's Longer http://www.nytimes.com/2005/05/16/national/class/HEALTH-FINAL.html [Second in a series of articles on class in America.] By JANNY SCOTT Jean G. Miele's heart attack happened on a sidewalk in Midtown Manhattan last May. He was walking back to work along Third Avenue with two colleagues after a several-hundred-dollar sushi lunch. There was the distant rumble of heartburn, the ominous tingle of perspiration. Then Mr. Miele, an architect, collapsed onto a concrete planter in a cold sweat. Will L. Wilson's heart attack came four days earlier in the bedroom of his brownstone in Bedford-Stuyvesant in Brooklyn. He had been regaling his fiancée with the details of an all-you-can-eat dinner he was beginning to regret. Mr. Wilson, a Consolidated Edison office worker, was feeling a little bloated. He flopped onto the bed. Then came a searing sensation, like a hot iron deep inside his chest. Ewa Rynczak Gora's first signs of trouble came in her rented room in the noisy shadow of the Brooklyn-Queens Expressway. It was the Fourth of July. Ms. Gora, a Polish-born housekeeper, was playing bridge. Suddenly she was sweating, stifling an urge to vomit. She told her husband not to call an ambulance; it would cost too much. Instead, she tried a home remedy: salt water, a double dose of hypertension pills and a glass of vodka. Architect, utility worker, maid: heart attack is the great leveler, and in those first fearful moments, three New Yorkers with little in common faced a single, common threat. But in the months that followed, their experiences diverged. 
Social class - that elusive combination of income, education, occupation and wealth - played a powerful role in Mr. Miele's, Mr. Wilson's and Ms. Gora's struggles to recover. Class informed everything from the circumstances of their heart attacks to the emergency care each received, the households they returned to and the jobs they hoped to resume. It shaped their understanding of their illness, the support they got from their families, their relationships with their doctors. It helped define their ability to change their lives and shaped their odds of getting better. Class is a potent force in health and longevity in the United States. The more education and income people have, the less likely they are to have and die of heart disease, strokes, diabetes and many types of cancer. Upper-middle-class Americans live longer and in better health than middle-class Americans, who live longer and better than those at the bottom. And the gaps are widening, say people who have researched social factors in health. As advances in medicine and disease prevention have increased life expectancy in the United States, the benefits have disproportionately gone to people with education, money, good jobs and connections. They are almost invariably in the best position to learn new information early, modify their behavior, take advantage of the latest treatments and have the cost covered by insurance. Many risk factors for chronic diseases are now more common among the less educated than the better educated. Smoking has dropped sharply among the better educated, but not among the less. Physical inactivity is more than twice as common among high school dropouts as among college graduates. Lower-income women are more likely than other women to be overweight, though the pattern among men may be the opposite. There may also be subtler differences. 
Some researchers now believe that the stress involved in so-called high-demand, low-control jobs further down the occupational scale is more harmful than the stress of professional jobs that come with greater autonomy and control. Others are studying the health impact of job insecurity, lack of support on the job, and employment that makes it difficult to balance work and family obligations. Then there is the issue of social networks and support, the differences in the knowledge, time and attention that a person's family and friends are in a position to offer. What is the effect of social isolation? Neighborhood differences have also been studied: How stressful is a neighborhood? Are there safe places to exercise? What are the health effects of discrimination? Heart attack is a window on the effects of class on health. The risk factors - smoking, poor diet, inactivity, obesity, hypertension, high cholesterol and stress - are all more common among the less educated and less affluent, the same group that research has shown is less likely to receive cardiopulmonary resuscitation, to get emergency room care or to adhere to lifestyle changes after heart attacks. "In the last 20 years, there have been enormous advances in rescuing patients with heart attack and in knowledge about how to prevent heart attack," said Ichiro Kawachi, a professor of social epidemiology at the Harvard School of Public Health. "It's like diffusion of innovation: whenever innovation comes along, the well-to-do are much quicker at adopting it. On the lower end, various disadvantages have piled onto the poor. Diet has gotten worse. There's a lot more work stress. People have less time, if they're poor, to devote to health maintenance behaviors when they are juggling two jobs. Mortality rates even among the poor are coming down, but the rate is not anywhere near as fast as for the well-to-do. So the gap has increased." Bruce G. 
Link, a professor of epidemiology and sociomedical sciences at Columbia University, said of the double-edged consequences of progress: "We're creating disparities. It's almost as if it's transforming health, which used to be like fate, into a commodity. Like the distribution of BMW's or goat cheese." The Best of Care Mr. Miele's advantage began with the people he was with on May 6, when the lining of his right coronary artery ruptured, cutting off the flow of blood to his 66-year-old heart. His two colleagues were knowledgeable enough to dismiss his request for a taxi and call an ambulance instead. And because he was in Midtown Manhattan, there were major medical centers nearby, all licensed to do the latest in emergency cardiac care. The emergency medical technician in the ambulance offered Mr. Miele (pronounced MEE-lee) a choice. He picked Tisch Hospital, part of New York University Medical Center, an academic center with relatively affluent patients, and passed up Bellevue, a city-run hospital with one of the busiest emergency rooms in New York. Within minutes, Mr. Miele was on a table in the cardiac catheterization laboratory, awaiting an angioplasty to unclog his artery - a procedure that many cardiologists say has become the gold standard in heart attack treatment. When he developed ventricular fibrillation, a heart rhythm abnormality that can be fatal within minutes, the problem was quickly fixed. Then Dr. James N. Slater, a 54-year-old cardiologist with some 25,000 cardiac catheterizations under his belt, threaded a catheter through a small incision in the top of Mr. Miele's right thigh and steered it toward his heart. Mr. Miele lay on the table, thinking about dying. By 3:52 p.m., less than two hours after Mr. Miele's first symptoms, his artery was reopened and Dr. Slater implanted a stent to keep it that way. Time is muscle, as cardiologists say. The damage to Mr. Miele's heart was minimal. Mr. Miele spent just two days in the hospital. 
His brother-in-law, a surgeon, suggested a few specialists. Mr. Miele's brother, Joel, chairman of the board of another hospital, asked his hospital's president to call N.Y.U. "Professional courtesy," Joel Miele explained later. "The bottom line is that someone from management would have called patient care and said, 'Look, would you make sure everything's O.K.?' " Things went less flawlessly for Mr. Wilson, a 53-year-old transportation coordinator for Con Ed. He imagined fleetingly that he was having a bad case of indigestion, though he had had a heart attack before. His fiancée insisted on calling an ambulance. Again, the emergency medical technician offered a choice of two nearby hospitals - neither of which had state permission to do an angioplasty, the procedure Mr. Miele received. Mr. Wilson chose the Brooklyn Hospital Center over Woodhull Medical and Mental Health Center, the city-run hospital that serves three of Brooklyn's poorest neighborhoods. At Brooklyn Hospital, he was given a drug to break up the clot blocking an artery to his heart. It worked at first, said Narinder P. Bhalla, the hospital's chief of cardiology, but the clot re-formed. So Dr. Bhalla had Mr. Wilson taken to the Weill Cornell Center of NewYork-Presbyterian Hospital in Manhattan the next morning. There, Dr. Bhalla performed an angioplasty and implanted a stent. Asked later whether Mr. Wilson would have been better off if he had had his heart attack elsewhere, Dr. Bhalla said the most important issue in heart attack treatment was getting the patient to a hospital quickly. But he added, "In his case, yes, he would have been better off had he been to a hospital that was doing angioplasty." Mr. Wilson spent five days in the hospital before heading home on many of the same high-priced drugs that Mr. Miele would be taking and under similar instructions to change his diet and exercise regularly. 
After his first heart attack in 2000, he quit smoking; but once he was feeling better, he had stopped taking several medications, drifted back to red meat and fried foods, and let his exercise program slip. This time would be different, he vowed: "I don't think I'll survive another one." Ms. Gora's experience was the rockiest. First, she hesitated before allowing her husband to call an ambulance; she hoped her symptoms would go away. He finally insisted; but when the ambulance arrived, she resisted leaving. The emergency medical technician had to talk her into going. She was given no choice of hospitals; she was simply taken to Woodhull, the city hospital Mr. Wilson had rejected. Woodhull was busy when Ms. Gora arrived around 10:30 p.m. A triage nurse found her condition stable and classified her as "high priority." Two hours later, a physician assistant and an attending doctor examined her again and found her complaining of chest pain, shortness of breath and heart palpitations. Over the next few hours, tests confirmed she was having a heart attack. She was given drugs to stop her blood from clotting and to control her blood pressure, treatment that Woodhull officials say is standard for the type of heart attack she was having. The heart attack passed. The next day, Ms. Gora was transferred to Bellevue, the hospital Mr. Miele had turned down, for an angiogram to assess her risk of a second heart attack. But Ms. Gora, who was 59 at the time, came down with a fever at Bellevue, so the angiogram had to be canceled. She remained at Bellevue for two weeks, being treated for an infection. Finally, she was sent home. No angiogram was ever done. Comforts and Risks Mr. Miele is a member of New York City's upper middle class. The son of an architect and an artist, he worked his way through college, driving an ice cream truck and upholstering theater seats. 
He spent two years in the military and then joined his father's firm, where he built a practice as not only an architect but also an arbitrator and an expert witness, developing real estate on the side. Mr. Miele is the kind of person who makes things happen. He bought a $21,000 house in the Park Slope section of Brooklyn, sold it about 15 years later for $285,000 and used the money to build his current house next door, worth over $2 million. In Brookhaven, on Long Island, he took a derelict house on a single acre, annexed several adjoining lots and created what is now a four-acre, three-house compound with an undulating lawn and a 15,000-square-foot greenhouse he uses as a workshop for his collection of vintage Jaguars. Mr. Miele's architecture partners occasionally joked that he was not in the business for the money, which to some extent was true. He had figured out how to live like a millionaire, he liked to say, even before he became one. He had worked four-day weeks for the last 20 years, spending long weekends with his family, sailing or iceboating on Bellport Bay and rebuilding cars. Mr. Miele had never thought of himself as a candidate for a heart attack - even though both his parents had died of heart disease; even though his brother had had arteries unclogged; even though he himself was on hypertension medication, his cholesterol levels bordered on high and his doctor had been suggesting he lose weight. He was a passionate chef who put great store in the healthfulness of fresh ingredients from the Mieles' vegetable garden or the greengrocers in Park Slope. His breakfasts may have been a cardiologist's nightmare - eggs, sausage, bacon, pastina with a poached egg - but he considered his marinara sauce to be healthy perfection: just garlic, oil, tomatoes, salt and pepper. He figured he had something else working in his favor: he was happy. He adored his second wife, Lori, 23 years younger, and their 6-year-old daughter, Emma. 
He lived within blocks of his two sisters and two of his three grown children from his first marriage. The house regularly overflowed with guests, including Mr. Miele's former wife and her husband. He seemed to know half the people of Park Slope. "I walk down the street and I feel good about it every day," Mr. Miele, a gregarious figure with twinkling blue eyes and a taste for worn T-shirts and jeans, said of his neighborhood. "And, yes, that gives me a feeling of well-being." His approach to his health was utilitarian. When body parts broke, he got them fixed so he could keep doing what he liked to do. So he had had disc surgery, rotator cuff surgery, surgery for a carpal tunnel problem. But he was also not above an occasional bit of neglect. In March 2004, his doctor suggested a stress test after Mr. Miele complained of shortness of breath. On May 6, the prescription was still hanging on the kitchen cabinet door. An important link in the safety net that caught Mr. Miele was his wife, a former executive at a sweater manufacturing company who had stopped work to raise Emma but managed the Mieles' real estate as well. While Mr. Miele was still in the hospital, she was on the Internet, Googling stents. She scheduled his medical appointments. She got his prescriptions filled. Leaving him at home one afternoon, she taped his cardiologist's business card to the couch where he was sitting. "Call Dr. Hayes and let him know you're coughing," she said, her fingertips on his shoulder. Thirty minutes later, she called home to check. She prodded Mr. Miele, gently, to cut his weekly egg consumption to two, from seven. She found fresh whole wheat pasta and cooked it with turkey sausage and broccoli rabe. She knew her way around nutrition labels. Ms. Miele took on the burden of dealing with the hospital and insurance companies. She accompanied Mr. Miele to his doctor's appointments and retained pharmaceutical dosages in her head. 
"I can just leave and she can give you all the answers to all the questions," Mr. Miele said to his cardiologist, Dr. Richard M. Hayes, one day. "O.K., why don't you just leave?" Dr. Hayes said back. "Can she also examine you?" With his wife's support, Mr. Miele set out to lose 30 pounds. His pasta consumption plunged to a plate a week from two a day. It was not hard to eat healthfully from the Mieles' kitchens. Even the "junk drawer" in Park Slope was stocked with things like banana chips and sugared almonds. Lunches in Brookhaven went straight from garden to table: tomatoes with basil, eggplant, corn, zucchini flower tempura. At Dr. Hayes's suggestion, Mr. Miele enrolled in a three-month monitored exercise program for heart disease patients, called cardiac rehab, which has been shown to reduce the mortality rate among heart patients by 20 percent. Mr. Miele's insurance covered the cost. He even managed to minimize the inconvenience, finding a class 10 minutes from his country house. He had the luxury of not having to rush back to work. By early June, he had decided he would take the summer off, and maybe cut back his work week when he returned to the firm. "You know, the more I think about it, the less I like the idea of going back to work," he said. "I don't see any real advantage. I mean, there's money. But you've got to take the money out of the equation." So he put a new top on his 1964 Corvair. He played host to a large family reunion, replaced the heat exchanger in his boat and transformed the ramshackle greenhouse into an elaborate workshop. His weight dropped to 189 pounds, from 211. He had doubled the intensity of his workouts. His blood pressure was lower than ever. Mr. Miele saw Dr. Hayes only twice in six months, for routine follow-ups. He had been known to walk out of doctors' offices if he was not seen within 20 minutes, but Dr. Hayes did not keep him waiting. The Mieles were swept into the examining room at the appointed hour. 
Buoyed by the evidence of Mr. Miele's recovery, they would head out to lunch in downtown Manhattan. Those afternoons had the feel of impromptu dates. "My wife tells me that I'm doing 14-hour days," Mr. Miele mused one afternoon, slicing cold chicken and piling it with fresh tomatoes on toast. "She said, 'You're doing better now than you did 10 years ago.' And I said, 'I haven't had sex in a week.' And she said, 'Well?' " Just one unpleasant thing happened. Mr. Miele's partners informed him in late July that they wanted him to retire. It caught him off guard, and it hurt. He countered by taking the position that he was officially disabled and therefore entitled to be paid through May 5, 2005. "I mean, the guy has a heart attack," he said later. "So you get him while he's down?" Lukewarm Efforts to Reform Will Wilson fits squarely in the city's middle class. His parents had been sharecroppers who moved north and became a machinist and a nurse. He grew up in Bedford-Stuyvesant and had spent 34 years at Con Ed. He had an income of $73,000, five weeks' vacation, health benefits, a house worth $450,000 and plans to retire to North Carolina at 55. Mr. Wilson, too, had imagined becoming an architect. But there had been no money for college, so he found a job as a utility worker. By age 22, he had two children. He considered going back to school, with the company's support, to study engineering. But doing shift work, and with small children, he never found the time. For years he was a high-voltage cable splicer, a job he loved because it meant working outdoors with plenty of freedom and overtime pay. But on a snowy night in the early 1980's, a car skidded into a stanchion, which hit him in the back. A doctor suggested that Mr. Wilson learn to live with the pain instead of having disc surgery, as Mr. Miele had done. So Mr. 
Wilson became a laboratory technician, then a transportation coordinator, working in a cubicle in a low-slung building in Astoria, Queens, overseeing fuel deliveries for the company's fleet. Some people might think of the work as tedious, Mr. Wilson said, "but it keeps you busy." "Sometimes you look back over your past life experiences and you realize that if you would have done something different, you would have been someplace else," he said. "I don't dwell on it too much because I'm not in a negative position. But you do say, 'Well, dag, man, I should have done this or that.' " Mr. Wilson's health was not bad, but far from perfect. He had quit drinking and smoking, but had high cholesterol, hypertension and diabetes. He was slim, 5-foot-9 and just under 170 pounds. He traced his first heart attack to his smoking, his diet and the stress from a grueling divorce. His earlier efforts to reform his eating habits were half-hearted. Once he felt better, he stopped taking his cholesterol and hypertension drugs. When his cardiologist moved and referred Mr. Wilson to another doctor, he was annoyed by what he considered the rudeness of the office staff. Instead of demanding courtesy or finding another specialist, Mr. Wilson stopped going. By the time Dr. Bhalla encountered Mr. Wilson at Brooklyn Hospital, there was damage to all three main areas of his heart. Dr. Bhalla prescribed a half-dozen drugs to lower Mr. Wilson's cholesterol, prevent clotting and control his blood pressure. "He has to behave himself," Dr. Bhalla said. "He needs to be more compliant with his medications. He has to really go on a diet, which is grains, no red meat, no fat. No fat at all." Mr. Wilson had grown up eating his mother's fried chicken, pork chops and macaroni and cheese. He confronted those same foods at holiday parties and big events. There were doughnut shops and fried chicken places in his neighborhood; but Mr. 
Wilson's fiancée, Melvina Murrell Green, found it hard to find fresh produce and good fish. "People in my circle, they don't look at food as, you know, too much fat in it," Mr. Wilson said. "I don't think it's going to change. It's custom." At Red Lobster after his second heart attack, Ms. Green would order chicken and Mr. Wilson would have salmon - plus a side order of fried shrimp. "He's still having a problem with the fried seafood," Ms. Green reported sympathetically. Whole grains remained mysterious. "That we've got to work on," she said. "Well, we recently bought a bag of grain something. I'm not used to that. We try to put it on the cereal. It's O.K." In August, Ms. Green's blood pressure shot up. The culprit turned out to be a turkey chili recipe that she and Mr. Wilson had discovered: every ingredient except the turkey came from a can. She was shocked when her doctor pointed out the salt content. The Con Ed cafeteria, too, was problematic. So Mr. Wilson began driving to the Best Yet Market in Astoria at lunch to troll the salad bar. Dr. Bhalla had suggested that Mr. Wilson walk for exercise. There was little open space in the neighborhood, so Mr. Wilson and Ms. Green often drove just to go for a stroll. In mid-October he entered a cardiac rehab program like Mr. Miele's, only less convenient. He would drive into Manhattan after work, during the afternoon rush, three days a week. He would hunt for on-street parking or pay too much for a space in a lot. Then a stranger threatened to damage Mr. Wilson's car in a confrontation over a free spot, so Mr. Wilson switched to the subway. For a time, he considered applying for permanent disability. But Con Ed allowed him to return to work "on restrictions," so he decided to go back, with plans to retire in a year and a half. The week before he went back, he and Ms. Green took a seven-day cruise to Nassau. It was a revelation. "Sort of like helped me to see there's a lot more things to do in life," he said. 
"I think a lot of people deny themselves certain things in life, in terms of putting things off, 'I'll do it later.' Later may never come." Ignoring the Risks Ms. Gora is a member of the working class. A bus driver's daughter, she arrived in New York City from Krakow in the early 1990's, leaving behind a grown son. She worked as a housekeeper in a residence for the elderly in Manhattan, making beds and cleaning toilets. She said her annual income was $21,000 to $23,000 a year, with health insurance through her union. For $365 a month, she rented a room in a friend's Brooklyn apartment on a street lined with aluminum-sided row houses and American flags. She used the friend's bathroom and kitchen. She was in her seventh year on a waiting list for a subsidized one-bedroom apartment in the adjacent Williamsburg neighborhood. In the meantime, she had acquired a roommate: Edward Gora, an asbestos-removal worker newly arrived from Poland and 10 years her junior, whom she met and married in 2003. Like Mr. Miele, Ms. Gora had never imagined she was at risk of a heart attack, though she was overweight, hypertensive and a 30-year smoker, and heart attacks had killed her father and sister. She had numerous health problems, which she addressed selectively, getting treated for back pain, ulcers and so on until the treatment became too expensive or inconvenient, or her insurance declined to pay. "My doctor said, 'Ewa, be careful with cholesterol,' " recalled Ms. Gora, whose vestigial Old World sense of propriety had her dressed in heels and makeup for every visit to Bellevue. "When she said that, I think nothing; I don't care. Because I don't believe this touch me. Or I think she have to say like that because she doctor. Like cigarettes: she doctor, she always told me to stop. And when I got out of the office, lights up." Ms. Gora had a weakness for the peak of the food pyramid. 
She grew up on her mother's fried pork chops, spare ribs and meatballs - all cooked with lard - and had become a pizza, hamburger and French fry enthusiast in the United States. Fast food was not only tasty but also affordable. "I eat terrible," she reported cheerily from her bed at Bellevue. "I like grease food and fast food. And cigarettes." She loved the feeling of a cigarette between her fingers, the rhythmic rise and fall of it to her lips. Using her home computer, she had figured out how to buy Marlboros online for just $2.49 a pack. Her husband smoked, her friends all smoked. Everyone she knew seemed to love tobacco and steak. Her life was physically demanding. She would rise at 6 a.m. to catch a bus to the subway, change trains three times and arrive at work by 8 a.m. She would make 25 to 30 beds, vacuum, cart out trash. Yet she says she loved her life. "I think America is El Dorado," she said. "Because in Poland now is terrible; very little bit money. Here, I don't have a lot of, but I live normal. I have enough, not for rich life but for normal life." The precise nature of Ms. Gora's illness was far from clear to her even after two weeks in Bellevue. In her first weeks home, she remained unconvinced that she had had a heart attack. She arrived at the Bellevue cardiology clinic for her first follow-up appointment imagining that whatever procedure had earlier been canceled would then be done, that it would unblock whatever was blocked, and that she would be allowed to return to work. Jad Swingle, a doctor completing his specialty training in cardiology, led Ms. Gora through the crowded waiting room and into an examining room. She clutched a slip of paper with words she had translated from Polish using her pocket dictionary: "dizzy," "groin," "perspiration." Dr. Swingle asked her questions, speaking slowly. Do you ever get chest discomfort? Do you get short of breath when you walk? 
She finally interrupted: "Doctor, I don't know what I have, why I was in hospital. What is this heart attack? I don't know why I have this. What I have to do to not repeat this?" No one had explained these things, Ms. Gora believed. Or, she wondered, had she not understood? She perched on the examining table, ankles crossed, reduced by the setting to an oversize, obedient child. Dr. Swingle examined her, then said he would answer her questions "in a way you'll understand." He set about explaining heart attacks: the narrowed artery, the blockage, the partial muscle death. Ms. Gora looked startled. "My muscle is dead?" she asked. Dr. Swingle nodded. What about the procedure that was never done? "I'm not sure an angiogram would help you," he said. She needed to stop smoking, take her medications, walk for exercise, come back in a month. "My muscle is still dead?" she asked again, incredulous. "Once it's dead, it's dead," Dr. Swingle said. "There's no bringing it back to life." Outside, Ms. Gora tottered toward the subway, 14 blocks away, on pink high-heeled sandals in 89-degree heat. "My thinking is black," she said, uncharacteristically glum. "Now I worry. You know, you have hand? Now I have no finger." If Mr. Miele's encounters with the health care profession in the first months after his heart attack were occasional and efficient, Ms. Gora's were the opposite. Whereas he saw his cardiologist just twice, Ms. Gora, burdened by complications, saw hers a half-dozen times. Meanwhile, her heart attack seemed to have shaken loose a host of other problems. A growth on her adrenal gland had turned up on a Bellevue CAT scan, prompting a visit to an endocrinologist. An old knee problem flared up; an orthopedist recommended surgery. An alarming purple rash on her leg led to a trip to a dermatologist. Because of the heart attack, she had been taken off hormone replacement therapy and was constantly sweating. She tore open a toe stepping into a pothole and needed stitches. 
Without money or connections, moderate tasks consumed entire days. One cardiology appointment coincided with a downpour that paralyzed the city. Ms. Gora was supposed to be at the hospital laboratory at 8 a.m. to have blood drawn and back at the clinic at 1 p.m. In between, she wanted to meet with her boss about her disability payments. She had a 4 p.m. appointment in Brooklyn for her knee. So at 7 a.m., she hobbled through the rain to the bus to the subway to another bus to Bellevue. She was waiting outside the laboratory when it opened. Then she took a bus uptown in jammed traffic, changed buses, descended into the subway at Grand Central Terminal, rode to Times Square, found service suspended because of flooding, climbed the stairs to 42nd Street, maneuvered through angry crowds hunting for buses and found another subway line. She reached her workplace an hour and a half after leaving Bellevue; if she had had the money she could have made the trip in 20 minutes by cab. Her boss was not there. So she returned to Bellevue and waited until 2:35 p.m. for her 1 o'clock appointment. As always, she asked Dr. Swingle to let her return to work. When he insisted she have a stress test first, a receptionist gave her the first available appointment - seven weeks away. Meanwhile, Ms. Gora was trying to stop smoking. She had quit in the hospital, then returned home to a husband and a neighbor who both smoked. To be helpful, Mr. Gora smoked in the shared kitchen next door. He was gone most of the day, working double shifts. Alone and bored, Ms. Gora started smoking again, then called Bellevue's free smoking cessation program and enrolled. For the next few months, she trekked regularly to "the smoking department" at Bellevue. A counselor supplied her with nicotine patches and advice, not always easy for her to follow: stay out of the house; stay busy; avoid stress; satisfy oral cravings with, say, candy. The counselor suggested a support group, but Ms. 
Gora was too ashamed of her English to join. Even so, over time her tobacco craving waned. There was just one hitch: Ms. Gora was gaining weight. To avoid smoking, she was eating. Her work had been her exercise and now she could not work. Dr. Swingle suggested cardiac rehab, leaving it up to Ms. Gora to find a program and arrange it. Ms. Gora let it slide. As for her diet, she had vowed to stick to chicken, turkey, lettuce, tomatoes and low-fat cottage cheese. But she got tired of that. She began sneaking cookies when no one was looking - and no one was. She cooked separate meals for Mr. Gora, who was not inclined to change his eating habits. She made him meatballs with sauce, liver, soup from spare ribs. Then one day in mid-October, she helped herself to one of his fried pork chops, and was soon eating the same meals he was. As an alternative to eating cake while watching television, she turned to pistachios, and then ate a pound in a single sitting. Cruising the 99 Cent Wonder store in Williamsburg, where the freezers were filled with products like Budget Gourmet Rigatoni with Cream Sauce, she pulled down a small package of pistachios: two and a half servings, 13 grams of fat per serving. "I can eat five of these," she confessed, ignoring the nutrition label. Not servings. Bags. Heading home after a trying afternoon in the office of the apartment complex in Williamsburg, where the long-awaited apartment seemed perpetually just out of reach, Ms. Gora slipped into a bakery and emerged with a doughnut, her first since her heart attack. She found a park bench where she had once been accustomed to reading and smoking. Working her way through the doughnut, confectioners' sugar snowing onto her chest, she said ruefully, "I miss my cigarette." She wanted to return to work. She felt uncomfortable depending on Mr. Gora for money. She worried that she was becoming indolent and losing her English. 
Her disability payments, for which she needed a doctor's letter every month, came to just half her $331 weekly salary. Once, she spent hours searching for the right person at Bellevue to give her a letter, only to be told to come back in two days. The co-payments on her prescriptions came to about $80 each month. Unnerving computer printouts from the pharmacist began arriving: "Maximum benefit reached." She switched to her husband's health insurance plan. Twice, Bellevue sent bills for impossibly large amounts of money for services her insurance was supposed to cover. Both times she spent hours traveling into Manhattan to the hospital's business office to ask why she had been billed. Both times a clerk listened, made a phone call, said the bill was a mistake and told her to ignore it. When the stress test was finally done, Dr. Swingle said the results showed she was not well enough to return to full-time work. He gave her permission for part-time work, but her boss said it was out of the question. By November, her weight had climbed to 197 pounds from 185 in July. Her cholesterol levels were stubbornly high and her blood pressure was up, despite drugs for both. In desperation, Ms. Gora embarked upon a curious, heart-unhealthy diet clipped from a Polish-language newspaper. Day 1: two hardboiled eggs, one steak, one tomato, spinach, lettuce with lemon and olive oil. Another day: coffee, grated carrots, cottage cheese and three containers of yogurt. Yet another: just steak. Ms. Gora decided not to tell Dr. Swingle. "I worry if he don't let me, I not lose the weight," she said. Uneven Recoveries By spring, Mr. Miele's heart attack, remarkably, had left him better off. He had lost 34 pounds and was exercising five times a week and taking subway stairs two at a time. He had retired from his firm on the terms he wanted. He was working from home, billing $225 an hour. More money in less time, he said. His blood pressure and cholesterol were low. "You're doing great," Dr. 
Hayes had said. "You're doing better than 99 percent of my patients." Mr. Wilson's heart attack had been a setback. His heart function remained impaired, though improved somewhat since May. At one recent checkup, his blood pressure and his weight had been a little high. He still enjoyed fried shrimp on occasion but he took his medications diligently. He graduated from cardiac rehab with plans to join a health club with a pool. And he was looking forward to retirement. Ms. Gora's life and health were increasingly complex. With Dr. Swingle's reluctant approval, she returned to work in November. She had moved into the apartment in Williamsburg, which gave her a kitchen and a bathroom for the first time in seven years. But she began receiving menacing phone calls from a collection agency about an old bill her health insurance had not covered. Her husband, with double pneumonia, was out of work for weeks. She had her long-awaited knee surgery in January. But it left her temporarily unable to walk. Her weight hit 200 pounds. When the diet failed, she considered another consisting largely of fruit and vegetables sprinkled with an herbal powder. Her blood pressure and cholesterol remained ominously high. She had been warned that she was now a borderline diabetic. "You're becoming a full-time patient, aren't you?" Dr. Swingle remarked. From checker at panix.com Mon May 16 20:19:42 2005 From: checker at panix.com (Premise Checker) Date: Mon, 16 May 2005 16:19:42 -0400 (EDT) Subject: [Paleopsych] NYT: Class Matters: A Bibliography Message-ID: Class Matters: A Bibliography http://www.nytimes.com/ref/national/class/class-bibliography.html Following is a selection of books that were consulted by reporters and editors working on this series.
Bourdieu, Pierre [46]Distinction: A Social Critique of the Judgement of Taste (1984, Harvard University Press)
Bowen, William G., et al [47]Equity and Excellence in American Higher Education (Thomas Jefferson Foundation Distinguished Lecture Series)
Bowles, Samuel, et al (Editors) [48]Unequal Chances: Family Background and Economic Success (2005, Princeton University Press)
Conley, Dalton [49]The Pecking Order: Which Siblings Succeed and Why (2004, Pantheon)
Franklin, Benjamin [50]The Autobiography of Benjamin Franklin (Dover Publications)
Frank, Robert; Cook, Philip J. [51]The Winner-Take-All Society: Why the Few at the Top Get So Much More Than the Rest of Us (1996, Penguin)
Fussell, Paul [52]Class: A Guide Through the American Status System (1983, Touchstone Books)
Kingston, Paul W. [53]The Classless Society (2000, Stanford University Press)
Lamont, Michele [54]Money, Morals, & Manners: The Culture of the French and the American Upper-Middle Class (1992, The University of Chicago Press)
Lareau, Annette [55]Unequal Childhoods: Class, Race, and Family Life (2003, University of California Press)
Neckerman, Kathryn M. (Editor) [56]Social Inequality (2004, Russell Sage Foundation) [Chapter 2, on family structure, and Chapter 10, on working hours, are especially relevant.]
Niebuhr, H. Richard [57]The Social Sources of Denominationalism (1929)
Scharnhorst, Gary, with Jack Bales [58]The Lost Life of Horatio Alger Jr. (1992, Indiana University Press)
Wood, Gordon S. [59]The Americanization of Benjamin Franklin (2004, Penguin Press)
References
46. http://www.hup.harvard.edu/catalog/BOUDIX.html
47. http://www.amazon.com/exec/obidos/tg/detail/-/0813923506/103-8814322-4940642?v=glance
48. http://www.pupress.princeton.edu/titles/7838.html
49. http://www.randomhouse.com/pantheon/catalog/display.pperl?isbn=0375421742
50. http://store.yahoo.com/doverpublications/0486290735.html
51. http://us.penguingroup.com/nf/Book/BookDisplay/0,,0_0140259953,00.html
52.
http://www.amazon.com/exec/obidos/tg/detail/-/0671792253/qid=1116038542/sr=1-1/ref=sr_1_1/103-8814322-4940642?v=glance&s=books
53. http://www.sup.org/book.cgi?book_id=3804%203806%20
54. http://www.press.uchicago.edu/cgi-bin/hfs.cgi/00/7899.ctl
55. http://www.ucpress.edu/books/pages/9987.html
56. http://www.russellsage.org/publications/books/0-87154-620-5
57. http://www.hds.harvard.edu/library/bms/bms00630.html
58. http://www.amazon.com/exec/obidos/tg/detail/-/0253206480/qid=1116040132/sr=1-1/ref=sr_1_1/103-8814322-4940642?v=glance&s=books
59. http://us.penguingroup.com/nf/Book/BookDisplay/0,,0_159420019X,00.html From checker at panix.com Mon May 16 20:19:54 2005 From: checker at panix.com (Premise Checker) Date: Mon, 16 May 2005 16:19:54 -0400 (EDT) Subject: [Paleopsych] NYT: (Shrouding) Why That Doggie in the Window Costs a Lot More Than You Think Message-ID: Why That Doggie in the Window Costs a Lot More Than You Think http://www.nytimes.com/2005/05/16/business/16behavior.html?pagewanted=print By DAVID LEONHARDT THE entry in the Zagat guide made the restaurant sound affordable. So did the menu posted near the door. Entrees were about $15. Even with wine, spending less than $80 for two people seemed like no problem. But once you are seated, the waiter comes by and asks if he may bring you some San Pellegrino sparkling water. Whenever your glass is half-empty, he fills it, and he always replaces a finished bottle with a new one, at $10 apiece. By the time dinner is over, the bill has crept toward $130, all because you did not want to seem too chintzy to order a little fizzy water. Welcome to the à la carte economy, where consumers seem to face new decisions every few minutes and businesses know precisely when their customers are most vulnerable. The practice of luring customers with a low list price is an old one - it's how television pitchmen get you to buy the Veg-O-Matic - but recently it has come to dominate many areas of the modern economy.
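The restaurant anecdote above can be tallied in a few lines. The expected bill, final bill and bottle price are the article's own figures; the bottle count is simply the gap they imply, not something the article states:

```python
# Tally of the dinner described above, using the article's figures.
expected_bill = 80.00      # what the menu math suggested for two
final_bill = 130.00        # what the evening actually cost
price_per_bottle = 10.00   # San Pellegrino, per bottle

shrouded_extras = final_bill - expected_bill          # spending never planned for
implied_bottles = shrouded_extras / price_per_bottle  # bottles that gap implies

print(f"unplanned spending: ${shrouded_extras:.0f}")       # $50
print(f"implied bottles of water: {implied_bottles:.0f}")  # 5
```

In other words, the entire $50 overrun can be accounted for by the one item that never appeared on the posted menu.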
The value of being a smart consumer, like the cost of being a naïve one, has risen sharply. A room at the Marriott may cost only $150, but the final bill looks a lot different once you have handed your car keys to the valet, used the high-speed Internet connection and eaten the $11 oatmeal. Late fees on credit cards have jumped. So have many mutual-fund fees. Two young economists, Xavier Gabaix and David I. Laibson, have come up with a name for this practice: shrouding. Once you start to think about it, you notice shrouding almost everywhere, like the features that add to the price of a new car, the warranty from [2]Best Buy, the burgers at the ballpark and the surcharge Ticketmaster puts on concert tickets. The price you hear is not the price you end up paying. At first blush, shrouding sounds like just a fancy word for rip-off. Whether they are selling marked-up Pellegrino or tacking on "convenience charges," businesses seem to be the ones benefiting from the hidden information. But that is not always the case, which is what makes the concept so interesting. There really are two types of consumers when it comes to shrouding: one who takes advantage of the murkiness and another who gets taken advantage of. To make matters even worse, the less savvy group ends up subsidizing the more sophisticated one. When the economy was simpler - when all phone service came from Ma Bell, for instance - there was far less room or need for shrouding. Today, with SBC, Cingular, [3]Cablevision and Vonage all trying to woo callers, their prices must grab people's attention. The profit can come after the customer is in the fold. "It's easier to customize now and create lots of different options," Mr. Gabaix, a professor at the Massachusetts Institute of Technology, said. Credit cards are the most widely understood case. Few people can tell you the late-fee penalty on their credit cards, but those fees, which have risen sharply, make up a large portion of what many users end up paying.
For people who pay their entire balance every month, though, credit cards are a bizarrely good deal. A free loan, a more convenient way to shop and perhaps some frequent-flier miles, too. Who's getting ripped off there? It is possible only because people who run up interest charges and late fees keep the banks in business. Mr. Laibson, a professor at Harvard, has his own favorite example of shrouding: [4]Hewlett-Packard's inkjet printers. The advertised price is as low as $35, but it includes no ink cartridges or paper, two necessary items for printing. "Here's my challenge to you," he said. "Go to the H-P Web site and pretend to be an individual purchaser of a computer, not a business. Go to the deskjet printers. See if you can find the cost of printing on a per-page basis. That is the most important information." All I could find - and it took many misdirected clicks to get there - was an estimate of the number of pages each cartridge would print. A little arithmetic led to a per-page cost of about 10 cents. Print 10 pages a day, and you have already doubled the cost of that apparent bargain, the $35 printer, in little more than a month. John Solomon, a vice president of supplies marketing at Hewlett-Packard, said the company wanted the printing industry to adopt a standard for calculating per-page cost. Without one, other companies could exaggerate the size of a cartridge's print load, making Hewlett-Packard's per-page cost look unfairly high, he said. "We feel it is important, but we have hidden it. We have put it in a place that only people who really care will find it," Mr. Solomon said. "Because it's not really good information if there's not a level playing field." If shrouding were really just a form of gouging, you would also expect new companies to enter the market and sell items for a lower, but still profitable, price. And sometimes that does happen. 
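The "little arithmetic" behind the per-page figure above can be sketched directly. The cartridge price and page yield below are assumed, illustrative values chosen only to reproduce the roughly 10-cent estimate; they are not Hewlett-Packard's published numbers:

```python
# Back-of-the-envelope printing cost, per the arithmetic described above.
# cartridge_price and pages_per_cartridge are assumptions, not H-P figures.
printer_price = 35.00        # advertised printer price, dollars
cartridge_price = 30.00      # assumed replacement-cartridge price
pages_per_cartridge = 300    # assumed cartridge yield

cost_per_page = cartridge_price / pages_per_cartridge
pages_per_day = 10
# Days of printing until ink spending alone matches the printer's price:
days_to_match = printer_price / (cost_per_page * pages_per_day)

print(f"cost per page: ${cost_per_page:.2f}")               # $0.10
print(f"days to match printer price: {days_to_match:.0f}")  # 35
```

At ten pages a day, ink spending equals the $35 list price in about five weeks, which is what makes the advertised price such an effective shroud.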
[5]Netflix has helped cause Blockbuster's current corporate traumas by attracting people who do not want to worry about late fees, an often overlooked part of the price. "Blockbuster has made a lot of money off of people's busy lifestyles," said Leslie J. Kilgore, Netflix's chief marketing officer. More often, though, there is little chance for a competitor to make money while eliminating shrouding. Even in the printer market, some people understand the game. They print their large jobs at the office or switch to a lower-quality printer setting to use less ink on each page. These are the smart, unprofitable customers. They are also the ones who would be attracted to a competitor that was being more upfront about prices than Hewlett-Packard. That does not make for a good business plan. So companies have little choice but to play hide-and-seek with their prices. When enough customers catch on, businesses shroud in a new way. As people have reduced their outstanding credit card balances, banks have increased the penalties for making a single late payment. As liquor sales have dropped, restaurants have found a new beverage to mark up: may we pour you some more Pellegrino? From checker at panix.com Mon May 16 20:20:06 2005 From: checker at panix.com (Premise Checker) Date: Mon, 16 May 2005 16:20:06 -0400 (EDT) Subject: [Paleopsych] Gordon Bigelow: Let there be markets: the evangelical roots of economics Message-ID: Gordon Bigelow: Let there be markets: the evangelical roots of economics Harper's Magazine, May 2005 v310 i1860 p33(6). Economics, as channeled by its popular avatars in media and politics, is the cosmology and the theodicy of our contemporary culture. More than religion itself, more than literature, more than cable television, it is economics that offers the dominant creation narrative of our society, depicting the relation of each of us to the universe we inhabit, the relation of human beings to God. And the story it tells is a marvelous one. 
In it an enormous multitude of strangers, all individuals, all striving alone, are nevertheless all bound together in a beautiful and natural pattern of existence: the market. This understanding of markets--not as artifacts of human civilization but as phenomena of nature--now serves as the unquestioned foundation of nearly all political and social debate. As mergers among media companies began to create monopolies on public information, ownership limits for these companies were not tightened but relaxed, because "the market" would provide its own natural limits to growth. When corporate accounting standards needed adjustment in the 1990s, such measures were cast aside because they would interfere with "market forces." Social Security may soon fall to the same inexorable argument. The problem is that the story told by economics simply does not conform to reality. This can be seen clearly enough in the recent, high-profile examples of the failure of free-market thinking--how media giants have continued to grow, or how loose accounting regulations have destroyed countless millions in personal wealth. But mainstream economics also fails at a more fundamental level, in the way that it models basic human behavior. The core assumption of standard economics is that humans are fundamentally individual rather than social animals. The theory holds that all economic choices are acts of authentic, unmediated selfhood, rational statements reflecting who we are and what we want in life. But in reality even our purely "economic" choices are not made on the basis of pure autonomous selfhood; all of our choices are born out of layers of experience in contact with other people. What is entirely missing from the economic view of modern life is an understanding of the social world. This was precisely the diagnosis made five years ago by a group of French graduate students in economics, who published their dissent in an open letter that soon made minor headlines around the world. 
In the letter the students declared that the economic theory taught in their courses was hopelessly out of touch, absorbed in its own private model of reality. They wrote: We wish to escape from imaginary worlds! Most of us have chosen to study economics so as to acquire a deep understanding of the economic phenomena with which the citizens of today are confronted. But the teaching that is offered ... does not generally answer this expectation.... [T]his gap in the teaching, this disregard for concrete realities, poses an enormous problem for those who would like to render themselves useful to economic and social actors. The discipline of economics was ill, the letter claimed, pathologically distant from the problems of real markets and real people. The students who offered this diagnosis in 2000 were from the most prestigious rank of the French university system, the Grandes Ecoles, and for this reason their argument could not be easily dismissed. Critics who accuse economists of embracing useless theory usually find themselves accused of stupidity: of being unable to understand the elegant mathematics that proves the theory works. But the mathematical credentials of these students were impeccable. The best of a rising generation were revolting against their training, and because of this the press and public paid attention. Orthodox economists counterattacked, first in France and then internationally. Rightwing globalist Robert Solow wrote a savage editorial in Le Monde defending standard economic theory. The debate became so protracted that the French minister of education launched an inquiry. Economics departments around the world are overwhelmingly populated by economists of one particular stripe. Within the field they are called "neoclassical" economists, and their approach to the discipline was developed over the course of the nineteenth century. 
According to the neoclassical school, people make choices based on a rational calculation of what will serve them best. The term for this is "utility maximization." The theory holds that every time a person buys something, sells something, quits a job, or invests, he is making a rational decision about what will be most useful to him, what will provide him "maximum utility." "Utility" can be pleasure (as in, "Which of these Disney cruises will make me happiest?") or security (as in, "Which 401(k) will let me retire before age eighty-five?") or self-satisfaction (as in, "How much will I put in the offering plate at church?"). If you bought a Ginsu knife at 3:00 A.M., a neoclassical economist will tell you that, at that time, you calculated that this purchase would optimize your resources. Neoclassical economics tends to downplay the importance of human institutions, seeing instead a system of flows and exchanges that are governed by an inherent equilibrium. Predicated on the belief that markets operate in a scientifically knowable fashion, it sees them as self-regulating mathematical miracles, as delicate ecosystems best left alone. If there is a whiff of creationism around this idea, it is no accident. By the time the term "economics" first emerged, in the 1870s, it was evangelical Christianity that had done the most to spur the field on toward its present scientific self-certainty. When evangelical Christianity first grew into a powerful movement, between 1800 and 1850, studies of wealth and trade were called "political economy." The two books at the center of this new learning were Adam Smith's Wealth of Nations (1776) and David Ricardo's Principles of Political Economy and Taxation (1817). This was the period of the industrial transformation of Britain, a time of rapid urban growth and rapidly fluctuating markets. These books offered explanations of how societies become wealthy and how they can stay that way.
They made the accelerated pace of urban life and industrial workshops seem understandable as part of a program that modern history would follow. But by the 1820s, a number of Smith's and Ricardo's ideas had become difficult for the growing merchant and investor class to accept. For Smith, the pursuit of wealth was a grotesque personal error, a misunderstanding of human happiness. In his first book, The Theory of Moral Sentiments (1759), Smith argued that the acquisition of money brings no good in itself; it seems attractive only because of the mistaken belief that fine possessions draw the admiration of others. Smith welcomed acquisitiveness only because he concluded--in a proposition carried through to Wealth of Nations--that this pursuit of "baubles and trinkets" would ultimately enrich society as a whole. As the wealthy bought gold pickle forks and paid servants to herd their pet peacocks, the servants and the goldsmiths would benefit. It was on this dubious foundation that Smith built his case for freedom of trade. By the 1820s and '30s, this foundation had become increasingly troubling to free-trade advocates, who sought, in their study of political economy, not just an explanation of rapid change but a moral justification for their own wealth and for the outlandish sufferings endured by the new industrial poor. Smith, who scoffed at personal riches, offered no comfort here. In The Wealth of Nations, the shrewd man of business was not a hero but a hapless bystander. Ricardo's work offered different but similarly troubling problems. Working from a basic analysis of the profits of land ownership, Ricardo concluded that the interests of different groups within an economy--owners, investors, renters, laborers--would always be in conflict with one another. Ricardo's credibility with the capitalists was unquestionable: he was not a philosopher like Adam Smith but a successful stockbroker who had retired young on his earnings. 
But his view of capitalism made it seem that a harmonious society was a thing of the past: class conflict was part of the modern world, and the gentle old England of squire and farmer was over. The group that bridled most against these pessimistic elements of Smith and Ricardo was the evangelicals. These were middle-class reformers who wanted to reshape Protestant doctrine. For them it was unthinkable that capitalism led to class conflict, for that would mean that God had created a world at war with itself. The evangelicals believed in a providential God, one who built a logical and orderly universe, and they saw the new industrial economy as a fulfillment of God's plan. The free market, they believed, was a perfectly designed instrument to reward good Christian behavior and to punish and humiliate the unrepentant. At the center of this early evangelical doctrine was the idea of original sin: we were all born stained by corruption and fleshly desire, and the true purpose of earthly life was to redeem this. The trials of economic life--the sweat of hard labor, the fear of poverty, the self-denial involved in saving--were earthly tests of sinfulness and virtue. While evangelicals believed salvation was ultimately possible only through conversion and faith, they saw the pain of earthly life as means of atonement for original sin. * These were the people that writers like Dickens detested. The extreme among them urged mortification of the flesh and would scold anyone who took pleasure in food, drink, or good company. Moreover, they regarded poverty as part of a divine program. Evangelicals interpreted the mental anguish of poverty and debt, and the physical agony of hunger or cold, as natural spurs to prick the conscience of sinners. They believed that the suffering of the poor would provoke remorse, reflection, and ultimately the conversion that would change their fate. 
In other words, poor people were poor for a reason, and helping them out of poverty would endanger their mortal souls. It was the evangelicals who began to see the business mogul as an heroic figure, his wealth a triumph of righteous will. The stockbroker, who to Adam Smith had been a suspicious and somewhat twisted character, was for nineteenth-century evangelicals a spiritual victor. By the 1820s evangelicals were a dominant force in British economic policy. As Peter Gray notes in his book Famine, Land, and Politics, evangelical Anglicans held significant positions in government, and they applied their understanding of earthly life as atonement for sin in direct ways. Their first major impact was in dismantling the old parish-based system of aiding the poor and aging, a policy battle that resulted in the Poor Law Amendment of 1834. Traditionally, people who could not work or support themselves, including orphans and the disabled, had been helped by local parish organizations. It had been a joint responsibility of church and state to prevent the starvation and avoidable suffering of people who had no way to earn a living. The Poor Law nationalized and monopolized poverty administration. It forbade cash payments to any poor citizen and mandated that his only recourse be the local workhouse. Workhouses became orphanages, insane asylums, nursing homes, public hospitals, and factories for the able-bodied. Protests over the conditions in these prison-like facilities, particularly the conditions for children, mounted throughout the 1830s. But it did not surprise the evangelicals to learn that life in the workhouses was miserable. These early faith-based initiatives regarded poverty as a divinely sanctioned payment plan for a sinful life. This first anti-poverty program in the first industrial economy was not designed to alleviate suffering, nor to reduce the number of poor children in future generations. Poverty was not understood as a problem to be fixed. 
It was a spiritual condition. Workhouses weren't supposed to help children prepare for life; they were supposed to save their souls. Looking back two centuries at these early debates, it is clear that a pure free-market ideology can be logically sustained only if it is based in a fiery religious conviction. The contradictions involved are otherwise simply too powerful. The premise of the unpleasant workhouse program was that it would create incentives to work. But the program also acknowledged that there were multitudes of people who were either unable to work or unable to find jobs. The founding assumption of the program was that the market would take care of itself and all of us in the process. But the program also had to embrace the very opposite assumption: that there were many people whom the market could not accommodate, and so some way must be found to warehouse them. The market is a complete solution, the market is a partial solution--both statements were affirmed at the same time. And the only way to hold together these incommensurable views is through a leap of faith. Victorian evangelicals took a similar approach to the crisis in Ireland between 1845 and 1850--the Great Hunger, what came to be known as the potato famine. In office at the time of the first reports of starvation, the Tory administration of Robert Peel responded with a program of food supports, importing yellow cornmeal from the United States and selling it cheaply to wholesalers. Corn was an unfamiliar grain in Ireland, but it provided a cheap food source. In 1846, however, a Whig government headed by Lord Russell succeeded Peel and quickly dismantled the relief program. Russell and most of his central staff were fervent evangelicals, and they regarded the cornmeal program as an artificial intervention into the free market. 
Charles Trevelyan, assistant secretary of the treasury, called the program a "monstrous centralization" and argued that it would simply perpetuate the problems of the Irish poor. Trevelyan viewed the potato-dependent economy as the result of Irish backwardness and self-indulgence. This crisis seemed to offer the opportunity for the Irish to atone. With Russell's backing, Trevelyan stopped the supply of food. He argued that the fear of starvation would ultimately be useful in modernizing Irish agriculture: it would force the poor off land that could no longer support them. The cheap labor they would provide in towns and cities would stimulate manufacturing, and the now depopulated countryside could be used for more profitable cattle farming. He wrote that his plan would "stimulate the industry of the people" and "augment the productive powers of the soil." There was no manufacturing boom. Roughly a million people died; another million emigrated. The population of Ireland dropped by nearly one quarter in the space of a decade. It remains one of the most striking illustrations of the incapacity of markets to run themselves. When government corn supplements stopped, and food prices rose, private charities and workhouses were overwhelmed, and families starved by the sides of roads. When British leadership put its faith in the natural balance of an open market to create the best outcome, the result was disaster. Evangelicals like Trevelyan didn't look smart and pious after the famine; they looked blind to human reality and desperately cruel. Their brand of political economy, grounded in evangelical doctrine, went into retreat and lost influence. The phrase "political economy" itself began to connote a cruel disregard for human suffering. And so a generation later, when the next phase of capitalist boosterism emerged, the term "political economy" was simply junked. The new field was called "economics." 
What had got the political economists into trouble a generation before was the perception, from a public dominated by Dickens readers, that "political economy" was mostly about politics--about imposing a zealous ideology of the market. Economics was devised, instead, as a science, a field of objective knowledge with iron mathematical laws. Remodeling economics along the lines of physics insulated the new discipline from any charges filed on moral or sentimental grounds. William Stanley Jevons made this case in 1871, comparing the "Theory of Economy" to "the science of Statical Mechanics" (i.e., physics) and arguing that "the Laws of Exchange" in the marketplace "resemble the Laws of Equilibrium." The comparison with physics is particularly instructive. The laws of Newtonian mechanics, like any basic laws of science, depend on the assumption of ideal conditions--e.g., the frictionless plane. In conceiving their discipline as a search for mathematical laws, economists have abstracted to their own ideal conditions, which for the most part consist of an utterly denuded vision of man himself. What they consider "friction" is the better part of what makes us human: our interactions with one another, our irrational desires. Today we often think of science and religion as standing in opposition, but the "scientific" turn made by Jevons and his fellows only served to enshrine the faith of their evangelical predecessors. The evangelicals believed that the market was a divine system, guided by spiritual laws. The "scientific" economists saw the market as a natural system, a principle of equilibrium produced in the balance of individual souls. When Tom DeLay or Michael Powell mentions "the market," he is referring to this imagined place, where equilibrium rules, consumers get what they want, and the fairest outcomes occur spontaneously. U.S. 
policy debate, both in Congress and in the press, proceeds today as if the neoclassical theory of the free market were incontrovertible, endorsed by science and ordained by God. But markets are not spontaneous features of nature; they are creations of human civilization, like, for example, skating rinks. A right-wing "complexity theorist" will tell you that the regular circulation of skaters around the rink, dodging small children, quietly adjusting speed and direction, is a spontaneous natural order, a glorious fractal image of human totality. But that orderly, pleasurable pattern on the ice comes from a series of human acts and interventions: the sign on the gate that says "stay to the right," the manager who kicks out the rowdy teenagers. Economies exist because human beings create them. The claim that markets are products of higher-order law, products of nature or of divine will, simply lends legitimacy to one particularly extreme view of politics and society. Because the neoclassical theory emphasizes calculations made by individuals, it tends not to focus on the impact of external and social factors like advertising, education, research funding, or lobbying. Consumer behavior, for an orthodox economist, is a kind of perfect free expression. But as any twenty-three-year-old marketing intern, or any six-year-old child, can tell you, buying things is not just about the rational processing of information. The Happy Meal isn't about satisfying hunger; it's about the plastic toy. Houses aren't for shelter, they are for lifestyles. Automobiles aren't transport, they're image projectors. Diamonds aren't ornaments, they are forever. We buy things partly based on who we are, but at the same time we believe that buying things makes us who we are and might make us into someone different. 
Critical voices within economics have been making this complaint at least as far back as Thorstein Veblen, the late nineteenth-century economist best known for his theory of "conspicuous consumption." In The Theory of the Leisure Class, and in a series of essays in the 1890s, Veblen showed that patterns of consumption and work broadly conform to the boundaries set by class and culture. Neoclassical economists acknowledge that the wealthy sometimes buy things just to show off, but they insist that regular people under normal circumstances buy only what they intrinsically desire. Veblen saw that it was impossible to understand individual economic choices without understanding the world in which those choices were made. A helpful, if disquieting, example comes by way of the twentieth-century anthropologist Marshall Sahlins, who, in his book Culture and Practical Reason, points out that the entire structure of U.S. agriculture "would change overnight if we ate dogs." What Sahlins means is that the powerful social prohibition against using pets as protein will always condition consumer choices. American children and teens do not decide individually that they will for all their lives spare American dogs from the abattoir. This choice is made for them by the historical world into which they are born. They are no more free to eat dog than they are to wear buckskins to basketball practice. As economist Anne Mayhew recently observed, even consumers at the bottom of the wage scale, with absolutely no discretionary income, choose the necessities of life with a common-sense awareness of how their choices will be perceived by neighbors, family, and the wider social world. "Post-autistic economics" (PAE) is the name now taken by those few economists who hope to rescue the discipline from the neoclassical model; the name is an homage to the dissident French students, whose manifesto called the standard model "autistic." 
It is a hilariously apt (albeit mildly offensive) diagnosis, and it could be just as well applied to Homo economicus himself, the economic actor envisioned by the neoclassical theory, who performs dazzling calculations of utility maximization despite being entirely unable to communicate with his fellow man. Not all PAE economists oppose the premises of the dominant neoclassical school, but they all agree that neoclassical theory cannot stand on its own. In other words, they agree that economics must begin to recognize the social--what the dissident economist Edward Fullbrook calls "intersubjectivity"--and, in the process, give up its pretense to scientific completeness. Until it does, generations of college students will continue to have their worldviews irreparably distorted by basic economics courses, whose right-wing ideology hides behind a cloak of science. The first evangelicals fought for free trade because they thought it would encourage virtuous behavior, but two centuries of capitalism have taught a different lesson, many times over. The wages of sin are often, and notoriously, a private jet and a wicked stock-option package. The wages of hard moral choice are often $5.15 an hour. Free markets don't promote public virtue; they promote private interest. In this way they are neither "free" (that is, independent of human influence) nor uniformly helpful in promoting freedom. Market trends are not truly indicative of the kind of society that Americans wish to create for their children. Consumer demand--for gated homes, exurban sprawl, or fluorescent-dyed sugar titration kits called cereal--does not reflect democratic political choice. If indeed economics is this society's most authoritative version of its own story, ours is a notoriously unreliable narrator. * The definitive source here is Boyd Hilton's masterful book The Age of Atonement: The Influence of Evangelicalism on Social and Economic Thought, 1785-1865. 
Gordon Bigelow teaches at Rhodes College in Memphis. He is the author of Fiction, Famine, and the Rise of Economics in Victorian Britain and Ireland (Cambridge University Press).

From checker at panix.com Mon May 16 20:20:18 2005
From: checker at panix.com (Premise Checker)
Date: Mon, 16 May 2005 16:20:18 -0400 (EDT)
Subject: [Paleopsych] NY Post: Mass-Media Meltdown
Message-ID:

Mass-Media Meltdown
http://www.nypost.com/postopinion/opedcolumnists/46203.htm

May 10, 2005 -- THE mass-media meltdown is happening everywhere you look -- from the multiplex to the newsstand, from late-night television to drive-time radio.

* Hollywood is in a panic, because for nine weeks straight, box office grosses have been lower than last year's. As Gabriel Snyder wrote in yesterday's edition of Variety, "In recent years the first weekend of May has seen a big expansion in the marketplace. But if [the current] estimate of $83 million holds when final figures are tallied, it would be the worst weekend of an already listless year. It is also 26 percent behind last year's summer kickoff frame, when 'Van Helsing' opened to $51.7 million -- perceived as a disappointment at the time."

* The editors and publishers of most major American newspapers are terrified, because declines in newspaper circulation are accelerating at an alarming clip. By one reckoning, the Los Angeles Times lost an astounding 13 percent of its readers in a year's time.

* Television networks are reeling from a dramatic contraction of their audience of young male viewers aged 18-34 -- the cohort most desired by advertisers. According to a controversial Nielsen study, their prime-time viewership has declined by nearly 8 percent. The number has been shrinking for more than a decade.

* Talk-radio audiences in major cities like New York and Washington have fallen since the 2004 election. 
Meanwhile, radio executives who program music stations and who have been packing every hour with increasing numbers of commercials are being forced by their impatient audiences to limit the number of ads and play more music.

* The American recording industry is in tatters, increasingly unable to introduce new stars and to sell new music.

There are compelling individual explanations for these phenomena. For instance, this year's movies have been extraordinarily uninteresting. And the collapse in newspaper circulation may simply be the result of more honest reporting on the part of publishers chastened by the public exposure last year of fraudulent numbers at papers like Newsday and the Dallas Morning News. But it can't be a coincidence that the five major pillars of the American media -- movies, television, radio, recorded music and newspapers -- are all suffering at the same time. And it isn't. Something major has changed over the past year, as the availability of alternative sources of information and entertainment has finally reached critical mass. Newly empowered consumers are letting the producers, creators and managers of the nation's creative and news content know that they are dissatisfied with the product they're being peddled. Take the moviegoing audience. For 25 years, people have been watching movies at home on video and DVD. But only in the past year or so have people been able to afford big flat screens in their homes that offer an aural and visual experience superior in many ways to a movie theater's. The $2,000 price tag for that TV doesn't seem so steep when you consider that an average married couple has to pay upwards of $70 ($22 for two tickets, another $15 for soda and popcorn for two, parking fees, babysitter) to attend a single film. And it doesn't seem like that much of a treat when the movie is being projected onto a filthy piece of billowing white canvas that is never cleaned. And so it goes. 
Satellite radio makes it possible for people willing to spend $12 a month to listen to superb sound quality without commercials. TiVo and digital video recorders have finally made it easy for people to watch the TV programs they want to watch whenever they want to watch them. And it goes without saying that the Internet has transformed the way people interested in news can get their information. It also goes without saying that the owners and distributors of old media aren't just going to go quietly into that good night. These are still unimaginably valuable platforms. But the key will be understanding that the self-satisfied conduct of media professionals -- peddling unwatchable nonsense in Hollywood and on TV, and foisting politically correct pseudo-information on increasingly sophisticated consumers of news -- isn't going to hack it any longer.

E-mail: podhoretz at nypost.com

From checker at panix.com Mon May 16 20:20:33 2005
From: checker at panix.com (Premise Checker)
Date: Mon, 16 May 2005 16:20:33 -0400 (EDT)
Subject: [Paleopsych] Argumentation in a Culture of Discord
Message-ID:

CHE: Argumentation in a Culture of Discord
The Chronicle of Higher Education, 5.5.20
http://chronicle.com/weekly/v51/i37/37b00601.htm
By FRANK L. CIOFFI

Last October the comedian-philosopher Jon Stewart did writing teachers a great service. Accosting the hosts of CNN's Crossfire, Stewart accused them of shortchanging the American public by failing to offer a forum for genuine debate, and by reducing issues to black/white, right/wrong dichotomies. CNN apparently agreed, as it canceled the show after a 23-year run. And while I certainly admit that Stewart himself argued unfairly, his point nonetheless stands: Our media do not provide a forum for actual debate. Instead they're a venue for self-promotion and squabbling, for hawking goods, for infomercials masquerading as news or serious commentary. 
In terms of discussing issues, they offer two sides, pick one: Either you are for gay marriage or against it, either for abortion or for life, either for pulling the feeding tube or for "life." This failure to provide a forum for argumentative discourse has steadily eroded students' understanding of "argument" as a concept. For decades my college writing classes have stressed the need to write papers with an argumentative edge. Yet students don't get it. Either they don't understand what I mean, or they reject the whole enterprise. A few years ago, one of them -- "G.M." -- wrote me an e-mail message that exemplifies many students' position: "In reading your ideas over the difficulty of [getting] students to accept an argumentative thesis, I wonder ... how much one could say that it [has been] caused by the pre-millennial movement of pacificism? In my lifetime I have not seen something so polarizing as war and thus I have not felt the amount of momentary certainty that past generations have. ... Violence is on another level entirely, for I do not believe in war, but confrontation's very redeemable qualities are normally overlooked. ... " G.M. seemed to think I was advocating a verbal violence that he -- his whole generation -- was loath to undertake. While I responded that written argument was by its nature nonviolent, I nonetheless understood from whence he drew his conclusions: He saw "argument" in media-defined terms. Part of the problem of teaching argumentative writing is that "argument" means "heated, contentious verbal dispute" as well as "argumentation." Some writing texts make this confusion worse: One in front of me uses a handsome cover illustration by Julia Talcott that shows two people from whose open mouths issue, respectively, a red triangle and a blue circle. I don't think this kind of visual is likely to help matters. Like the figures in "Laughing Stock," the media feature arguers who have entrenched, diametrically opposed positions. 
Students typically don't want to attempt "argument" or take a controversial position to defend, probably because they've seen or heard enough of the media's models -- Bill O'Reilly, Ann Coulter, or Al Franken, to name a few -- and are sick of them. If I were an 18-year-old college freshman assigned an argumentative essay, I'd groan in despair, either because I found the food-fight-journalism model repulsive or because, like G.M., I didn't feel strongly enough about anything to engage in the furious invective that I had all too often witnessed. Maybe the unanticipated consequence of the culture of contentious argument -- and this, I think, was Stewart's larger point -- is the decline in the general dissemination of intellectual, argumentative discourse more broadly construed. I propose that we teach students more about how intellectual discourse works, about how it offers something exciting -- yet how when it succeeds, it succeeds in only approaching understanding. The philosopher Frank Plumpton Ramsey puts it bluntly but eloquently: "Meaning is mainly potential." Philosophical -- and, more generally, argumentative -- discourse presents no irrefutable proofs, no indelible answers. In fact, the best writing of this kind tends not to answer but to raise questions, ones that perhaps the audience hadn't previously considered. Or to put it in terms my college-age nephew uses, when you're writing argument, don't go for the slam-dunk. At the same time, we should make students aware that they're not alone on the court. We need, that is, to emphasize more the need for counterarguments, which inevitably force writers to place themselves in the audience's position and to attempt to imagine what that audience values and feels -- what objections it might intelligently raise. In On Liberty, John Stuart Mill asserts that 75 percent of an argument should consist of counterarguments. 
And, further, writers should not merely parrot these, but must "know them in their most plausible and persuasive form ... must feel the whole force of the difficulty which the true view of the subject has to encounter and dispose of." Presenting and empathizing with counterarguments force an author to go somewhere new, to modify her initial position into one more nuanced, more complex, more problematic -- perhaps to one of greater potential, to use Ramsey's formulation. Now this might be very well for philosophical or literary-critical discourse, but what of scientific discourse? What of historical or legal discourse? I suggest that all these fields require an argumentative stance, if not in the papers that students write at the freshman or even undergraduate level, then in professional journals and monographs, and that stance should be the model for student writing. While these models differ some from field to field, all academic writing starts with a problem, a hypothesis, or a question. And the idea is not to solve this problem or answer that question with previously extant notions. This kind of writing should offer something original, imaginative, something the audience would not have thought of before and might even initially reject. Yet it invites that rejection, seeks out disconfirmatory material, naysaying positions. Working against the initial rejection, it logically persuades the audience how a proposed solution betters other current solutions, covers a wider range of data, or undermines previous notions. In short, this kind of writing looks at other answers and engages them, proving them in need of some rethinking, recontextualizing, or reimagining. And though its answer might not be perfect, it's closer -- it asymptotically approaches a truth. Yet can every student be an Einstein? 
Should we urge every student to come up with writing that resembles the professional writing of one's discipline, when many students have difficulty constructing paragraphs, constructing sentences, or construing meaning of central texts? Probably not at every level. I know that much writing instruction and many writing programs (such as, for example, the one I direct) are often expected to "help students learn how to punctuate." And I know that's an important tool. I sympathize with professors who must wade through mounds of hastily composed, unproofread, usage-dull essays that bring only a fixed glaze to their readers' eyes. But if we focus on defining our genre and discourse, showing students what it is that we do, we might just get students excited about discovering new ideas, about reimagining old problems, about writing something that somehow matters. Then they will often realize the need to present their ideas in a more "correct," formal English. So they'll work on their papers, putting them through multiple drafts, consulting with tutors, with us. They might even start perusing usage texts. In short, we need to work toward providing students fulfillment in the very process of writing, rather than in only the grade we give to the product. Not surprisingly, that kind of thought and writing process are difficult to teach. It's easier to give "evaluative" writing assignments for which there are more or less clear-cut answers: Summarize this. Give a précis of that. Answer this question. Give us an outline. Fill in the blank. True or false? Using writing only as an evaluative tool, these assignments invoke the consumerlike currency-exchange model. Think of how in the course of a semester so much of a discipline's dialectical ambiguity emerges, yet how often we will use "evaluative" writing assignments such as the aforementioned, with the expressed purpose of seeing if students "got" the "material," which even for us is slippery and elusive. 
And the transitive verb really matters here: I "got" a new iPod; I "got" a pair of Gap jeans; I "got" John Rawls's "veil of ignorance" concept; I "got" an A. This pedagogy resembles the consumer myth: There is an answer (a product, an idea, a methodology, a theory, a grade); it's this. Like consumerism, this pedagogy reduces enormously complex issues down to simplistic solutions: canned answers qua canned soup. Or as one of my colleagues puts it, "Human beings, pork and beans, they're all the same!" By offering such assignments, we unwittingly embrace what the media have led people to believe that intellectual debate and discourse consist of. People on shows such as Crossfire stake out a position, and they iterate and reiterate that position. They give examples of what they mean, and "defend" themselves by ignoring or deliberately misconstruing vicious attacks from the opposing side. But this is not intellectual discourse; it's discourse packaged as product. Academic, intellectual discourse -- true debate, the attempt to genuinely advance knowledge, the use of imaginative arguments in general -- cannot be easily captured in a half-hour television program. Such discourse requires time and labor. It requires sustained analysis and construction of an intended audience. It requires careful marshaling of evidence, organization of ideas, rewriting, rethinking. It may seem a little boring to listen to, and is often too dense to grasp at first hearing. How is this "exciting" or at all attractive? Why would anyone want to engage in "academic" discourse, except for some deferred reward, such as, well, a college degree? Why, in a larger sense, do we do what we do? (It isn't for the money.) I think there are larger rewards to scholarship, to argumentative writing. We have a curiosity about how things work (or fail to), and the writing we do attempts to satisfy that curiosity, to explain problems to ourselves, to others. Though Richard D. 
Altick's book The Scholar-Adventurers might be a hard sell to the general public, his fundamental idea still stands: There are risk and danger to scholarship; it takes some courage to undertake it. For example, we might figure out more about how the universe operates, but that discovery might well undermine our previously held conceptions. So while our writing might not serve to amuse, and it might not gather miscellaneous thumbprints in the waiting room of a car-repair shop, it might just advance human knowledge. Lofty, perhaps, but I think true. Most people never encounter such discourse. And most students, on entering college, have no idea of what it's like. They've come from a culture that wants answers, not nuanced problematizations, not philosophy. They've been conditioned, as have most Americans, to seek out a position where a simple choice will solve the problem. They've been conditioned to see ideas as being part of a marketplace, just like sweatshirts, snowboards, or songs, and when they are asked to produce ideas, they look to that marketplace for a model. And students do this with their research papers as much as with their arguments. How often, in fact, does a student's research paper look like an amateur journalist's report of multiple facts and views, a superficial survey of x number of sources, with no argument even implied? I don't want to disparage consumer culture too much, since I often define myself against its dazzling and dreamy backdrop, but consumer culture (and the media, which are a part of it) often works against us in higher education. It makes arguments all the time, but they're not sound, intellectual arguments. It manufactures a need, it contrives a teleology. For example, now there's an even better TV or home gym or soap to buy; now you can improve your looks, your skin, your mood, your erectile capacity. In short, the consumer myth suggests that some consumer products can end, even satisfy, our hydra-headed desire. 
So the culture offers the beauteous product with one tentacle, but if you take it, two new beckoning heads pop up. More insidiously, consumer discourse, by concretizing satisfactions for the desires it creates, implies that any desires not satisfiable by culture -- i.e., not purchasable -- can only be perverse or bizarre; any complicated solutions, absurd. Student writing resembles in microcosm the consumer-product myth insofar as it offers contrived problems for which there are equally contrived, predictable, prepackaged solutions. Indeed, this writing too often offers ideas that can be supported relatively easily, with abundant, even overwhelming, evidence. Consider, for example, the "five-paragraph essay" so often taught in high schools around the country and further abetted by the new SAT exam. Paragraph one offers an introduction, including a thesis at the end of the introduction. It's best if this thesis has three points. The subsequent three paragraphs develop and explain these thesis-supporting points. The last paragraph, the conclusion, sums up the paper and restates the thesis. Nothing wrong with that, is there? Well, there is. It resembles the script for commercials. It inhibits, even prohibits freedom of thought. It's static -- more noise than signal. There's no real inquiry going on, no grappling with complexities. It seeks only support, and readily available support at that. It can appear to be heated, resembling the screaming-heads model. But it's one-sided, and it goes nowhere, except to its inevitable end, which resembles or reproduces its beginning. When we try to teach argument in the classroom, we have to fight a model of discourse that, zombielike, still stalks many classrooms. At the same time, we're pressed to provide a better model for students -- the reasoned, calm approach, the one that engages and responds to counterarguments, that strives only to approach an understanding. 
The model for this in public discourse is as hard to find as the genre is to explain or justify. It's no surprise that we can't stick an ice pick through the five-paragraph monster's gelid heart. The best argumentative writing expands and transforms the ideas of the writer. It questions itself, actively seeking out emergent problems along the way. And it ends not with a definitive, in-your-face "So there!" (or a "You should just read the Bible!"), but probably with more complex questions, ones that push the continuum of the subject matter. Of course students don't initially like this model: It's not very tidy. It doesn't offer an easy answer or position. It seems to waver, or to embody a predetermined "flip-flop" mentality. (This is the kind of thing that weakened John Kerry's credibility with voters.) But at the same time, students know that the model is better than the five-paragraph essay. One student told me writing in the argumentative mode was "scary." It's just not something they've been taught to do -- yet its being tantamount to a transgressive act can make it much more attractive. Why so? I think this might stem from a very simple human emotion that both the culture -- and many writing assignments, too -- seems desperate to eradicate: longing. Frederick Exley, in A Fan's Notes, talks about this issue. After college, his protagonist plans to get a certain kind of apartment in New York, a certain kind of job, and a certain kind of girlfriend. He even plans to be a "Genius." He has all these longings that need to be fulfilled. But in fact, what he hadn't really learned in college was that longings are better left unfulfilled: "Literature is born out of the very longing I was so seeking to suppress," he writes. Writing argument is all about longing -- a longing for the truth. And this longing is inherently unsatisfiable. Emerson frequently argued for the value of "conation," that is, the perpetual striving for something. 
We don't want to perpetually strive -- or long -- for anything, much less the truth. We want more immediate gratification: Get there, solve it, and get out. Consider Iraq -- a war in which our desire for a "that's that" resolution has smashed up against a problem defying easy solution. It's a war that has challenged our American notion of who we are -- are we people who use torture, for example? -- at the same time that it's thrown into relief our "can do" notion of ourselves. And what do people think about this war? I don't think they want to think about it. But it's not that they're lazy or craven. Nor am I implying that the desire for immediate gratification is wicked -- it's just not something provided by intellectual discourse or argument. People simply haven't been given the right models of how to think. That's our job; that's what academic argument's about. Jon Stewart was right to have attacked Crossfire and its brand of discourse. Now it's up to us to create an intellectual alternative -- not just for our students, but for the public as well. Frank L. Cioffi, an assistant professor of writing and director of the writing program at Scripps College, is author of The Imaginative Argument: A Practical Manifesto for Writers, published this month by Princeton University Press. From checker at panix.com Mon May 16 20:20:49 2005 From: checker at panix.com (Premise Checker) Date: Mon, 16 May 2005 16:20:49 -0400 (EDT) Subject: [Paleopsych] WSJ: Grandma's Behavior While Pregnant Affects Her Grandkids' Health Message-ID: Grandma's Behavior While Pregnant Affects Her Grandkids' Health The Wall Street Journal Sharon Begley May 13, 2005; Page B1 [Thanks to Louis for this.] Although life offers no guarantees, parents-to-be can increase their chances of having a healthy baby by, among other things, undergoing prenatal testing and making sure mom has a healthy pregnancy. 
But almost 2,500 years after Euripides noticed that "the gods visit the sins of the fathers upon the children," scientists are discovering that nature can be even crueler than the ancient Greek imagined: It can visit the sins of the grandparents on the children. Such "transgenerational" effects are the latest focus of a growing field called fetal programming, or the fetal origins of adult diseases. It examines how conditions in the womb shape physiology in a way that makes people more vulnerable decades later to cardiovascular disease, diabetes, immune problems and other illnesses usually blamed on genetics or lifestyle, not on what arrived via the placenta. If a fetus is poorly nourished, for instance, it can develop a "thrifty phenotype" that makes it really good at getting the most out of every meal. After birth, that lets it thrive if food is scarce, but it's a recipe for Type 2 diabetes in a world of doughnuts and fries. Poor fetal nutrition can lead to hypertension, too: If it causes the fetus to produce too few kidney cells, the adult that the fetus will become won't be able to regulate blood pressure well. Now, in a finding that seems to put our fate even further outside our control, researchers are seeing generation-skipping effects. Last month, scientists reported that a child whose grandmother smoked while pregnant with the child's mother may have twice the risk of developing asthma as a child whose grandma didn't flood her fetus with carcinogens. Remarkably, the risk from grandma's smoking was as great as or greater than from mom's. Kids whose mothers smoked while pregnant were 1.5 times as likely to develop childhood asthma as children of nonsmoking moms. Kids whose grandmothers smoked while pregnant with mom were 2.1 times as likely to develop asthma, scientists reported in the journal Chest. 
The harmful effects of tobacco, it seems, can reach down two generations even when the intervening generation -- mom -- has no reason to suspect her child may be at risk. "Even if the mother didn't smoke, there was an effect on the grandchild," says Frank Gilliland of the University of Southern California, Los Angeles, who led the study of 908 children. "If smoking has this transgenerational effect, it's a lot worse than we realized." What causes the grandma effect? One suspect is DNA in the fetus's eggs (all the eggs a girl will ever have are made before birth). Chemicals in smoke might change the on-off pattern of genes in eggs, including genes of the immune system, affecting children who develop from those eggs. Men whose mothers smoked don't seem to pass on such abnormalities, probably because sperm are made after birth. Animal data hint at other grandma effects. Last week, scientists reported the first discovery that obesity and insulin resistance, as in Type 2 diabetes, can be visited on the grandkids of female rats that ate a protein-poor diet during pregnancy, lactation or both. Again, this occurred even when those rats' offspring, the mothers of the affected grandkids, were healthy, Elena Zambrano of the Institute of Medical Sciences and Nutrition, Mexico City, and colleagues report in the Journal of Physiology. The findings, says Peter Nathanielsz of the University of Texas Health Sciences Center, San Antonio, "stretch the unwanted consequences of poor nutrition across generations." In people, the type of "nutritional insult" to the fetus doesn't seem to matter. Too few calories, too little protein, too few other nutrients can all lead to diabetes, hypertension and other ills decades later. "That suggests that what links diet to adult diseases is something quite fundamental," says Simon Langley-Evans of the University of Nottingham, England. The key suspects: changes in DNA activity in the fetus or in the balance of hormones reaching it via the placenta. 
Alarmingly, the list of what can be passed along to the next generation is growing. If you are undernourished as a first-trimester fetus, you won't pad your hips and thighs with enough fat tissue. If, as a child or adult, you take in more calories than you expend, the extras get stored in and around abdominal organs rather than on the thighs and hips, says Aryeh Stein of Emory University, Atlanta. One result is a body shaped like an apple (which brings a higher risk of heart disease). Another is a higher risk of gestational diabetes, in which blood glucose levels rise during pregnancy and too much glucose reaches the fetus. Babies born to moms with gestational diabetes have a higher risk of Type 2 diabetes. When undernourished fetuses grow into adolescents, they don't respond as well to vaccines as babies who had a healthy gestation, scientists led by Thomas McCune of Northwestern University, Evanston, Ill., find. One reason may be that the third trimester is a critical time for development of the thymus, which produces the immune system's T cells. When immune-compromised girls become pregnant, they have less chance of having a healthy pregnancy and a healthy baby. Score another for the grandma effect. From checker at panix.com Mon May 16 20:21:14 2005 From: checker at panix.com (Premise Checker) Date: Mon, 16 May 2005 16:21:14 -0400 (EDT) Subject: [Paleopsych] Guardian: Learn to love the equation Message-ID: Learn to love the equation http://www.guardian.co.uk/print/0,3858,5189072-111400,00.html The thought process needed to master a mathematical formula is a skill that can empower anyone Marcus du Sautoy Monday May 9, 2005 Look on the side of a bus at the moment and you might be rather shocked to see an onslaught of mathematical symbols. The conglomeration of cosines and Greek letters isn't the outpourings of some disgruntled mathematician writing graffiti about his latest discovery across the nation's bus network. 
This cryptic equation is part of an advertising campaign. Mathematical equations are now so cool - ice cool - that the drinks firm Diageo believes they can help sell Smirnoff Ice. The advertising brief was to increase the credibility of Smirnoff Ice with guys. The result is Uri, the hero of the new campaign, who lives in some frozen outback with his huskies and friend Gorb. Uri's quirky take on life is captured by his collection of witty utterances: "Never judge a book by its movie" or "Despite the cost of living it's still popular". But Uri's equation is perhaps the most gnomic of all his messages. You might think maths would only help to endear the drink to nerds and trainspotters. But it is a mark of the subject's changing fortunes that mathematical equations have become sufficiently intriguing for brands to sponsor a formula. It isn't only advertising firms that have cottoned on to the power of an equation to promote a product. Science departments across the country, desperate for media coverage, have mercilessly exploited the power of an equation to make the news. However silly the research, if it can be captured by an equation, it's sure to grab the headlines. We've had a formula for parallel parking; a formula to predict the future of a marriage; even a formula to help British people understand their fear of eating with chopsticks. But what is the secret formula for a good equation? This year we are celebrating the centenary of the most famous equation of all time. Einstein's E=mc2 is probably at the top of most people's list of memorable equations. Like all great equations, Einstein's discovery has the quality of a magic trick: you start with something on one side of the equation and then by mathematical magic the formula transforms it into something that appears completely different. 
In Einstein's case, the trick was to show how matter (the m in his equation) can be transformed into pure energy (the E), a magic trick that was put to devastating use in the creation of the atom bomb. Simplicity is another important ingredient for the most successful formulas. But simple formulas don't necessarily mean the outcome of the equation is simple. Chaos theory revealed how amazing complexity can result from some of the most innocent looking equations. The power of prediction is also a key part of the best scientific equations. Those scientists who first understood the equations for the motions of the heavenly bodies wielded great power. The Spanish invaders in South America were able to use their prediction of a solar eclipse to defeat the indigenous armies who were terrified by the power of their formulas. The famous British physicist Paul Dirac came up with an equation to predict the behaviour of electrons. His equation, now inscribed on his memorial in Westminster Abbey, won him a Nobel prize. But as well as describing the behaviour of an electron, it also seemed to predict the existence of a new sort of particle called anti-matter. This strange substance would annihilate the matter that surrounds us to produce pure energy. Sounds like science fiction - indeed, the starship Enterprise is fuelled on the stuff. Yet despite early scepticism by scientists, anti-matter was in fact identified as a reality in 1932. But in Einstein's view, the ultimate test for an equation was an aesthetic one. The highest praise for a good theory was not that it was correct or that it was exact, simply that it should be beautiful. Dirac concurred with Einstein's view. When asked in a seminar in Moscow to summarise his philosophy of physics, he wrote on the blackboard in capital letters: "Physical laws should have mathematical beauty." Although Einstein's tops most people's lists of great equations, it is also likely to be the only one on the list. 
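To give a sense of the scale of that conversion, here is a rough back-of-the-envelope calculation (my own illustration, not from the article) for a single gram of matter:

```latex
E = mc^2
  = (10^{-3}\,\mathrm{kg}) \times (3 \times 10^{8}\,\mathrm{m/s})^2
  = 9 \times 10^{13}\,\mathrm{J}
```

That is on the order of 20 kilotons of TNT equivalent -- roughly the yield of an early atomic bomb -- released from a mass smaller than a paperclip, which is why the equation's "magic trick" had such devastating practical consequences.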
Shouldn't people be able to summon up more than just Einstein's iconographical equation? What about all those formulas we were subjected to at school? A whole debate in parliament was dedicated recently to the delights or otherwise of quadratic equations. One side of the house argued that they should be scrapped from the syllabus. After all, who has ever needed to solve a quadratic equation in real life? But that misses the point of why they should be a core part of the curriculum. The analytical thought process required to master an equation is a skill that will empower anyone, from a solicitor arguing a case in court, to a dinner lady planning the week's school dinners. Although it probably didn't mean to support the mathematical lobby in the debate, I think the message below Uri's equation on the number 149 bus sums it up: "Clear thinking from Smirnoff" ... and mathematics. · Marcus du Sautoy is professor of mathematics at Oxford University and author of The Music of the Primes dusautoy at maths.ox.ac.uk From checker at panix.com Mon May 16 20:21:26 2005 From: checker at panix.com (Premise Checker) Date: Mon, 16 May 2005 16:21:26 -0400 (EDT) Subject: [Paleopsych] The Nation: The Family World System Message-ID: The Family World System http://www.thenation.com/docprint.mhtml?i=20050530&s=anderson by PERRY ANDERSON [from the May 30, 2005 issue] Few topics of fundamental importance have, at first glance, generated so much numbing literature as the family. The appearance is unjust, but not incomprehensible. For the discrepancy between the vivid existential drama into which virtually every human being is plunged at birth and the generalized statistical pall of demographic surveys and household studies often looks irremediable: as if subjective experience and objective calibration have no meeting point. Anthropological studies of kinship remain the most technical area of the discipline. 
Images of crushing dullness have been alleviated, but not greatly altered, by popularizations of the past--works like The World We Have Lost (1965) by Peter Laslett, the doyen of Cambridge family reconstruction--fond albums of a time when "the whole of life went forward in the family, in a circle of loved, familiar faces," within a "one-class society." The one outstanding contemporary synthesis, William Goode's World Revolution and Family Patterns (1963), which argued that the model of the Western conjugal family was likely to become universal, since it best fulfilled the needs of industrialization, has never acquired the standing its generosity of scope and spirit deserved. Family studies are certainly no desert. They are densely populated, but much of the terrain forms a featureless plain of functions and numbers stretching away to the horizon, broken only by clumps of sentiment. Over this landscape, Göran Therborn's Between Sex and Power rises like a majestic volcano. Throwing up a billowing column of the boldest ideas and arguments, while an awesome lava of evidence flows down its slopes, this is a great work of historical intellect and imagination. It is the fruit of a rare combination of gifts. Trained as a sociologist, Therborn is a highly conceptual thinker, allying the formal rigor of his discipline at its best with a command of a vast range of empirical data. The result is a powerful theoretical structure, supported by a fascinating body of evidence. But it is also a set of macro-narratives that compose perhaps the first true example we possess of a work of global history. Most writing that lays claim to this term, whatever other merits it may display, ventures beyond certain core zones of attention only selectively and patchily. In the case of general histories of the world, of which there are now more than a few, problems of sheer scale alone have dictated strict limits to even the finest enterprises. 
Therborn, by contrast, in focusing on just one dimension of existence, develops a map of human changes over time that is faithful to the complexity and diversity of the world in an arrestingly new way, omitting no corner of the planet. Not just every inhabited continent is included in this history; differences between nations or regions within each--from China and Japan to Uruguay and Colombia, north to south India, Gabon to Burkina Faso, Turkey to Persia, Norway to Portugal--are scanned with a discriminating eye. Such ecumenical curiosity is the antithesis of Barrington Moore's conviction that, in comparative history, only big countries matter. Not surprisingly, the challenge is the attractive product of a small country. Therborn's sensibility reflects his nationality: In modern times Sweden, situated on the northern margins of Europe, with a population about the size of New Jersey's, has for the most part been an inconspicuous spectator of world politics. But in the affairs of the family, it has more than once been a pace-setter. That a comparative tour de force on them should be written by a Swede is peculiarly appropriate. Surveying the world, Therborn distinguishes five major family systems: European (including New World and Pacific settlements), East Asian, sub-Saharan African, West Asian/North African and Subcontinental, with a further two more "interstitial" ones, Southeast Asian and Creole American. Although each of the major systems is the heartland of a distinctive religious or ethical code--Christian, Confucian, Animist, Muslim, Hindu--and the interstitial ones are zones of overlapping codes, the systems themselves form many "geocultures" in which elements of a common history can override contrasts of belief within them. This cultural backdrop lends color and texture to Between Sex and Power. The book's tone recalls aspects of Eric Hobsbawm, in its crisp judgments and dry wit. 
While Therborn is necessarily far more statistical in style, something of the same literary and anecdotal liveliness is present too. Amid an abundance of gripping arithmetic, novels and plays, memoirs and marriage ads have their place in the narrative. Most striking of all, in a field so dominated by social or merely technical registers, is the political construction Therborn gives to the history of the family in the twentieth century. What are the central propositions of the book? All traditional family systems, Therborn argues, have comprised three regimes: of patriarchy, marriage and fertility (crudely summarized--who calls the shots in the family, how people hitch up, how many kids result). Between Sex and Power sets out to trace the modern history of each. For Therborn patriarchy is male family power, typically invested in fathers and husbands, not the subordination of or discrimination against women in general--gender inequality being a broader phenomenon. At the beginning of his story, around 1900, patriarchy in this classical sense was a universal pattern, albeit with uneven gradations. In Europe, the French Revolution had failed to challenge it, issuing in the ferocious family clauses of the Napoleonic Code, while subsequent industrial capitalism--in North America as in Europe--relied no less on patriarchal norms as a sheet anchor of moral stability. Confucian and Muslim codes were far more draconian, though the "minute regulations" of the former set some limits to the potential for a "blank cheque" for male power. Arrangements were looser in much of sub-Saharan Africa, Creole America and Southeast Asia. Harshest of all was the Hindu system of North India, in a league of its own for repression. As Therborn notes, this is one of the very few parts of the world where men live longer than women, even today. By 2000, however, patriarchy had become "the big loser of the twentieth century," as Therborn puts it, yielding far more ground than religion or tyranny. 
"Probably no other social institution has been forced to retreat as much." This roll-back was not just an outcome of gradual processes of modernization, in the bland scheme of structural-functional sociology. It was principally the product of three political hammer blows. The first of these, Therborn shows, came in the throes of the First World War in Sweden, where full legal parity between husband and wife was first enacted, and then, in a more radical series of measures, the October Revolution dismantled the whole juridical apparatus of patriarchy in Russia, with a much more overt emphasis on sexual equality as such. Conduct, of course, was never the same as codification. "The legal family revolution of the Bolsheviks was very much ahead of Russian societal time, and Soviet family practices did not immediately dance to political music, however loud and powerful." But the shock wave in the world generated by the Russian example was, Therborn rightly emphasizes, enormous. The Second World War delivered the next great blow on the other side of the world, again in contrasted neighboring forms. In occupied Japan, General MacArthur's staff imposed a Constitution proclaiming "the essential equality of the sexes"--a notion, of course, that has still to find a place in the American Constitution--and a civil code based on conjugal symmetry. In liberated China, the victory of Communism "meant a full-scale assault on the most ancient and elaborate patriarchy of the world," obliterating all legal traces of the Confucian order. Finally, a third wave of emancipation was unleashed by the youth rebellions of the late 1960s, which segued into modern feminism. (When the revolt of May 1968 erupted in France, the country's High Court was still upholding the French husband's right to forbid his wife to move out, even if he was publicly maintaining a mistress.) 
Here the inauguration by the United Nations of an international Decade for Women in 1975 (also the ultimate outcome of a Communist initiative, on the part of the Finnish daughter of one of Khrushchev's Politburo veterans) is taken by Therborn as the turning point in a global discrediting of patriarchy, whose last legal redoubt in the United States--in Louisiana--was struck down by the Supreme Court as late as 1981. The rule of the father has not disappeared. In the world at large, West Asia, Africa and South Asia remain the principal holdouts. Islam itself, Therborn suggests, may be less to blame for the resilience of Arab patriarchy than the corruption of the secular forces once opposed to it, abetted by America and Israel. In India, on the other hand, there is no mistaking the degree of misogyny in caste and religion, even if the mediation of patriarchal authority by market mechanisms has its postmodern ambiguities. Surveying the "blatant instrumentalism" of the matrimonial pages of a middle-class Indian press, in which "more than 99 per cent of the ads vaunted socio-economic offers and desires," he wonders: "To what extent are parents the 'agents' of young people, in the same sense as any money-seeking athlete, musician or writer has an agent?" At the opposite extreme is Euro-American postpatriarchy, in which men and women possess equal rights but still far from equal resources--women enjoying on average not much more than half (55-60 percent) the income and wealth of men. In between these poles come the homelands of the Communist revolutions, which did so much to transform the landscape of patriarchy in the last century. The collapse of the Soviet bloc has not seen any restoration in this respect, whatever other regressions it may involve ("the power of fathers and husbands does not seem to have increased," though "that of pimps certainly has"). 
Therborn speculates that in both Russia and Eastern Europe, the original revolutionary gains may prove Communism's most lasting legacy. In China, on the other hand, there is much further to go, amid more signs of recidivist urges in civil society. Still, he points out, not only is gender inequality in wages and salaries far lower in the PRC than in Taiwan--by a factor of three--but patriarchy proper, as indicated by conjugal residence and division of labor, continues to be weaker. The first part of Therborn's story is thus eminently political. As he remarks, this is logical enough, since patriarchy is about power. His second part moves to sex. In questions of marriage, Europe--or, more precisely, Western Europe and those of its marchlands affected by German colonization in the Middle Ages--diverged from the rest of the world far earlier than in matters of patriarchy. In this zone a unique marital regime had already developed in pre-industrial times, combining late monogamy, significant numbers of unmarried people and Christian norms of conjugal duty, contradictorily surrounded by a certain penumbra of informal sex. The key result was "neo-locality," or the exit of wedded couples from parental households. Everywhere else in the world, Therborn maintains, the rule was universal marriage, typically at earlier ages, as the necessary entry into adulthood. (He does not make it clear whether he thinks this applies to all pre-class societies, where such a rule might be doubted.) Paradoxically, although patterns of marriage might be thought to have varied more widely around the world than forms of patriarchy, Therborn has much less to say about them. Polyandry is never mentioned, the map of monogamy is unexplored, nor is any taxonomy of polygamy offered beyond a tacit distinction between elite and mass variants (the latter peculiar to sub-Sahara). The base line of his tale of marriage is set by a contrast between two deviant areas and all other arrangements. 
The first of these is the West European anomaly, with its subsequent overseas projections into North America and the Pacific. The second is the Creole, born in plantation and mining zones of the Caribbean and Latin America with a substantial black, mulatto or mestizo population, where a uniquely deregulated sexual regime developed. Some startling figures emerge from Therborn's comparison. If sexual mores in Europe first became widely relaxed in aristocratic circles of the eighteenth century, flouting of conventional norms reached epidemic proportions among the lower classes of many cities in the nineteenth, if only by reason of the costs of marriage. At various points in the latter part of the century, a third of all births in Paris, half in Vienna and more than two-thirds in Klagenfurt were out of wedlock. By 1900 such figures had fallen, and national averages of illegitimacy had become quite modest (Austrians still outpacing African-Americans, however). Matters were much wilder in the Creole system, readers of García Márquez will not be surprised to learn. "Iberian colonial America and the West Indies were the stage of the largest-scale assault on marriage in history." In the mid-nineteenth century between a third and half of the population of Bahia never tied the knot; in the Rio de la Plata region, extramarital births were four to five times the levels in Spain and Italy; around 1900 as many as four-fifths of sexual unions in Mexico City may have been without benefit of clergy. These were the colorful exceptions. Throughout Asia, Africa, Russia and most of Eastern Europe, marriage in one form or another was inescapable. A century later, Therborn's account suggests, much less has changed than in the order of patriarchy. Creole America has become more marital, at least in periods of relative prosperity, but remains the most casual about the institution. 
In Asia, now mostly monogamous, and sub-Saharan Africa, still largely polygamous, marriage continues to be a universal norm--with pockets of slippage only in the big cities of Japan, Southeast Asia and South Africa--but the age at which it is contracted has risen. If divorce of one kind or another has become nearly universal as a legal possibility, its practice is much more restricted--in the Hindu "cow belt," virtually zero. At the top end of the scale, in born-again America and post-Communist Russia, any wedding guest is entitled to be quizzical: Half of all marriages break up. But with successive attempts at conjugal bliss, the crude marriage rate has not fallen in the United States. Globally, it would seem, the predominant note is stability. In one zone, however, Therborn tracks a major change. After marrying as never before in the middle decades of the century, Western Europeans started to secede from altar and registry in increasing numbers. Sweden was once again the vanguard country, and it still remains well ahead of its Scandinavian neighbors, not to speak of lands farther south. The innovation it pioneered, from the late 1960s onward, was mass informal cohabitation. Thirty years later, the great majority of Swedish women giving birth to their first child--nearly 70 percent--were either cohabiting or single mothers. Marriage might or might not follow cohabitation. What became a minority option, in one country after another--Britain, France, Germany--was marriage before it. In Catholic France and Protestant England alike, extramarital births jumped from 6-8 percent to 40-42 percent in the space of four decades. Manifestly, the sexual revolution of the 1960s and '70s lay behind this spectacular transformation. Therborn notes the arrival of the pill and IUD as facilitating conditions, but he is more interested in consequences. What did it add up to? In effect, a double liberation: more partners and--especially for women--more pleasure. 
In Finland in the early 1970s, women had bedded an average of three men; in the early '90s the number had risen to six (by then the gap in erotic satisfaction between the sexes had closed). In Sweden the median number of women's lovers more than tripled during the same period, a much greater increase than for men. "More than anything else," Therborn concludes, "this is what the sexual revolution has brought: a long period for pre-marital sex, and a plurality of sexual partners over a lifetime becoming a 'normal' phenomenon, in a statistical as well as in a moral sense." How far does the United States conform to the emergent European pattern? Only in part, as its different religious and political complexion would lead one to expect. Europeans will be astonished to learn that in 2000 about a fifth of American 18- to 24-year-olds claimed to be virgins on their wedding day. Only 6 percent of American couples cohabited. More than 70 percent of mothers at first birth are married. On the other hand, the United States has nearly twice as many teenage births per cohort as the highest country in the EU and an extramarital birthrate higher than that of the Netherlands. Without going much into race or region, Therborn describes the American system as "dualist." But from the evidence he provides, it might be thought that electoral divisions are reflected in sexual contrasts, blue and red in the boudoir too. In the last part of Between Sex and Power, Therborn moves to fertility. Here the conundrum is the "demographic transition"--the standard term for the shift from a regime of low growth, combining lots of children and many early deaths, to one of high growth, combining many children but fewer deaths, and then back to another one of low growth, this time with both fewer deaths and fewer children. 
There is no mystery about the way medical advances and better diets led to falling rates of mortality in nineteenth-century Europe and eventually reached most of the world, to similar effect, in the second half of the twentieth century. The big question is why birthrates fell, first in Europe and North America between the 1880s and 1930s, and then for the majority of the human race from the mid-1970s onward, in two uncannily similar waves. In each case, "a process rapidly cutting through and across state boundaries, levels of industrialization, urbanization and levels of income, across religions, ideologies and family systems" slashed fertility rates by 30-40 percent in three decades. Today, the average family has no more than two to three children throughout most of the former Third World. What explains these gigantic changes? The first nations to experience a significant fall in fertility were France and the United States, by 1830--generations in advance of all others. What they had in common, Therborn suggests, was their popular revolutions, which had given ordinary people a sense of self-mastery. Once the benefits of smaller families became clear in these societies, neolocality allowed couples to make their own decisions to improve their lives before any modern means of contraception were available. Fifty years later, perhaps triggered initially by the onset of a world recession, mass birth control began to roll through Europe, eventually sweeping all the way from Portugal to Russia. This time, Therborn's hypothesis runs, it was a combination of radical socialist and secular movements popularizing the idea of family planning, together with the spread of literacy, that brought lower fertility as part of an increasingly self-conscious culture of modernity. This was birth control from below. In the Third World, by contrast, contraception--now an easy technology--was typically propagated or imposed from above, by political fiat of the state. 
China's one-child policy has been the most dramatic, if extreme, example. Once lower birthrates became a general goal of governments committed to modernization, family systems then determined the order in which societies entered the new regime: East Asia in the lead, North India and black Africa far in the rear guard. Here too it was a sense of mastery, of human ability to command nature--not always bureaucratic in origin, since the better-off societies of Latin America moved more spontaneously in the same direction--that powered the change. The consequences of that change, of which we can still see only the beginnings, are enormous. Without it, the earth would now have some 2 billion more inhabitants. In Europe and Japan, meanwhile, fertility has dropped no less dramatically, falling below net reproduction rates. This collapse in the birthrate, from which the United States is saved essentially by immigration, promises rapid aging of these nations in the short run and, if unchecked, virtual extinction of them in the long run. There is now a growing literature of public alarm about this prospect, what the French historian Pierre Chaunu denounces as a "White Death" threatening the Old World. Therborn eschews it. Negative rates of reproduction in these rich, socially advanced societies do not correspond in his view to any birth strike by women but rather to their desire to have two to three children and careers that are the equal of men's, which the existing social order does not yet allow them to do. In denying themselves the offspring they want, European parents are "moving against themselves," not with the grain of any deeper cultural change. Between Sex and Power ends with four principal conclusions. The different family systems of the world reveal little internal logic of change. 
They have been recast from the outside, and the history of their transformations has been neither unilinear nor evolutionary but rather determined by a series of unevenly timed international conjunctures of a decidedly political character. The result has not been one of convergence, other than in a general decline of patriarchy, due more to wars and revolutions than to any "feminist world spirit." In the South, the differential timing of changes in fertility continues to shift the distribution of global population further toward the subcontinent and Africa and away from Europe, Japan and Russia. In the North, European marriage has altered its forms but is proving supple and creative in adapting to a new range of desires: Conventional jeremiads notwithstanding, it is in good shape. Predictions? Serenely declined. "The best bet for the future is on the inexhaustible innovative capacity of humankind, which eventually surpasses all social science." In due course, an army of specialists will gather round Between Sex and Power, like so many expert sports fans, to pore over its multitudinous argument. What can a layman say, beyond the magnitude of its achievement? Tentatively, perhaps only this. In the architectonic of the book, there is something of a gap between the notion of a family system and the triad of patriarchy, marriage and fertility that follows it. In effect, the way these three interconnect to form the structure of any family system goes unstated in the separate treatment accorded each. But if we consider the trio as an abstract combination, it would seem that logically--as the order in which Therborn proceeds to them itself suggests--patriarchy must command the other two as the "dominant," since it will typically lay down the rules of marriage and set the norms of reproduction. There is, in other words, a hierarchy of determinations built into any family system. This has a bearing on Therborn's conclusions. 
His final emphasis falls, unhesitatingly, on the divergence between major family systems today. After stressing continuing worldwide dissimilarities between fertility and marital regimes, he concedes that "the patriarchal outcome is somewhat different." His own evidence suggests that this way of putting it is an understatement. For what his data show is a powerful process of convergence, far from complete in extent but unequivocal in direction. But if the variegated forms of patriarchy are what historically determined the main parameters of marriage and reproduction, wouldn't any ongoing decline of them across family systems toward a common juridical zero point imply that birthrates and marriage customs are eventually likely to converge, in significant measure, at their own pace too? That seems, at any rate, a possible deduction sidestepped by Therborn, but which his story of fertility appears to bear out. For what is clear from his account is that the astonishing fall in birthrates in most of the underdeveloped world has been the product of a historic collapse in patriarchal authority, as its powers of life and death have been transferred to the state, which now determines how many are born and how many survive. What, then, of marriage? Here, certainly, contrasts remain greatest. In speaking of "the core of romantic freedom and commitment in the modern European (and New World) family system," Therborn implies this remains specific to the West. But while the caste system or Sharia law plainly preclude extempore love, does it show no signs of spreading, as ideal or realization, in the big cities of East Asia or Latin America? The imagination of urban Japan, he shows, is already half-seized with it. Not, of course, that the decline of marriage in Western Europe, with the advent of mass cohabitation, has so far been replicated anywhere else. But here a different sort of question might be asked. 
Is it really the case that the negative rates of reproduction that have accompanied this pattern are as unwished-for as Therborn suggests? He relies on the discrepancy between surveys in which women explain how many children they expect and those they actually have. But this could just mean that in practice their desire for children proved weaker than for a well-paid job, a satisfying career or more than one lover at a time. Voters in the West regularly say they want better schools and healthcare, and in principle expect to pay for them, and commentators on the left often pin high hopes on such declarations. But once such citizens get to the polling booth they tend to stick to lower taxes. The same kind of self-deception could apply to children. If so, it would be difficult to say European marriage was in such good shape, since there would be no stopping place in sight for its plunge of society into an actuarial abyss. Therborn resists such thoughts. Although Between Sex and Power pays handsome homage to the role of Communism in the dismantling of patriarchy in the twentieth century, it displays no specially Marxist view of the family. Engels would not have shared the author's satisfaction that marriage is flourishing, however ductile the forms it has adopted. In expressing his attachment to them, Therborn speaks with the humane voice of a level-headed Swedish reformism that he understandably admires, without having ever altogether subscribed to it. In looking on the bright side of the EU marital regime, he is also consistent with the case he has made in the past for its welfare states, which have survived in much better condition than its critics or mourners believe. It is in the same spirit, one might say, that he insists on the persistent divergence of family systems across the world. Uniformity is the one condition every part of the political spectrum deplores. 
The most unflinching neoliberals invariably explain that universal free markets are the best of all guardians of diversity. Social democrats reassure their followers that the capitalism to which they must adjust is becoming steadily more various. Traditional conservatives expatiate on the irreducible multiplicity of faiths and civilizations. Homogeneity has no friends, at least since the French Hegelian philosopher Alexandre Kojève prepared the end of history for Francis Fukuyama. But when any claim becomes too choral, a flicker of doubt is indicated. It scarcely affects the magnificence of this book. In it, you can find the largest changes in human relations of modern times. From checker at panix.com Mon May 16 20:21:38 2005 From: checker at panix.com (Premise Checker) Date: Mon, 16 May 2005 16:21:38 -0400 (EDT) Subject: [Paleopsych] Telegraph: The best-laid plans Message-ID: The best-laid plans http://www.arts.telegraph.co.uk/arts/main.jhtml?xml=/arts/2005/05/01/boorm01.xml&sSheet=/arts/2005/05/01/bomain.html 5.5.1 Alasdair Palmer reviews Why Most Things Fail: Evolution, Extinction and Economics by Paul Ormerod. There is nothing quite like an election campaign for reminding you that governments do not achieve the goals they set out to accomplish. The relentless scrutiny to which every political party's promises are subject during a campaign demonstrates very clearly that politicians in power usually end up breaking whatever promises they make - and, if they don't, it is because what they promised to do in the first place was sufficiently nebulous to mean that no one could tell that they had failed, or was sufficiently insignificant to mean that keeping to it required almost no planned activity at all. Paul Ormerod has a novel explanation for this regrettable but constant feature of political life. Its cause, he says, is not the incurable mendacity of politicians.
It is that everyone who sets out on a large-scale enterprise usually ends up failing to do what they promised. Politicians are no worse than the executives of private companies, lawyers, architects, designers, builders, or anyone else. They are bound by the same law which governs every other human activity - and that law is that most of it fails. Why Most Things Fail might have been just another melancholy dirge on the omnipresence of death of the "all flesh is grass" variety, but this engrossing and entertaining book is much more than a mordant moan on mortality. It is a careful, comprehensible analysis of the limits of human rationality's ability to control the world, and of the implications for public policy - and, to some extent, for personal conduct - of the failure of most rational calculation to produce its intended results. Ormerod occasionally overstates the hopelessness of the human predicament. Impressed by the parallels between the law which relates the frequency of extinctions in the animal kingdom to their size (the larger the percentage of living creatures an extinction wipes out, the less often such an event happens) and the law which relates the frequency at which firms fail to their size (the rate at which firms fail exactly parallels the way species go extinct), he concludes that firms will fail at a predetermined rate - irrespective of the decisions of the people who run them. The collapse of IBM, for example, wasn't really the fault of its executives. Nor was it due to Bill Gates's planning genius at Microsoft. No one, including Mr Gates, expected Microsoft's Windows 3 to be the huge success it became. It wasn't any better than the competing products, and in some ways it was worse. Luck, more than any rational (or irrational) choice by managers, explains why IBM shrank and Microsoft prospered. No amount of planning or rational calculation, insists Ormerod, can escape the iron law of failure which governs human endeavour.
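The size-frequency law the review describes is a power law: events (extinctions, firm failures) of size s occur with frequency proportional to s raised to a negative exponent, so large wipe-outs are systematically rarer than small ones. A minimal sketch of what such a distribution looks like, using inverse-transform sampling; the exponent, minimum size, and sample count here are illustrative assumptions, not figures taken from Ormerod's data:

```python
import random

def power_law_sample(alpha=2.0, s_min=1.0, n=100_000, seed=42):
    """Draw event 'sizes' from a continuous power law p(s) ~ s^-alpha for s >= s_min,
    via inverse-transform sampling. alpha=2.0 is an assumed, illustrative exponent."""
    rng = random.Random(seed)
    # Inverse CDF of the Pareto (power-law) distribution: s = s_min * (1-u)^(-1/(alpha-1))
    return [s_min * (1.0 - rng.random()) ** (-1.0 / (alpha - 1.0)) for _ in range(n)]

def tail_frequency(samples, threshold):
    """Fraction of events at least as large as `threshold` (the survival function)."""
    return sum(1 for s in samples if s >= threshold) / len(samples)

sizes = power_law_sample()
f2 = tail_frequency(sizes, 2.0)    # for alpha=2, theory predicts about 0.5
f10 = tail_frequency(sizes, 10.0)  # theory predicts about 0.1: tenfold-larger events, far rarer
```

The survival function here falls off as threshold^-(alpha-1), which is the sense in which "the larger the percentage wiped out, the less often such an event happens."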
That conclusion, however, seems to me to be too despairing. Human reason can't be as impotent as Ormerod implies. After all, its application to the problem of how to understand the laws which govern the natural world has produced spectacular results over the past 200 years. Thanks to the development of technology and some sensible decisions on how to apply it, Western societies enjoy an unprecedented degree of prosperity and productivity. Progress in the standard of living is not simply an accident: it has been a goal of human rationality, and has been achieved as a result of rational calculation. Ormerod's despairing vision of the inevitable failure of practical reason seems unable to account for that fact. Still, his arguments on the limits to which we can expect more specific political, economic or managerial policies to achieve their stated goals are persuasive. Our inevitable ignorance about the future, and about the way people will behave, sabotages the fulfilment of most plans. It doesn't matter whether those plans are made by the executives of a company such as Coca-Cola (their attempt to improve the taste of Coke was so great a failure that the new product nearly ruined the company and had to be withdrawn after a matter of months), or by Government ministers (despite 50 years of effort to eliminate it, there is now more inequality in Britain than there was in 1955). The moral is simple: keep central planning to a minimum, and expect to be disappointed - whoever wins the next election. 
Alasdair Palmer is Public Policy Editor of The Sunday Telegraph BOOK INFORMATION Why Most Things Fail Author Paul Ormerod Publisher Faber, £12.99, 267 pp From shovland at mindspring.com Tue May 17 01:12:36 2005 From: shovland at mindspring.com (Steve Hovland) Date: Mon, 16 May 2005 18:12:36 -0700 Subject: [Paleopsych] Energy Psychology Message-ID: <01C55A42.D74F5890.shovland@mindspring.com> http://www.energypsych.org/ Energy Psychology is a family of mind/body techniques that are clinically observed to consistently help with a wide range of psychological conditions. These interventions address the human vibrational matrix, which consists of three major interacting systems: . Energy pathways (meridians and related acupoints) . Energy centers (chakras) . Human biofield (systems of energy that envelop the body) These techniques are also helpful in promoting high-level mind-body health and peak performance in the physical, mental and creative arenas of life. Other links: http://www.energypsych.com/ http://www.innersource.net/energy_psych/energy_psychology.htm http://www.amazon.com/exec/obidos/tg/detail/-/1574441841/102-5200919-1892111?v=glance http://www.torontoepc.com/ http://www.google.com/search?hl=en&q=energy+psychology Steve Hovland www.stevehovland.net From waluk at earthlink.net Tue May 17 02:32:12 2005 From: waluk at earthlink.net (G. Reinhart-Waller) Date: Mon, 16 May 2005 19:32:12 -0700 Subject: [Paleopsych] Energy Psychology In-Reply-To: <01C55A42.D74F5890.shovland@mindspring.com> References: <01C55A42.D74F5890.shovland@mindspring.com> Message-ID: <428957AC.1010807@earthlink.net> Good links. Especially the following: http://mentalhealth.about.com/cs/specialtechniques/a/022403.htm Regards, Gerry Reinhart-Waller From HowlBloom at aol.com Tue May 17 03:19:31 2005 From: HowlBloom at aol.com (HowlBloom at aol.com) Date: Mon, 16 May 2005 23:19:31 EDT Subject: [Paleopsych] free wills and quantum won'ts Message-ID: <7b.4568eb14.2fbabcc3@aol.com> This is from a dialog Pavel Kurakin and I are having behind the scenes. I wanted to see what you all thought of it. Howard You know that I'm a quantum skeptic. I believe that our math is primitive. The best math we've been able to conceive to get a handle on quantum particles is probabilistic. Which means it's cloudy. It's filled with multiple choices. But that's the problem of our math, not of the cosmos. With more precise math I think we could make more precise predictions. And with far more flexible math, we could model large-scale things like bio-molecules, big ones, genomes, proteins and their interactions. With a really robust and mature math we could model thought and brains. But that math is many centuries and many perceptual breakthroughs away. As mathematicians, we are still in the early stone age.
But what I've said above has a kink I've hidden from view. It implies that there's a math that would model the cosmos in a totally deterministic way. And life is not deterministic. We DO have free will. Free will means multiple choices, doesn't it? And multiple choices are what the Copenhagen School's probabilistic equations are all about? How could the concept of free will be right and the assumptions behind the equations of Quantum Mechanics be wrong? Good question. Yet I'm certain that we do have free will. And I'm certain that our current quantum concepts are based on the primitive metaphors underlying our existing forms of math. Which means there are other metaphors ahead of us that will make for a more robust math and that will square free will with determinism in some radically new way. Now the question is, what could those new metaphors be? Howard ---------- Howard Bloom Author of The Lucifer Principle: A Scientific Expedition Into the Forces of History and Global Brain: The Evolution of Mass Mind From The Big Bang to the 21st Century Visiting Scholar-Graduate Psychology Department, New York University; Core Faculty Member, The Graduate Institute www.howardbloom.net www.bigbangtango.net Founder: International Paleopsychology Project; founding board member: Epic of Evolution Society; founding board member, The Darwin Project; founder: The Big Bang Tango Media Lab; member: New York Academy of Sciences, American Association for the Advancement of Science, American Psychological Society, Academy of Political Science, Human Behavior and Evolution Society, International Society for Human Ethology; advisory board member: Youthactivism.org; executive editor -- New Paradigm book series. 
For information on The International Paleopsychology Project, see: www.paleopsych.org for two chapters from The Lucifer Principle: A Scientific Expedition Into the Forces of History, see www.howardbloom.net/lucifer For information on Global Brain: The Evolution of Mass Mind from the Big Bang to the 21st Century, see www.howardbloom.net -------------- next part -------------- An HTML attachment was scrubbed... URL: From dsmith06 at maine.rr.com Tue May 17 03:29:39 2005 From: dsmith06 at maine.rr.com (David Smith) Date: Mon, 16 May 2005 23:29:39 -0400 Subject: [Paleopsych] free wills and quantum won'ts References: <7b.4568eb14.2fbabcc3@aol.com> Message-ID: <00e301c55a90$a8a0fb40$0200a8c0@dad> Traditionally, the problem of free will is not a question of whether or not we have choices, it is the question of whether or not these choices are caused by prior events. David ----- Original Message ----- From: HowlBloom at aol.com To: paleopsych at paleopsych.org Sent: Monday, May 16, 2005 11:19 PM Subject: [Paleopsych] free wills and quantum won'ts ------------------------------------------------------------------------------ _______________________________________________ paleopsych mailing list paleopsych at paleopsych.org http://lists.paleopsych.org/mailman/listinfo/paleopsych -------------- next part -------------- An HTML attachment was scrubbed... URL: From jvkohl at bellsouth.net Tue May 17 04:24:39 2005 From: jvkohl at bellsouth.net (JV Kohl) Date: Tue, 17 May 2005 00:24:39 -0400 Subject: [Paleopsych] What's the survival value of posttraumaticstressdisorder? In-Reply-To: <016001c54b3a$6b9b66a0$6501a8c0@callastudios> References: <01C54AF6.7D585B80.shovland@mindspring.com> <016001c54b3a$6b9b66a0$6501a8c0@callastudios> Message-ID: <42897207.5050104@bellsouth.net> Alice, I've long thought that the link between PTSD and rape is olfactory. War vets response triggered by smoke; women's response triggered by the natural scent of a man--or event associated odors: alcohol, etc. The natural scent of a man can evoke chemical changes in reproductive hormone levels, which would also affect personality. The association with natural masculine scent is most likely to alter intimacy with a rape victim's loving spouse/lover. She will respond to him, unfortunately, as her traumatized body responded to the rape. I wonder how much you've heard, read about the olfactory connection--and how much validity you think there is to it. Jim Kohl www.pheromones.com Alice Andrews wrote: > Steve wrote: > > Her chemistry will change, and depending on where she is > developmentally (her life-history), her personality may actually > change! (Pre, say, 25 years of age).
> > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From andrewsa at newpaltz.edu Tue May 17 11:38:06 2005 From: andrewsa at newpaltz.edu (Alice Andrews) Date: Tue, 17 May 2005 07:38:06 -0400 Subject: [Paleopsych] What's the survival value ofposttraumaticstressdisorder? References: <01C54AF6.7D585B80.shovland@mindspring.com> <016001c54b3a$6b9b66a0$6501a8c0@callastudios> <42897207.5050104@bellsouth.net> Message-ID: <009701c55ad4$e554be70$6501a8c0@callastudios> Hi Jim, That makes absolutely perfect sense to me. I was just yesterday talking to a friend who is quite 'addicted' to her fairly emotionally abusive boyfriend; much of what she will say about why she can't just be with a 'nice guy' has a lot to do with the way this man smells....Unfortunately, I too have this 'problem'. I suggested to her that the smell we might happen to like is a combination of high testosterone and some other personality traits -- that we are able to smell BPD, narcissism, etc. (And not just PDs but mental disorders such as bipolar, too.) It would be quite an interesting study to look at the Big-5 and see if there are pheromonal correlations...But anyway, back to associations and conditioning (which is relevant to your point re rape): The first man I fell madly in love with was probably borderline (BPD) and a narcissist...And the few men after him who smelled like him, well, I had similar responses. My big query about that has been: Is the huge attraction to the scent something essential, i.e. about 'matching' immune systems and personalities, about desiring something rare/special, about desiring something disordered, about desiring something that shows fitness, etc etc...? Or is it just that I happened to have fallen in love with a man who happened to have had these particular characteristics and smell, and now I'm locked into it by association? Or a little of both? I realize this is a lot to share with Paleo..But I figure everyone can handle it!
Also: Three years ago we corresponded about love and pheromones and I got your permission to post/share your responses on EP-yahoo. I'm pasting here because it's pretty interesting. And exactly a year ago I wrote you an email re the above question re personality and pheromones. I no longer have that email, but I do have your response. Here's some of it... I figure it's okay to share: All the best! Alice Alice Andrews wrote: Is there any evidence to suggest that particular odors are signals of particular personalities? Certainly high testosterone and these pheromones and personality must be linked, no? Yes. Also, since stress increases cortisol, which decreases testosterone, a confident man's pheromone production would be indicative of reproductive fitness. You know the type; acts like he owns the joint, presents as an alpha male, attracts most of the women. The three men who share this particular scent (musky, musty, almost like mildew) all have similar personalities...Somewhat 'disordered' (a little borderline, narcissistic, schizoid, etc.) I'd be curious to know if there is anything out there on any correlation. (I have not found yet.) Watch out for the schizoid. DHEA production varies and so does the natural body odor of schizophrenics. In homosexual males it's the ratio of androsterone to etiocholanolone, which are the primary metabolites of DHEA. Homosexuals prefer the odor of other homosexuals (this will be published later this year by others). ------------------------------------------------------- AA: I was wondering if there's any literature on (or talk of) female pheromones at ovulation having the capability to alter or inhibit or increase a particular type of sperm--one that is more likely to impregnate? JVK: The egg has been described as an active sperm-catcher; pretty sure we cited this in my book, but no info I've seen indicates pheromonal effects on type of sperm. This is an interesting thought, nonetheless.
I hope you follow up with your inquiry to other experts. Pheromone receptors also are present on sperm cells (presumably to guide them to the egg). AA: If such a sperm is more 'costly' in some way to manufacture, it would make sense that a man would 'conserve' most 'fertile,' 'viable,' 'healthy' sperm for when the female was at her most fertile. Or perhaps it is just as simple as: when a man detects pheromones most (or likes them most), he is most turned on and produces MORE semen, thus more chance for fertilization to occur. And perhaps more normal sperm cells are present? Any thoughts? JVK: The literature I've seen indicates a continuum of sperm production based on ratios of luteinizing hormone (LH) and follicle stimulating hormone (FSH), with FSH being largely responsible for development. However, it is an LH surge that accompanies both ovulation in women, and a testosterone increase in men exposed to ovulatory women's pheromones (copulins). There is also some literature (Sperm Wars) that mentions increased anticipatory volume of semen, but no indications of sperm quality as I recall. Sorry I can't be of more help, (read that your book got Jim Brody's approval, congrats!) Jim --------------------------------------------------- AA: I sometimes wonder if the feelings of Love during conception could possibly alter the quality of sperm, too... neurotransmitters/hormones/peptides etc in woman feeling love during sex-------->affect (copulins) pheromones (type or amount)----> affect sperm quality??? And/or 'love chemicals' in men simply affecting sperm quality etc....??? Hmmm.... JVK: A possibility, since many if not all neuronal systems feedback on the gonadotropin releasing hormone neuronal system, which drives everything about reproduction (and, of course, is directly affected by pheromones.) An example: increasing estrogen levels are linked to increased oxytocin release with orgasm in women. If oxytocin also increased with testosterone, bonding would be facilitated.
Perhaps the bonding mechanism influences fertility. Or maybe it is something as simple as the immune system functions of paired mates adjusting to the ongoing presence of a mate, facilitating conception via immune system interaction with sperm production. Much to think about; more to study. Jim ----- Original Message ----- From: JV Kohl To: Alice Andrews; The new improved paleopsych list Sent: Tuesday, May 17, 2005 12:24 AM Subject: Re: [Paleopsych] What's the survival value of posttraumatic stress disorder? Alice, I've long thought that the link between PTSD and rape is olfactory. War vets' response triggered by smoke; women's response triggered by the natural scent of a man--or event-associated odors: alcohol, etc. The natural scent of a man can evoke chemical changes in reproductive hormone levels, which would also affect personality. The association with natural masculine scent is most likely to alter intimacy with a rape victim's loving spouse/lover. She will respond to him, unfortunately, as her traumatized body responded to the rape. I wonder how much you've heard or read about the olfactory connection--and how much validity you think there is to it. Jim Kohl www.pheromones.com Alice Andrews wrote: Steve wrote: Her chemistry will change, and depending on where she is developmentally (her life-history), her personality may actually change! (Pre, say, 25 years of age). From checker at panix.com Tue May 17 14:48:21 2005 From: checker at panix.com (Premise Checker) Date: Tue, 17 May 2005 10:48:21 -0400 (EDT) Subject: [Paleopsych] Roeper Review: To produce or not to produce? Understanding boredom and the honor in underachievement. (On Gifted Students in School) Message-ID: To produce or not to produce? Understanding boredom and the honor in underachievement.
(On Gifted Students in School) Roeper Review, Fall 2003 v26 i1 p20(9) by Lannie Kanevsky and Tacey Keighley [This is the first of eight articles on boredom.] The gifted students at the heart of this study are those who have chosen not to produce schoolwork of the quality and quantity expected of them. Why not? Because their schoolwork is "boring." But what does "boredom" mean to them? Teachers and researchers (e.g., Csikszentmihalyi, 1975; Farrell, 1990) have ascribed meanings to students' boredom, yet scant research exists that has asked students directly, particularly gifted students, to describe their boredom. Delisle (1992) has distinguished gifted "nonproducers" from "underachievers." Nonproducers are at-risk academically but not psychologically; that is, they are self-assured, independent and have chosen not to attend classes or complete school assignments because they are boring or irrelevant. Underachievers are at-risk academically and psychologically; that is, they do not complete assignments because they have low self-esteem and are dependent learners. This distinction may exist in the literature but most practitioners consider both "types" to be underachievers in the sense that they are not living up to their potential. Although we did not initially intend to focus on the experiences of nonproducers, the students involved were clearly confident in their abilities and autonomous learners. As will be shown, the literature has characterized boredom in relatively simple terms. We felt the cumulative effects of boredom on classroom learning are symptoms of a complex interaction of factors. The purpose of this qualitative study was to give nonproducing gifted students an opportunity to describe the nature of their boredom. As this study evolved, it became apparent that their boredom, unlike that described in the literature, was more complex and had a moral dimension.
Their disengagement was an honorable resolution to the dilemma of whether or not to engage in inappropriate curricula as they confronted it each day in school. In one student's words: I knew that I deserved education and deserved to say what I wanted to say. I always knew those things 'cause I always had these opinions that I would totally stick up for. And I knew we weren't doing the things that we should be doing and so did everybody else.... I remember always thinking I want to learn something and we're not learning anything and we did the same things over and over again ... [Anita] With nothing to learn, Anita chose not to produce the work expected of her. About Boredom According to the literature, boredom is an emotion (Farmer & Sundberg, 1986). It is a global phenomenon and it happens in and out of school (Larson & Richards, 1991). At various times, everybody is bored; however, some people are more prone to boredom than others (Farmer & Sundberg, 1986). What little research exists suggests some trends in the general population: more men than women are boredom prone (Farmer & Sundberg); adolescents and seniors are more boredom prone than children and adults (Sundberg & Bisno, 1983); and extroverts are more easily bored than introverts (e.g., Eysenck & Zuckerman, 1978). These findings suggest boredom is dispositional (related to the nature of the individual); however, others believe it is situational, attributing boredom to the nature of the setting (e.g., the school system, classroom, or the teaching; Farmer & Sundberg, 1986). It is likely that there are interdependent characteristics of the individual and context that result in what we each call boredom. Concerns about boredom emanate from the unpleasant feelings we associate with it: frustration, anger, disengagement, and the like. It appears, however, that there are additional causes for concern. Samuels and Samuels (1974) found boredom and curiosity were the most common causes of drug use.
It is also associated with the eating behaviors of obese and nonobese individuals (Abramson & Stinson, 1977). Not surprisingly, boredom is one of the most frequently identified causes for students leaving school temporarily (e.g., skipping classes, feigning illness) or permanently (Farmer & Sundberg, 1986; Karp & Goldfarb Consultants, 1988; Larson & Richards, 1991). In classrooms it is associated with diminished attention and interferes with student performance (Larson & Richards). Rothman (1990) reported that nearly half of the 25,000 eighth graders in the 1988 National Educational Longitudinal Study said they were bored in school at least half of the time. In and out of school, boredom demoralizes teachers and parents (Larson & Richards, 1991). Farmer and Sundberg (1986) summarized the inconsistent findings of older investigations addressing the relationship between boredom and intelligence, reporting, "Some studies are suggestive of a positive relationship (Drory, 1982), negative relationship (Robinson, 1975), curvilinear relationship (Fogelman, 1976), or no relationship at all (Hill, 1975; Smith, 1955)" (p. 14). These diverse findings were likely the result of differences in each study's operational definitions of both boredom and intelligence, as well as methodological differences (e.g., instrumentation, age and gender of the participants). Differences in those characteristics, as well as interacting personality variables (e.g., thrill-seeking, introversion/extroversion, loneliness, depression, hopelessness, perceived competence; Farmer & Sundberg) and features of the context (e.g., peers, setting, materials), are all potential factors. Any and all may have frustrated researchers' efforts to gain a clear sense of any relationship between boredom and intelligence.
Research on Boredom Involving Gifted Students Research findings suggest that a student does not have to be gifted to be bored in school but it helps (Feldhusen & Kroll, 1991; Gallagher, Harradine & Coleman, 1997; Larson & Richards, 1991). A lack of challenge is the most commonly identified cause for classroom boredom. Many believe this leads to underachievement (e.g., Gentry, Gable & Springer, 2000). Plucker and McIntire (1996) examined the behavior of 12 gifted, underchallenged 5th to 9th graders and their teachers' responses to them. Their findings will resonate with the experiences of many parents and educators. The bored students responded easily and correctly to teachers' questions although they appeared inattentive. These students also raised the level of discussion by posing abstract questions; they disrupted the class with humor; they read their choice of material (e.g., magazines) or slept during class time. Some teachers noticed their students' efforts to create challenge or survive the lack of it; others did not. The classroom dynamics these researchers observed led them to conclude, "Boredom, treated as both a cause and effect, may be more complex than previously believed" (p. 13). We agree; boredom is chaotic and dynamic. The simplistic definitions of boredom that have driven prior research have provided few insights into this pervasive feature of gifted students' school experience. Gallagher, Harradine, and Coleman (1997) equated the lack of challenge with boredom when they surveyed 871 gifted students' opinions of their schooling. Across grade levels and subject areas, students associated boredom with copying, memorizing, regurgitating, repetition, waiting and so on. Some of the student participants and the authors felt these experiences were related to schools' and teachers' inability to appropriately challenge students due to the diverse levels of prior knowledge, aptitude and motivation common in today's heterogeneous classrooms.
Surprisingly, Feldhusen and Kroll (1991) found no statistically significant differences in the levels of boredom reported by 227 gifted and 227 nongifted 4th through 6th graders. They also found that the gifted students found school easier and more repetitive but still liked school more. How can this be? Perhaps a sampling bias is at work. The gifted students in this study were participating in a Saturday Enrichment Program. It is likely that children willing to attend an educational program over the weekend were achieving rather than underachieving students in regular school programs. As a result, it is unlikely their feelings would be shared by students who chose not to participate in school lessons much less a Saturday program. Larson and Richards (1991) tracked the boredom of 392 fifth- to ninth-graders in and out of school. They asked participants to wear an electronic "beeper" and to record their activity and feelings in a journal each time it sounded. This would happen once "at a random point within every two hour block of time between 7:30 AM and 9:30 PM" (Larson, 1989). Results based on the whole sample indicated differences in the nature of boredom in and out of school, but indicated boredom had more to do with the individual than the setting (i.e., some students become bored more easily). A closer examination of the data provided by high ability and high achieving students revealed a different pattern. We did find boredom to be more frequent during schoolwork for high ability and high achieving students. These students, however, were not more bored outside school, suggesting their boredom is not dispositional but rather related to a lack of stimulation and challenge in their classes. (p. 438) So, no matter how easily they get bored outside of school, high potential students were more frequently bored in school. The richest methodology and findings can be found in another more comprehensive "beeper study." 
Csikszentmihalyi, Rathunde and Whalen (1993) tracked motivational and emotional factors contributing to the talent development of 208 talented teens through four years of high school. Their methods also included extensive interviews, questionnaires, and tests involving students, parents, and teachers. The complexity of the data collected was essential "for making sense of the multitude of factors affecting the development of talent" (p. 242). As a result, their findings defy summarization. They will be woven into our results and discussion to highlight consistencies and inconsistencies in our findings and theirs. No distinctions were made between the achievement levels and psychological risk status of participants in any of these studies, making it impossible to determine potential differences in the nature of the boredom experienced by achieving, nonproducing, and underachieving students. Many gifted students attributed their disengagement to boredom by the time they reached high school. To date, the explanation of that process provided by the literature is superficial, to say the least. We sought a rich understanding of its meaning and its power as a prelude to designing interventions to address it. Method Participants Students. Teachers and school counselors in a suburban Canadian school district were asked to recommend 15- to 18-year-old students who satisfied three criteria: (1) they had been identified as gifted in elementary school, (2) they were now academically underachieving, and (3) they had dropped out of or been suspended from school on at least one occasion. Seven girls and three boys were contacted at home by phone and agreed to participate. The results reported here are based on all ten; however, we have included comments from only three. We limited ourselves in hopes of providing readers a rich, clear sense of three unique individuals rather than superficial glimpses of all ten. The names used here are pseudonyms chosen by each student. Jill.
Jill (age 17) was a strong leader who had earned straight A's until grade 10. She had a particular affinity for math and problem solving. In 4th grade she scored at the 96th percentile on the Test of Cognitive Skills. Although her assignments were seldom neat, she was described as a "positive and productive young lady." She was in a "Challenge" program through grade 7 and then her school performance began to unravel. Garfunkel. Garfunkel's (age 15) gifts were most evident in music, drama, dancing and math. He was confident--the outspoken class clown. He called the principal's office his second home in grade 7 due to his disruptive humor. In that same year he had scored at the 98th and 99th percentiles on the Test of Cognitive Skills in sequencing, verbal reasoning and analogies, but only at the 74th percentile in memory. Karen. Karen (age 17) is a strong, ideal-driven young woman and a gifted writer. She scored between the 95th and 99th percentile on subscales of the Canadian Cognitive Abilities Test in elementary school. She wrote professionally throughout high school. She had her first child before marrying at 16 and by 19 (2 years after this study was conducted) was a mother of three. She dropped out of high school in grade 11 and enrolled in an alternative program for teen parents in January of the following year. Interviewer The second author (Ms. Keighley) interviewed all of the students. At the time of the study she had 15 years of teaching experience in drama, physical education, and special education, including six years in gifted education. She had a post-baccalaureate diploma in counseling and was completing a Master's degree on giftedness. Her coursework included courses in qualitative research methods. She had no knowledge of any of the participants prior to initiating this study. Procedure The students participated in two or three semistructured interviews lasting approximately an hour each.
Their purpose was to probe and explore students' perceptions of their boredom in any context, not only school. While establishing rapport, students were asked to recount their school history and participation in gifted programs. The initial set of probes asked: * Where are you bored? * When are you bored? * How do you feel when you are bored? * What do you do to overcome being bored? * What is the opposite of being bored? The first four questions were then repeated, but this time addressed experiences that students felt were the opposite of boring as described in their responses to the fifth question above. The interviews were audiotaped and transcribed. Immediately following each interview, the interviewer made fieldnotes to record additional impressions of the student, connections with other students' comments and potential follow-up questions for clarification. She also listened to the audiotapes within 24 hours of the interview to make further notes. Throughout the interview phase she kept a reflective journal to track her evolving notions of themes in the students' responses. After transcribing the audiotapes, all texts (notes, journal entries, and transcripts) were submitted to a content analysis. Spradley's (as cited in Lincoln & Guba, 1985) notion of "semantic domains" was used to create categories based on common meanings, phrases, and themes. This method was chosen because we sought the meaning of boredom, not just descriptors or exemplars. Lincoln and Guba's (1985) criteria for trustworthiness were used to ensure the credibility of our interpretations. First, multiple sources were consulted to triangulate the results. For example, school records were consulted to verify students' reports of their grades and behaviors throughout their schooling. Second, member checks were undertaken in the second interview. Students validated, clarified and expanded upon the authors' interpretations of the responses they had provided in the first interview. Finally, Ms.
Keighley's teaching colleagues acted as peer debriefers. After the member checks, the peer debriefers read and re-read drafts of themes emerging within and across students' responses, identifying inconsistencies and confusing passages as well as insights and relationships among the themes. Results The three students who best represent the 10 in this study--Jill, Garfunkel, and Karen--were unique individuals, yet common themes emerged from accounts of their boredom. Their disengagement from school evolved and increased over time. Their reports of their academic performance were consistent with school records and the literature. Like Cordova and Lepper (1996), we saw a deterioration of school productivity, achievement, and motivation from elementary to middle or junior high school, accelerating in high school. The 10 students in this study equated "schooling" with boredom. Schooling was teacher-directed, textbook-based, and addressed content students already knew. This was a clear contrast to "learning." The learning they sought had five interdependent features, five C's: control, choice, challenge, complexity and caring. As will become apparent, the extent to which the five C's were available in an activity determined whether these students learned or were bored. Each C will be described and the students' words will bring them to life. Control These students spoke clearly about their need for control or self-determination in their learning experiences. Because she felt a sense of control, Jill thrived in her elementary school Challenge program: It wasn't the teacher telling us what to do. It was more like things that we discovered on our own that the teacher didn't even know about and that's a good feeling. It was really fun that we could do that, to know that we figured it out on our own, like sort of a teamwork thing. That was really interesting.
By grade 10, her school experiences had become "mundane," filled with experiences like those reported by the students in Gallagher, Harradine, and Coleman (1997): copying, repetition, and passive listening to teachers "droning." She attempted to regain some of the control she had enjoyed in the Challenge program by writing to her school counselor and the District Consultant for Gifted Programs. Appendix A provides the body of her letter which suggests ways her courses might be modified to make them more valuable and challenging. With her sense of humor intact, she signed her letter "Chairman of the Bored." The letter's content makes it clear Jill sought challenge and choices as well as control. She was eager to learn and willing to work; she was not lazy. The school's response was not sufficient to get Jill to attend her classes. Eventually, in grade 12, Jill was asked to leave school. She had missed (skipped) too many classes to graduate. Garfunkel was always confronting his teachers and the school system in general. He had this to say about his grades and his need to control his education: Why are they [his grades] so poor if I'm so smart? ... Because in high school, it's not like it's your opinion, you have to write what the teachers tell you to write and I really don't want to.... I've been told to pass through High School you have to jump through hoops and I don't want to. I want to make my own hoops. Like Jill, Garfunkel wanted power. Kohn (1993) also considers grades to be a means of controlling students. Garfunkel did not comply and he did not graduate; he felt the grades were not worth it. These students sought a sense of self-determination, the power to change the situation and the authority to implement their choices. Choice Concerns related to control were intricately intertwined with choice. In practice and research, they are difficult to distinguish because making a choice is only significant if you have sufficient power or control to act on it. 
Studies of intrinsically motivated learning (learning voluntarily for the sheer joy of learning) have treated the two as one, calling it self-determination (e.g., Zuckerman, Porac, Lathin, Smith & Deci, 1978). No matter the term, the findings have been consistent: choice/control/self-determination enhances motivation to learn. We distinguished control from choice. Control issues were evident in comments related to the implicit distribution of power while choice issues focused on explicit opportunities to act on one's preferences. These students felt their ability to make choices, to make decisions, was not honored in schooling, and it should be. They expected their opinions and interests to be reflected in their education. They made it clear that disrespect for their ability to make powerful decisions fueled their sense of injustice and resentment toward schools. Ultimately, they realized they held the "trump card" in school--they could choose not to produce the work expected of them. Karen was by choice a teen mother of three children. She wanted to be a mother. Just before the birth of her second child she chose to marry the father of all three. In her words: I am not a drug addict. I do not smoke. I don't even drink. I've never run away. I've never spent a night on the street or been picked up by the police. I've never been abused by my parents and my dad was not an alcoholic.... So why did I get pregnant for the first time at fifteen? Because I WANTED to. Whether or not one agrees with Karen's choices, it was apparent in the interviews that she had given them a great deal of thought. After choosing not to attend her "regular" high school program, Karen enrolled in an alternative program where she found support for her writing passion. In the traditional high school, she felt "They're supposed to be educating you, but they only want to be teaching what they want to teach you, not what you want to learn." 
She wanted to work at her own pace, moving ahead instead of waiting when she had "nothing else to do." Gifted students resented two choices often offered to them: opportunities to fill time with more of the same work and to tutor struggling classmates. The students questioned the educational value of both. They wanted and felt they deserved other options that offered them developmentally appropriate, powerful learning experiences. "Why should I have to wait if I got it the first time?" (Jill). The types of choices the students wanted to make in their education included: * Content: The courses that were required; the topics, the content of any course, and the course materials (e.g., selecting the novels and poems to be studied). They wanted to be able to enhance the relevance of the content and connections between the curriculum, their interests, and real world experiences. * Process: How they learn (e.g., teaching methods requiring higher levels of thinking, hands-on activities with authentic materials), and the pace of learning (e.g., quick or individually determined pace with minimal repetition). * Environment: When they learn (time of day, flexible timetabling, optional class attendance), with whom they learn (alone or in small groups, with peers who share their interests, with peers of their choice) and attendance (required only when unfamiliar knowledge and skills are introduced). Challenge A lack of challenge in curriculum is the most frequently mentioned cause of boredom for gifted students in schools (Delisle, 1992; Gallagher, Harradine, & Coleman, 1997; Plucker & McIntire, 1996; Whitmore, 1980). The term "challenge" has multiple meanings in the literature and among the students in this study. It meant accelerated pacing for Garfunkel and Jill, and deeper, more complex thinking for Karen. As Plucker and McIntire (1996) found, these students felt textbook-based instruction was a barrier to any type of challenge.
Jill expressed a common concern--easy tasks may be fun for others but she preferred activities that offered something new, something hard: "Some easy things are fun, but if it's easy why bother 'cause you know you can do it? Why take the time? Why waste the time?" For these students, a "fun" class provided intellectual challenge, fast pace, and greater complexity. Each student found challenge in different types of activities. Garfunkel spoke fondly of opportunities to work independently with "freedom from incompetent teachers" and peers. Jill found the challenge and control she enjoyed in the responsibility of group leadership: I'm good with being an authority. If I'm in charge of something, I'll do a good job if I know I'm important .... I know who is good at what in cooking class.... And they listen because they know I give them the best job that they like to do. I'm good at that. People always ask me what they should do. The students often created their own challenges when they were not challenged by classroom activities. In some cases, they would add a creative dimension to an assignment or elaborate on core content to make it more abstract. They would engage in the self-modified activity rather than the task assigned by the teacher. Jill's "Chairman of the Bored" letter provides clear evidence of her efforts to find challenge in her schoolwork. Her suggestion for math class highlights her willingness to do harder-than-grade-level work instead of easy, grade-appropriate problems once she has demonstrated her proficiency on the latter. In other situations these students created options that were often unappreciated by their teachers and school. Garfunkel, the renegade, took joy in finding small ways to outwit his teachers. Boredom in school is just sitting there when the teacher is babbling, listening to lectures.... I'm being bored sitting there twiddling my thumbs, being class clown, figuring out ways to stump the teacher....
It's agitating; it's frustrating to be bored.... He enjoyed intellectual sparring matches and debating with the teacher. Some teachers planned for these and others found themselves unexpectedly engaged. Skipping classes was a common strategy for creating challenge. Jill's strategy was to avoid classes altogether because it was more challenging and more efficient to teach herself at her own pace. I can jump in and do the hard ones right away. I have to sit and wait for everyone else to practice and practice through ... watch the clock. You just don't want to be there. You don't want to listen. You want to tune everything out.... Sometimes anticipating boring schoolwork made it difficult for her to get out of bed: "[I] wake up in the morning and it's raining and you know you're just going to do another worksheet. What's the point? I'll just make it up tomorrow." The students played the attendance game by monitoring their absenteeism and attendance. They seldom completed daily assignments and homework when they found they could meet the school's requirements and pass a course based on test scores. Jill spoke for herself and others in the study: I set up little challenges for myself taking the risk of getting into trouble by doing that [skipping class]. I liked doing things like that, daily challenges. I don't skip everyday. It's not a big problem because I get the work done ... I beat the system. The 10 students in this study joined the chorus of support for differentiated curricula that let them take greater leaps in difficulty through more complex content, move at a faster pace, reduce repetitions of tasks ("drill and kill"), and move ahead without waiting for the rest of the class. These curricula were free of rote memorization tasks, textbook-bound question-and-answer assignments, and copying from books, the blackboard or overhead transparencies. These students enjoyed their struggles with new material.
Their need for challenge was woven into their need for complexity and a supportive teacher, as will be seen in the next two sections. Complexity Mikulas and Vodanovich (1993) define complexity as a function of unfamiliarity, and these students craved the unfamiliar. "The complexity of a situation relative to an individual is a function of that person's experiences with similar situations" (p. 4). Thus, the perceived complexity of a task will vary from student to student based on their previous and current experiences. As a result, Karen disdained her math class, "It's just really bland and systematic.... You're going. You do the worksheet. You leave. You know it ... does not stimulate the mind." Garfunkel was grim, "Most of the things I get told once then I get told over and over again cause they figure the repetition method works. But most kids get bored and just sit there." All of the students in the study mentioned the need for complexity in their learning experiences. They sought novel, authentic, abstract, open-ended experiences and felt the familiar, artificial, concrete, decontextualized, simplistic nature of most assigned work contributed to their boredom. The only thing you do at school is memorize. That's all they expect you to do. They don't expect you to understand. They just want you to remember that 2 + 2 = 4 and not tell you why. We're never asked, we were never questioned, never inspired to ask why does this work? It was just, you know, do the work, hand it in, I'll mark it. You'll get a grade. That was it. (Karen) Jill created an interesting contrast between her feelings about math and science. She was fascinated with mathematics and never found it boring because "You apply those math facts.... All the numbers you see how they work.... you figure it out." Jill was looking for the whys of mathematics, not the rote number-crunching. "... Math isn't something the teachers really talked about. They just gave you an assignment and let you do it.
I always liked math. That was never really boring." In science class, on the other hand, ... you had to write out labs and stuff and it was mostly copying out of textbooks. Lots of mundane questions over and over again, same questions just reworking the answers, stuff like that. Things over and over again.... Too much writing, copying, teacher talking--never interesting. Complex learning experiences took a number of forms. Students mentioned preferences for rich, messy content; processes that involved high-level thinking and questioning, their emotions and interests; opportunities to develop sophisticated products using the resources of a professional; and opportunities to work in professional contexts. Other students in the study suggested complexity might arise when they were allowed to stick with a topic longer than their classmates do (i.e., slowing down to dig deeper). They "hated bells" that signaled they must disengage from their learning to go to their next class. Complex learning activities often require more time and flexibility than the timetable allowed. Although this comment appears to contradict their desire to move more quickly, the point was that they want a self-determined pace. Thus, the desire for complexity was influenced by their needs for other types of challenge as well as control. Caring The final and perhaps most powerful theme focused on characteristics of teachers and their teaching. A caring teacher could enhance or overcome the other four C's. Teachers who cared about their teaching and their students were admired and valued for their professional integrity and commitment. Caring teachers were described as nonjudgmental, fair, flexible (also see Emerick, 1992) and humorous. They honored students' need to talk, to question, to challenge and be challenged, to dig deeper, and they respected their students' wishes to be respected. Caring teachers were prepared to teach when they came to class.
They used discovery, inquiry-based, and hands-on methods, varying their techniques and media. They gave these students control over some aspects of their learning relevant to students' skills, abilities, and interests, allowing individual explorations and group work with substantial in-class interaction. They showed concern for all individuals' well-being and returned assignments promptly. Caring teachers were enthusiastic about the content of lessons as well as their teaching. Essentially, they made it clear they wanted to be teachers. Uncaring teachers left the class while it was in session without explanation. They seemed to dislike their role and the students. Some teachers were referred to as "control freaks" and "dictators." Karen felt her most boring teachers did not really want to be teachers:

It's almost like they think it's a one-way deal because we're supposed to do everything that they say and be wonderful for them. But when they're not going to give as well.... So a good teacher is one who is not afraid to get in there and help out ... doesn't sit behind a desk all the time like a barrier between him and the kids. Yeah, someone who gets right in the dirt and helps you dig out the little pieces of clay pots or whatever ... [referring to a favorite social studies fieldtrip] ... really wanting to be with the kids, not just getting paid; goes past what the job requires.... If I could give one message to teachers I'd say don't take the job unless you're really going to go out of your way to do it.

Karen's account of a caring teacher included the teacher's willingness to listen and to pursue students' questions:

If I have a question, we'll think it out together. Even if she doesn't have the answer, she helps with it. She's a wonderful woman because she's understanding. Whenever you need to say something she's right there to listen. Even if she doesn't have the time, she makes time later. She doesn't brush you off and say, "Go read a book about it." 
Flexibility was an essential characteristic of the teachers that enabled Garfunkel to learn:

... good teachers, they have that new teacher's kind of innocence.... They let you do stuff. They don't stick with routine that works. They try new things and if it works, it works, if it doesn't, it doesn't.

The 10 students in this study did not generalize their frustration with uncaring teachers to all teachers. All had at least one caring teacher at all times throughout their school years. Mutual respect and reciprocity were fundamental manifestations of caring. These students were prepared to respect their teachers and schools, and felt they deserved respect in return. Ultimately, one student wondered why he and his peers should be expected to respect the mandatory attendance requirement while, at the same time, the system was not required to educate him. The theme underlying students' accounts of their classroom experiences was that boredom and learning were mutually exclusive: they were never bored when they were learning and they were never learning when they were bored. Apparently, the antidote to boredom was learning that involved one or more of the five C's, and the more the better.

Discussion

Each of the C's has been associated with learning or boredom in the literature, but in isolation. For example, in the early 1950s, Fenichel (as cited in Mikulas & Vodanovich, 1993) noted the relationship between control and boredom. He felt boredom "arises when we must not do what we want or we must do what we don't want to do" (p. 359). Ames's (1992) work is representative of the orientation in the 1990s. She demonstrated that student control of learning enhanced engagement and the quality of learning. Both were consistent with our findings and those of Larson and Richards (1991) indicating boredom was greatest in teacher-directed activities. The "ideology of control" (Noddings, 1992) that organizes schools was anathema to the gifted students in our study. 
Noddings feels schools are generally unsupportive of students with genuine intellectual or intrinsic interests. Sarason (1990) and Csikszentmihalyi (1975) concur, suggesting that the present organization of schools is so bureaucratic that they can neither address the interests nor satisfy the curiosity of children. Kohn (1993) examined the "powerlessness" many students experience in their education. The effects of keeping students powerless include diminished physical health, depression, difficulty making decisions, and reduced motivation to achieve in school assignments. In support of this claim he cites the work of Taylor (1989), who found "few things lead more reliably to depression and other forms of psychological distress than a feeling of helplessness" (p. 11). Kohn argued,

... if we want children to take responsibility for their own behavior, we must first give them responsibility ... The way a child learns how to make decisions is by making decisions, not by following directions ... students should not only be trained to live in a democracy when they grow up; they should have the chance to live in one today. (p. 11)

The types of decisions or choices of content, process, and environment mentioned by participants in this study have arisen in other research. In an investigation comparing gifted and nongifted students' learning preferences, Kanevsky and Kay (1998) found differences in the nature and strength of their preferences. Among these differences were choices. Like the participants in this study, and more intensely than their nongifted peers, gifted students wanted opportunities to choose topics of study, the format of the products of their learning, the way a product was graded, and the members of learning groups. The effect of student choice was also investigated by Cordova and Lepper (1996). They found offering 4th and 5th grade students choice enhanced engagement, the amount learned, perceived competence, and levels of aspiration. 
Jill, Garfunkel, and Karen would agree, but found their strengths and learning needs inconvenient for the system. As students move into adolescence, their increasing need for independence and autonomy becomes more evident in school (Turner & Meyer, 1995). A tension is growing between this phenomenon and political pressures: as curricula become increasingly test-bound and "standardized," many educators feel opportunities to respond to students' needs and preferences have diminished. However, this need not be true. Classroom activities and projects can be designed that offer students choices and control (Kanevsky, 2002). The hunger for challenge is shared by "producing" gifted students (Plucker & McIntire, 1996) as well as these nonproducers. It is also consistent across age levels, as it has been observed in highly creative high school (Hilliard, 1993), middle school (Plucker & McIntire, 1996), and preschool (Kanevsky, 1992) gifted students. Observational studies of classroom practices have validated students' reports of repetitive, low-level classroom activities filling the majority of their school day (Archambault et al., 1993; Westberg, Archambault, Dobyns, & Salvin, 1993). As Csikszentmihalyi, Rathunde, and Whalen (1993) reported, "... boredom occurs when teachers expect too little" (p. 10). In contrast, Karen, Jill, and Garfunkel wanted new ideas and skills introduced quickly and continuously to honor their capacity and desire to learn rather than "a re-run" of the past. They craved opportunities to develop new skills and understandings. They sought experiences in what Vygotsky (1978) characterized as the "zone of proximal development." This zone is created when a student and teacher attempt to solve a problem for which the learner has no immediate solution but can learn to solve it with the teacher's support. The breadth of the zone of proximal development results from the dynamic interaction between the task, the student, and the teacher. 
Teachers who offer and support tasks that include choice, control, complexity, and challenge will expand the zone of proximal development, resulting in relatively greater learning benefits for students. Clearly, Jill did not feel much of her education created a zone of proximal development. Easy tasks simply required her to use previously acquired skills and knowledge without assistance. In contrast, her letter described activities involving knowledge and skills beyond those she already knew. An appreciation for the richness and complexity of phenomena in natural and professional settings was evident in the project- and problem-based learning experiences all three students recounted fondly. As Jill indicated in the Bus Ed (Business Education) and Socials (Social Studies) sections of her letter, she wanted to probe deeper, beyond the standard fare. Cordova and Lepper (1996) found that contextualizing lessons in the ways Jill suggests increased learning, task engagement, and intrinsic motivation. They state, "By removing learning from the contexts in which both its practical utility and its links to everyday interests and activities would be obvious to children, teachers risk undermining children's intrinsic motivation for learning" (p. 715). The significance of experiences with caring teachers cannot be overstated. Csikszentmihalyi and McCormack (1986) reported, "Most students who become interested in an academic subject do so because they have met a teacher who is able to pique their interest" (p. 7). Csikszentmihalyi, Rathunde, and Whalen (1993) described very similar student responses to teachers in their study of talented teens:

Enthusiastic teachers inspire ... students to reconsider the intrinsic rewards of exploring a domain of knowledge. Talented teenagers experience apathetic or lackluster instruction with an especially acute sense of disappointment. 
More than most teens, they reach high school already interested in a particular domain and its subjective rewards. This awareness makes them ... particularly intolerant of teachers who go through the motions. (p. 185)

The feelings of the participants in this study resonated with these findings. They respected teachers whose actions reflected a sincere curiosity and passion for their subject. It comes as no surprise that those teachers were most likely to weave the other four C's into their instruction. The 10 nonproducers in our study had used some of the time they spent waiting to learn to question their position in the school system. It appears academic production and achievement posed a moral dilemma. It was resolved by a choice over which they had control: to produce or not to produce the results schools expected of them. One of the rights these students held sacred regarding their education was the equal opportunity to learn. Why wasn't there as much learning for them in school as there was for other students? Their sense of justice was further offended by the time wasted waiting for classmates to learn what they already knew, or finish work they had already completed. Participants in our study felt attendance should be optional if they already knew the material or if their assignments were complete. If class attendance was mandatory, learning should be too. Isn't that fair? Like the gifted students Silverman (1989) described, the students in this study recognized and resented the "double standards" and inconsistencies evident in their schooling. A growing sense of frustration, disappointment, and injustice regarding their schooling emerged from these students' stories. They made a clear distinction between their "learning" and their "schooling." Boredom appeared only in their schooling, not their learning experiences. The students perceived learning as an essential force in their lives, one from which they derived a sense of their identity. 
Their learning was often self-directed and facilitated by caring teachers. Schooling, on the other hand, was generally a tiring, frustrating experience. The students' resentment and boredom evolved and escalated as they consistently experienced a lack of control, choice, challenge, complexity, and support within the classroom setting. They gradually disengaged from classroom tasks, and their productivity and grades declined as their boredom intensified in middle and high school. In spite of their levels of frustration, students' attitudes were amazingly positive and their intrinsic motivation to learn burned bright. They were articulate and optimistic. None used a complaining tone or whined, but some were clearly frustrated, angry, and demoralized. This "passionate idealism" (Piechowski, 1991) indicates their sensitivity to the conflict between the real and ideal world. Like the students in the study by Gallagher, Harradine, and Coleman (1997), ours were acutely aware of the many difficulties teachers face while attempting to meet the needs of all students. Still, they had developed an intense resentment toward schooling resulting from the injustices and unfairness they perceived. Their empathy for the difficulties teachers face while attempting to manage and meet all students' needs could not overcome their indignation, so they quit producing school assignments. These gifted students felt this was the most honorable response to the activities they had been offered in the guise of education. The intensity of these students' need for learning experiences that involve the five C's (control, choice, challenge, complexity, caring) suggests that although they underachieve in school, they have retained the higher levels of intrinsic motivation and perceived competence found in samples of gifted students achieving academic success (Gottfried & Gottfried, 1996; Vallerand, Gagne, Senecal, & Pelletier, 1994). 
Our gifted students cherished their moments of learning within classroom settings. They clearly indicated how essential learning was to them and how their disappointment and frustration grew over time. They felt they had a right to learn, as well as a strong desire to learn, but these needs were not met in most of their classes. Observational studies (e.g., Archambault et al., 1993) and self-report survey data (e.g., Gentry, Gable, & Springer, 2000) make it clear the education system has not come close to meeting the needs of gifted students, particularly in middle and high schools. These students are not lazy learners. Our findings are consistent with those reported by Csikszentmihalyi, Rathunde, and Whalen (1993):

Adults, after all, commonly fault adolescents for what they perceive as laziness, lack of discipline and a counterproductive defiance of authority. But what came through clearly in our study was an avid willingness to accept challenges and overcome obstacles when the problems were interesting and the necessary skills were within the individual's reach. (p. 186)

Tentative implications for practice will be suggested; however, the qualitative nature of this study precludes generalizing these findings. Three linked recommendations emerged from our methodology as much as our findings. The deep interviewing techniques used in this study offered an opportunity to hear the students in a new way. The member checks that are a part of such a qualitative study convinced the students that they were truly being listened to, eliciting a desire to clearly articulate their feelings and creating a rich experience that their educators could not take time for. From this experience we learned that educators need to: (1) ask students about their boredom, (2) listen and probe until their understanding is deep and accurate, and then (3) act on what they hear. These students had a great deal to say, but they had never been asked. 
We have focused on the commonalities in their descriptions of boredom and learning; however, individual differences were also evident. As a result, we feel interventions must be sensitive to the most powerful aspects of each student's boredom and learning. Other implications include the obvious: weave opportunities for personal control, choice, challenge, and complexity into classroom activities. Clearly, further empirical study will be necessary before solid guidelines for interventions can be proposed. Evidence of the impact of each of the five C's on learning is not new. The dynamic relationships among these factors are far more idiosyncratic and complex than earlier literature has suggested. We have shown that their roles in learning, boredom, and academic productivity are not discrete; they are interdependent. The boredom of these students was not dispositional or situational--it was both. Perhaps these gifted students arrived at school more prone to boredom than their productive and achieving gifted peers. Perhaps characteristics of their personality or temperament made them more sensitive to a lack of the five C's, so their response was more intense and driven by moral, as well as cognitive and emotional, concerns. Like their learning, each student's boredom was unique, complex, dynamic, and progressive. It evolved and increased throughout their school years. Perhaps gifted students' boredom (as well as their learning) is qualitatively different from that of their nongifted peers. The moral impact of an inappropriate education on the classroom learning and achievement of gifted students has been relatively ignored and must be explored more fully. These are exciting and essential topics for future research. The findings will provide us with the ability to prevent and alleviate classroom boredom. It is somewhat surprising that so many gifted students have managed to "grin and bear it" for so many years. 
Ultimately, an appropriate education for gifted students must honor their right to learn in school, or we should not be surprised when they choose not to honor it with a commitment to engaging in activities from which they learn little or nothing.

Appendix A

Excerpt from Jill's Letter to her School Counselor and the District Consultant for Gifted Programs

Science: In my Science 10 class I would like to learn more about the subject areas we are studying by researching more. Maybe I could get different books related to the topic area and write a short report or conclusion about my findings. It would also be better in science if we were able to do more "hands on" assignments rather than just copying question answers out of a book.

Math: In my Math 10 class I would like to do the thing we discussed where I would do five or so questions that were the hardest of the assignment. If I got those questions right, that would be my work for the day or I could get a harder question to work out. If I was confused or got them wrong, I would have to do the whole assignment.

Bus Ed: My Business Education class is going fine right now, just a little slow. If there were any way of speeding up the rate we are going at, that would be fine. One thing, I don't know if it is possible, but it would be neat if I could do a business project such as, how much would it cost to start my own company and what would I need to know about costs, advertising and just basically staying in a good profitable business with a promising future.

Socials: I would like to study a more wider [sic] topics in socials than just certain events in Canadian history. I would like to know what was going on in other countries of the world at that time. Another thing would be to tie the socials and science together and see how they fit such as at what times were people inventing and making scientific discoveries and how they affected the people and the economy. 
In this class it would also be better if we had more then and now discussions on maybe government or taxes and what they did about it then and try to come to a conclusion of what we can do about it now.

Sincerely, Chairman of the Bored

REFERENCES

Abramson, E. E., & Stinson, S. G. (1977). Boredom and eating in obese and nonobese individuals. Addictive Behaviors, 2, 181-185.
Ames, C. (1992). Classrooms: Goals, structures, and student motivation. Journal of Educational Psychology, 84, 261-271.
Archambault, F. X., Westberg, K., Brown, S. B., Hallmark, B. W., Emmons, C. L., & Zhang, W. (1993). Regular classroom practices with gifted students: Results of a national survey of classroom teachers. Storrs, CT: National Research Center on the Gifted and Talented.
Cordova, D. I., & Lepper, M. R. (1996). Intrinsic motivation and the process of learning: Beneficial effects of contextualization, personalization and choice. Journal of Educational Psychology, 88(4), 715-730.
Csikszentmihalyi, M. (1975). Beyond boredom and anxiety. San Francisco, CA: Jossey-Bass.
Csikszentmihalyi, M., & McCormack, J. (1986, February). The influence of teachers. Phi Delta Kappan, 67(6), 415-419.
Csikszentmihalyi, M., Rathunde, K., & Whalen, S. (1993). Talented teenagers. New York: Cambridge University Press.
Delisle, J. R. (1992). Guiding the social and emotional development of gifted youth: A practical guide for educators and counselors. New York: Longman.
Drory, A. (1982). Individual differences in boredom proneness and task effectiveness at work. Personnel Psychology, 35, 141-151.
Emerick, L. J. (1992). Academic underachievement among the gifted: Students' perceptions of factors that reverse the pattern. Gifted Child Quarterly, 36(3), 140-146.
Eysenck, H. J., & Zuckerman, M. (1978). The relationship between sensation seeking and Eysenck's dimensions of personality. British Journal of Psychology, 69, 483-487.
Farmer, R., & Sundberg, N. D. (1986). 
Boredom proneness--The development and correlates of a new scale. Journal of Personality Assessment, 50(1), 4-17.
Farrell, E. (1990). Hanging in and dropping out: Voices of at-risk high school students. New York: Teachers College Press.
Feldhusen, J. F., & Kroll, M. D. (1991). Boredom or challenge for the academically talented in school. Gifted Education International, 7, 80-81.
Fogelman, K. (1976). Bored eleven-year-olds. British Journal of Social Work, 6, 201-211.
Gallagher, J., Harradine, C. C., & Coleman, M. R. (1997). Challenge or boredom? Gifted students' views on their schooling. Roeper Review, 19(3), 132-136.
Gentry, M., Gable, R. K., & Springer, P. (2000). Gifted and nongifted middle school students: Are their attitudes toward school different as measured by the new affective instrument, My Class Activities ...? Journal for the Education of the Gifted, 24(1), 74-96.
Gottfried, A. E., & Gottfried, A. W. (1996). A longitudinal study of academic intrinsic motivation in intellectually gifted children: Childhood through early adolescence. Gifted Child Quarterly, 40(4), 179-183.
Hill, A. B. (1975). Work variety and individual differences in occupational boredom. Journal of Applied Psychology, 60, 128-131.
Hilliard, S. A. D. (1993). Who's in school? Case studies of highly creative adolescents. Dissertation Abstracts International, A 54(6), 2110.
Kanevsky, L. S. (1992). The learning game. In P. Klein & A. J. Tannenbaum (Eds.), To be young and gifted (pp. 204-241). Norwood, NJ: Ablex.
Kanevsky, L. S. (2002). Choice: A way to share responsibility for differentiating curriculum. Gifted Education Communicator, 33(3), 48-50.
Kanevsky, L., & Kay, S. (1998, April). Possibilities for learning: Features of curriculum gifted and non-gifted students like and dislike. Paper presented at the annual meeting of the American Educational Research Association, San Diego, CA.
Karp, E., & Goldfarb Consultants (1988). 
The drop-out phenomenon in Ontario secondary schools: A report in the Ontario study of the relevance of education and the issue of dropouts, student retention and transition series. Toronto: Ontario Ministry of Education.
Kohn, A. (1993). Choices for children: Why and how to let students decide. Phi Delta Kappan, 75(1), 8-16, 18-21.
Larson, R. (1989). Beeping children and adolescents: A method for studying time use and daily experience. Journal of Youth and Adolescence, 18(6), 511-530.
Larson, R. W., & Richards, M. H. (1991). Boredom in the middle school years: Blaming schools versus blaming students. American Journal of Education, 99, 418-443.
Lincoln, Y., & Guba, E. (1985). Naturalistic inquiry. Beverly Hills, CA: Sage.
Mikulas, W. L., & Vodanovich, S. J. (1993). The essence of boredom. The Psychological Record, 43, 3-12.
Noddings, N. (1992). Advances in contemporary educational thought: Volume 8. The challenge to care in schools: An alternative approach to education. New York: Teachers College Press.
Piechowski, M. M. (1991). Emotional development and emotional giftedness. In N. Colangelo & G. Davis (Eds.), A handbook of gifted education (pp. 285-306). Boston: Allyn & Bacon.
Plucker, J. A., & McIntire, J. (1996). Academic survivability in high-potential, middle school students. Gifted Child Quarterly, 40(1), 7-14.
Robinson, W. P. (1975). Boredom at school. British Journal of Educational Psychology, 45, 141-152.
Rothman, R. (1990, November 7). Educators focus attention on ways to boost motivation. Education Week, 11-17.
Samuels, D. J., & Samuels, M. (1974). Low self-concept as a cause of drug abuse. Journal of Drug Education, 4, 421-438.
Sarason, S. B. (1990). The predictable failure of educational reform: Can we change course before it's too late? San Francisco: Jossey-Bass.
Silverman, L. K. (1989). Social development, leadership and gender issues. In J. VanTassel-Baska & P. Olszewski-Kubilius (Eds.), Patterns of influence on gifted learners (pp. 291-327). 
New York: Teachers College Press.
Smith, P. C. (1955). The prediction of individual differences in susceptibility to industrial monotony. The Journal of Applied Psychology, 39, 322-329.
Sundberg, N. D., & Bisno, H. (1983, April). Boredom at life transitions--adolescence and old age. Paper presented at the meeting of the Western Psychological Association, San Francisco, CA.
Taylor, S. E. (1989). Positive illusions: Creative self-deception and the healthy mind. New York: Basic Books.
Turner, J., & Meyer, D. (1995). Motivating students to learn: Lessons from a fifth-grade math class. Middle School Journal, 27, 18-25.
Vallerand, R. J., Gagne, F., Senecal, C., & Pelletier, L. G. (1994). A comparison of the school intrinsic motivation and perceived competence of gifted and regular students. Gifted Child Quarterly, 38, 172-175.
Vygotsky, L. S. (1978). Mind in society: The development of higher psychological processes. Cambridge, MA: Harvard University Press.
Westberg, K. L., Archambault, F. X., Dobyns, S. M., & Salvin, T. J. (1993). An observational study of instructional and curricular practices used with gifted and talented students in regular classrooms. Storrs, CT: The National Research Center on the Gifted and Talented.
Whitmore, J. (1980). Giftedness, conflict and underachievement. Boston: Allyn & Bacon.
Zuckerman, M., Porac, J., Lathin, D., Smith, R., & Deci, E. L. (1978). On the importance of self-determination for intrinsically motivated behavior. Personality and Social Psychology Bulletin, 4(3), 443-446.

Lannie Kanevsky is an associate professor of education at Simon Fraser University (Burnaby, British Columbia, Canada) and a member of the editorial advisory board for the Roeper Review. Her teaching and research focus on individual differences in learning, particularly gifted students' learning. She is the author of the Tool Kit for Curriculum Differentiation. 
E-mail: kanevsky at sfu.ca

Tacey Keighley is a resource room teacher at Terry Fox Senior Secondary School (Port Coquitlam, BC, Canada) working with gifted, special needs and at-risk students. She also teaches adult learners who are completing requirements for a high school diploma at the Coquitlam Learning Opportunity Center. E-mail: tkeighley at sd43.bc

Manuscript submitted January 14, 2003. Revision accepted February 14, 2003.

From checker at panix.com Tue May 17 14:48:41 2005 From: checker at panix.com (Premise Checker) Date: Tue, 17 May 2005 10:48:41 -0400 (EDT) Subject: [Paleopsych] Joseph Brodsky: Listening to Boredom Message-ID: Listening to boredom. (excerpt from 'In Praise of Boredom'; adapted from Dartmouth College commencement address) Joseph Brodsky. Harper's Magazine, March 1995 v290 n1738 p11(3) Abstract: Boredom is a natural condition of modern life that plagues rich and poor alike. Even unusually gifted individuals who create lucrative innovations must endure boredom. Advice for new college graduates on how to survive tedium is presented. ------------- A substantial part of what lies ahead of you is going to be claimed by boredom. The reason I'd like to talk to you about it today, on this lofty occasion, is that I believe no liberal arts college prepares you for that eventuality. Neither the humanities nor science offers courses in boredom. At best, they may acquaint you with the sensation by incurring it. But what is a casual contact to an incurable malaise? The worst monotonous drone coming from a lectern or the most eye-splitting textbook written in turgid English is nothing in comparison to the psychological Sahara that starts right in your bedroom and spurns the horizon. Known under several aliases--anguish, ennui, tedium, the doldrums, humdrum, the blahs, apathy, listlessness, stolidity, lethargy, languor, etc.--boredom is a complex phenomenon and by and large a product of repetition. 
It would seem, then, that the best remedy against it would be constant inventiveness and originality. That is what you, young and new-fangled, would hope for. Alas, life won't supply you with that option, for life's main medium is precisely repetition. One may argue, of course, that repeated attempts at originality and inventiveness are the vehicle of progress and, in the same breath, civilization. As benefits of hindsight go, however, this one is not the most valuable. For if we divide the history of our species by scientific discoveries, not to mention new ethical concepts, the result will not be very impressive. We'll get, technically speaking, centuries of boredom. The very notion of originality or innovation spells out the monotony of standard reality, of life. The other trouble with originality and inventiveness is that they literally pay off. Provided that you are capable of either, you will become well-off rather fast. Desirable as that may be, most of you know firsthand that nobody is as bored as the rich, for money buys time, and time is repetitive. Assuming that you are not heading for poverty, one can expect your being hit by boredom as soon as the first tools of self-gratification become available to you. Thanks to modern technology, those tools are as numerous as boredom's symptoms. In light of their function--to render you oblivious to the redundancy of time--their abundance is revealing. As for poverty, boredom is the most brutal part of its misery, and escape from it takes more radical forms: violent rebellion or drug addiction. Both are temporary, for the misery of poverty is infinite; both, because of that infinity, are costly. In general, a man shooting heroin into his vein does so largely for the same reason you rent a video: to dodge the redundancy of time. The difference, though, is that he spends more than he's got, and that his means of escaping become as redundant as what he is escaping from faster than yours. 
On the whole, the difference in tactility between a syringe's needle and a stereo's push button roughly corresponds to the difference between the acuteness of time's impact upon the have-nots and the dullness of its impact on the haves. But, whether rich or poor, you will inevitably be afflicted by monotony. Potential haves, you'll be bored with your work, your friends, your spouses, your lovers, the view from your window, the furniture or wallpaper in your room, your thoughts, yourselves. Accordingly, you'll try to devise ways of escape. Apart from the self-gratifying gadgets I mentioned before, you may take up changing your job, residence, company, country, climate; you may take up promiscuity, alcohol, travel, cooking lessons, drugs, psychoanalysis. In fact, you may lump all these together, and for a while that may work. Until the day, of course, when you wake up in your bedroom amidst a new family and a different wallpaper, in a different state and climate, with a heap of bills from your travel agent and your shrink, yet with the same stale feeling toward the light of day pouring through your window. You'll put on your loafers only to discover that they're lacking bootstraps by which to lift yourself up from what you recognize. Depending on your temperament and your age, you will either panic or resign yourself to the familiarity of the sensation, or else you'll go through the rigmarole of change once more. Neurosis and depression will enter your lexicon; pills, your medicine cabinet. Basically, there is nothing wrong with turning life into the constant quest for alternatives, into leapfrogging jobs, spouses, and surroundings, provided that you can afford the alimony and jumbled memories. This predicament, after all, has been sufficiently glamorized onscreen and in Romantic poetry. The rub, however, is that before long this quest turns into a full-time occupation, with your need for an alternative coming to match a drug addict's daily fix. 
There is yet another way out of boredom, however. Not a better one, perhaps, from your point of view, and not necessarily secure, but straight and inexpensive. When hit by boredom, let yourself be crushed by it; submerge, hit bottom. In general, with things unpleasant, the rule is: The sooner you hit bottom, the faster you surface. The idea here is to exact a full look at the worst. The reason boredom deserves such scrutiny is that it represents pure, undiluted time in all its repetitive, redundant, monotonous splendor. Boredom is your window on the properties of time that one tends to ignore to the likely peril of one's mental equilibrium. It is your window on time's infinity. Once this window opens, don't try to shut it; on the contrary, throw it wide open. For boredom speaks the language of time, and it teaches you the most valuable lesson of your life: the lesson of your utter insignificance. It is valuable to you, as well as to those you are to rub shoulders with. "You are finite," time tells you in the voice of boredom, "and whatever you do is, from my point of view, futile." As music to your ears, this, of course, may not count; yet the sense of futility, of the limited significance of even your best, most ardent actions, is better than the illusion of their consequences and the attendant self-aggrandizement. For boredom is an invasion of time into your set of values. It puts your existence into its proper perspective, the net result of which is precision and humility. The former, it must be noted, breeds the latter. The more you learn about your own size, the more humble and the more compassionate you become to your likes, to the dust aswirl in a sunbeam or already immobile atop your table. If it takes will-paralyzing boredom to bring your insignificance home, then hail the boredom. You are insignificant because you are finite. Yet infinity is not terribly lively, not terribly emotional. Your boredom, at least, tells you that much. 
And the more finite a thing is, the more it is charged with life, emotions, joy, fears, compassion. What's good about boredom, about anguish and the sense of meaninglessness of your own, of everything else's existence, is that it is not a deception. Try to embrace, or let yourself be embraced by, boredom and anguish, which are larger than you anyhow. No doubt you'll find that bosom smothering, yet try to endure it as long as you can, and then some more. Above all, don't think you've goofed somewhere along the line, don't try to retrace your steps to correct the error. No, as W. H. Auden said, "Believe your pain." This awful bear hug is no mistake. Nothing that disturbs you ever is. From checker at panix.com Tue May 17 14:49:12 2005 From: checker at panix.com (Premise Checker) Date: Tue, 17 May 2005 10:49:12 -0400 (EDT) Subject: [Paleopsych] Jeremy P. Hunter and Mihaly Csikszentmihalyi: The positive psychology of interested adolescents. Message-ID: Jeremy P. Hunter and Mihaly Csikszentmihalyi: The positive psychology of interested adolescents. Journal of Youth and Adolescence, Feb 2003 v32 i1 p27(9) Author's Abstract: Using the experience sampling method (ESM) and a diverse national sample of young people, this study identifies two groups of adolescents: those who experience chronic interest in everyday life experiences and another who experience widespread boredom. These groups are compared against several measures of psychological well-being: global self-esteem, locus of control, and emotions regarding one's future prospects. It is hypothesized that a generalized chronic experience of interest, an innate physiological function, can be used as a signal for a larger measure of psychological health, while chronic boredom is a sign of psychic dysfunction. A strong association between the experience of interest and well-being was found. ----------------- INTRODUCTION Recently, increased attention has been devoted to positive psychological phenomena. 
While the West has a long tradition of inquiry about what makes life worthwhile, including in the past century William James, John Dewey, Carl Rogers, and Abraham Maslow, concern with psychic well-being has seldom been systematically investigated. However, as Maslow might have predicted, now that many in the postindustrialized world have temporarily solved the problems of physical sustenance, attention is free to turn towards psychic development and fulfillment. Inglehart's exploration of world values supports a similar conclusion, showing that the "returns to happiness" attributable to economic gain substantially decrease once a nation's GDP moves beyond basic needs (Inglehart, 1997). In the course of human history, perhaps more people than ever before have reached this rarefied level of material well-being and are now ready to explore the psychic frontiers of the good life. One area of special promise is the experience of interest and the relevance it has for developing youth. While the process of modernization has wrought incredible changes on the human condition in the past 300 years, the everyday lives of contemporary children bear perhaps the least resemblance to their peers of the past. To be a young person in preindustrial Europe meant that everyday life was an insecure and backbreaking affair where about half the population reached adulthood. Even then, most died by 30. By the age of 7, most young boys started work, often as servants in the homes of others, gradually taking up apprenticeship in yet another household (Gillis, 1974). The nasty and brutish existence of most youth left little thought to notions of "optimal development" or even "development" for that matter. For these people, survival was the watchword. Not until the 1880s, when a growing middle class could afford to systematically educate their children, did youth issues come into awareness as something deserving of attention. 
Instead of being sent out to learn a trade, middle class children were sent to school. This new circumstance of elongated dependence and removal from the cycles of production led to the "discovery of adolescence" and established, more or less, the pattern that most youth in the industrial and postindustrial world follow today (Aries, 1965; Gillis, 1974). The rise of youth movements like scouting, the YMCA and YWCA, the enactment of child labor laws, the development of the kindergarten movement, and universal education have made the lives of children safer, culturally richer, and more secure than ever before in history. Yet, these relatively recent developments in the human condition do not seem to be perfect. Despite these hard-won advances, it is not atypical to imagine a middle class teenager bored and despondent, alone, angry, and alienated. While universal education is certainly preferable to children toiling in mines, the system that consumes the lives of most youth does not seem to be optimally calibrated for their developing selves. Csikszentmihalyi et al. (1993) have shown that school for most young people is a dull and uninspiring place to be. Far from nurturing youngsters into expressive, intellectually alive and curious, confident, and able beings, school for many American youth is a trial to be endured. Boredom is so common that many consider it a normal phase of growing up. However, most children do not start out bored and detached. Interest and curiosity about the world are hallmarks of childhood experience. Maria Montessori, the Italian pediatrician-turned-educator, believed that the expression of high intrinsic interest characterized a "normalized" (i.e., healthy) child (Montessori, 1949[1967]). Interest is present from birth and fosters human development by mobilizing resources for worldly engagement. It does this by engendering "the feeling of being engaged, caught up, fascinated, or curious. 
There is a feeling of wanting to investigate, become involved, or expand the self by incorporating new information and having new experiences..." (Izard, 1991, p. 100). Interest impels growth-oriented behaviors--exploration, learning and creativity--increasing the likelihood for successful adaptation and survival (Izard, 1991; Izard and Ackerman, 2000; Piaget, 1981). To effectively live in a demanding and changing environment, one must necessarily actively relate to it. Interest functions as the tool self-organizing creatures (Brandtstadter, 1998) use to direct attention to select information from the environment. William James' pithy description captures this aspect of interest: Millions of items of the outward order are present to my senses which never properly enter into my experience. Why? Because they have no interest for me. My experience is what I agree to attend to. Only those items which I notice shape my mind--without selective interest, experience is an utter chaos. Interest alone gives accent and emphasis, light and shade, background and foreground--intelligible perspective, in a word. (James, 1890, p. 402) By focusing on certain things and not others, the world is reduced to a manageable place. Akin to parts of a camera, interest alters the size of the aperture of attention by widening or constricting the amount of information that enters awareness. By influencing the contents of consciousness, interest mediates the relationship between person and world. Selection, however, is only one facet of interest; another is to provide motivation for developing skills and abilities. According to Silvan Tomkins, interest is so essential for cognitive growth that the "absence of the affective support of interest would jeopardize intellectual development no less than destruction of brain tissue ... There is no human competence which can be achieved in the absence of a sustaining interest" (Tomkins, 1962, p. 343). 
Because of this basic role in learning, John Dewey felt that individual interest should be the center of educational endeavor (McDermott, 1981). Learning complex tasks requires persistence and focus; interest provides concentrative "staying power" in the face of difficulty. When things are interesting, concentration comes easy and persisting at them is less laborious and burdensome. Interest is also associated with a drop in heart rate, a quieting response, which prepares the senses to receive and respond to information (Izard, 1991). Interest cultivates an internal milieu that optimizes the acquisition of information. Research shows that when students are interested in what they are reading, they are likely to recall more points, recall more information from more paragraphs, recall more topic sentences, write more sentences, provide more detailed information about topics read, make fewer errors in written recall, and provide additional topic relevant information. (Renninger, 2000, p. 374) The benefits of interest extend beyond comprehension too. When interested in a topic, students are likely to earn higher grades and test more successfully (Csikszentmihalyi et al., 1993; Schiefele and Csikszentmihalyi, 1995). Interest's role in cognitive development cannot be overstated. The cognitive boons of interest and its motivational power are also complemented by the fact that interest feels good. Izard (1991, p. 108) reports the phenomenology of interest is also characterized by a relatively high degree of pleasantness, self-assurance and a moderate degree of impulsiveness and tension. Joy is often part of the pattern of emotion in an interest-eliciting situation. The experience of this positive state is characterized by 3 qualities: (1) being caught up and fascinated, (2) enjoying what one is doing while (3) in a state of arousal or excitement. Deci (1992) holds that the convergence of interest, enjoyment, and excitement signals the presence of intrinsic motivation. 
Others also report the experience of interest is enjoyable, rewarding, and associated with good feelings (Fazio, 1981; Renninger, 1989, 1990, 2000). Examples of intense interest, like optimal experience or flow (Csikszentmihalyi, 1975, 1990, 1997; Csikszentmihalyi and Csikszentmihalyi, 1988), are among the most enjoyable moments of being alive. Abiding interests, sources of interest engaged in over time, can even provide intellectual and behavioral structures around which the life course forms (Csikszentmihalyi et al., 1993; Csikszentmihalyi and Beattie, 1979; Rathunde, 1993). The complexity of the experience of interest, with its overlap into emotional, volitional, and cognitive areas, provides an optimal state for interfacing the psyche to the environment. Through the experience of interest, nature wires us for worldly involvement. Consider what happens when the experience of interest is absent. The loss of interest is one of the key features of depression (Klinger, 1993). Instead of bringing James' "intelligible perspective" (James, 1890, p. 402) characteristic of normal functioning, depression makes the world dull, gray, and lifeless. The things that usually make us sit up and take notice seem strangely unpromising and empty. Even the fundamentals associated with being alive--the company of others, food, and sex--are not compelling enough to devote energy to. Wakefulness becomes dreadful and oppressive. When depressed, the disappearance of psychic handles to hold on to severely impacts a person's ability to function normally. From this extreme case, we can see that even in the most mundane of waking states interest binds us to the surrounding world. While not as intense as a full-fledged depressive state, boredom is also characterized by an absence of experiencing interest (Farmer and Sundberg, 1986; Klinger, 1993). Where interest is enjoyable, stimulating, and focused, boredom is an unpleasant state of low arousal and motivation (Mikulas and Vodanovich, 1993). 
If interest is viewed as the drive an individual uses to learn, discover, and grow, boredom marks an entropic state of disengagement impeding psychological growth over the long term. Zuckerman (1979) suggests that boredom-prone persons are likely to fall into alcoholism and other types of substance abuse like marijuana, psychedelics, and other stimulants. Later research reveals that in addition to drug use, boredom-prone youth are attracted to extreme forms of sensation-seeking and antisocial behavior (like burglary or vandalism) (Hamilton, 1983; Orcutt, 1984; Sommers and Vodanovich, 2000; Wasson, 1981). A bored kid's attraction to "cheap thrills" may originate from an inability to structure experience in pleasurable ways, what Scitovsky calls "skilled consumption" (Scitovsky, 1976). Delinquent adolescents have been shown to be poor at creating fantasies and elaborate ideas: "Their thought world appears to be a rather barren place" (Hamilton, 1983, p. 366, citing Spivack and Levine, 1964). Without the generative possibilities of the imagination and the skills to manifest them, the allure of destructive and thrilling behavior becomes an easy source of entertainment. Even many nondelinquent youth live for the weekend when they can party and get drunk with their friends, suggesting the skills for structuring enjoyment are not particularly widespread. If interest provides the foundation for building skills that can be converted into enjoyable activity, boredom may be the result of an inability to cultivate such talents. Without such skills, the possibilities inherent in the world become fewer and fewer. Vandalism and getting high emerge as easy ways to find excitement. Considering this, the chronic experience of boredom could be thought of as the "evil twin" of interest. To test this hypothesis, this study examines those youth who maintain a widespread experience of interest in daily life and compares them to those whose experience is much less optimal. 
Uncovering whether Interested youth, compared to their Bored peers, have higher and more stable global self-esteem, a more internal locus of control, and a more hopeful view of their futures adds credence to the notion that experiencing interest is associated with positive development. This research program is distinct from past efforts to either investigate situation-specific examples of interest (Hidi, 1990) or the development of individual interests centered on specific subject matter (Renninger, 2000). This approach examines persons in natural settings who encounter everyday life with a sense of inquiry and enjoyment. In other words, the focus here is less the "target" of interest, that is, particular interest-piquing moments or abiding material relevant to an individual, and more the person who chronically experiences interest. However, this approach does not forego the possibility of either type, situational or individual, but examines persons for whom experiences of interest are a salient feature of everyday life. METHODS Sample The data from this study come from the 1st year of a 5-year (1992-1997), longitudinal project, where 1215 junior and senior high school students from 33 public schools across the country, representing the 6th, 8th, 10th, and 12th grades, participated in a multimodal research effort geared toward understanding career development (Csikszentmihalyi and Schneider, 2000). The study was sponsored by the Alfred P. Sloan Foundation and conducted at the National Opinion Research Center's Ogburn-Stouffer Center (Bidwell et al., 1992) at the University of Chicago. Twelve communities representing the full range of socioeconomic conditions, from poverty-level urban areas to affluent suburbs, participated in this study. Furthermore, students from these locales were randomly chosen to proportionally represent their school in terms of ethnicity, gender, race, and scholastic ability level. 
They were given a full-scale instrument battery that included the NELS questionnaire (modified from the National Educational Longitudinal Study of 1988, NELS:88), a Friend Sociometric Form (FRIENDS), and the Career Orientation Scale (COS), which measures the student's knowledge of working life and their career goals. They also participated in a week of Experience Sampling (ESM). This group of students is referred to as the "focal group." Seventy-four percent of the focal students completed the ESM over the 5-year period, whereas 87% of the focal students completed the NELS, COS, and FRIENDS surveys. ESM Procedures The ESM involves the participant wearing a wristwatch programmed to signal 8 times a day for 1 week. The watch signaled randomly within every 2-h period between 7:30 A.M. and 10:30 P.M., and no signal was within 30 min of another. At each signal, the participant fills out an experience sampling form (ESF) that asks various questions about the participant's activities, location, companionship, and mood. To ensure a level of quality control, only those participants who completed 15 or more ESFs were included in the final database. This is consistent with past practice by Csikszentmihalyi et al. (1993). For the 1st year data (the data reported in this paper), 74% of the total ESM study returned a sufficient number of forms, which is similar to past ESM studies of adolescents (Csikszentmihalyi and Larson, 1987; Csikszentmihalyi and Schneider, 2000). The final sample, consisting of 806 individuals and 28,193 responses, was included. These responses captured nearly every aspect of daily life, from school, to work, to play, to home life. MEASURES The primary measure, the Interested-Bored construct, was created to capture the nuances of the experience of interest. It combines 3 questions from the ESM: a 9-point Likert scale asking "Was this activity interesting?," a 10-point Likert scale asking "Did you enjoy what you were doing?" 
and a 7-point Likert scale defined by "Excited--Bored." This formulation follows theoretical treatments on the nature of interest offered by Dewey (McDermott, 1981) and others (Tomkins, 1962; Izard, 1991). Because the 3 component questions are scaled differently, they were proportionally equalized by converting them to z-score variables keyed to the average of the whole sample, or grand mean, for each variable. After this they were summed to form the Interested-Bored construct. The measure describes a continuous state of varying levels of engagement with the world at the experiential level. We then calculated average Interest scores for each person and divided the distribution into quartiles. The extreme ends, 2 equal-sized groups (n = 207), form the main analytical characters of this study. At one end are youth who experience stimulation, enthusiasm, and pleasure, and on the other, adolescents in a disconnected state of apathy. Table I displays the demographic composition of the Bored and Interested groups. The groups differ significantly in several criteria, namely social class of community (SCC) ([chi square] = 23.0; p < 0.0001), ethnicity ([chi square] = 20.5; p < 0.001), and age (as measured by grade) ([chi square] = 10.4; p < 0.015). The largest and most significant differences in SCC occur in the poor and upper middle classes, where poor Interested students outnumber Bored ones and upper middle class Bored students outnumber Interested ones. Similarly, Interested African Americans outnumber Bored Blacks, while Whites represented in the Bored group outnumber those in the Interested. Gender distribution tilts toward girls, and no difference between the groups was found. Finally, Interested students have greater representation in the 6th grade than do the Bored ones, while the 10th grade has a larger share of Bored students. Because of these differences, later analyses will control for the effects of SCC, race, and grade in analyses of covariance (ANCOVA). 
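As a concrete sketch, the scoring procedure described above (z-scoring each ESM item against the grand mean so the 9-, 10-, and 7-point scales carry equal weight, summing the z-scores, then splitting per-person averages at the quartiles) might look like the following. The function names are illustrative, not the study's actual analysis code.

```python
import numpy as np

def interested_bored(interest, enjoy, excited):
    """Interested-Bored composite: sum of per-item z-scores.

    Each item is standardized against the grand mean of the whole
    sample, so differently scaled Likert items contribute equally.
    """
    items = [np.asarray(x, dtype=float) for x in (interest, enjoy, excited)]
    return sum((x - x.mean()) / x.std() for x in items)

def quartile_groups(person_means):
    """Split per-person average scores at the quartiles.

    The bottom quartile forms the Bored group, the top quartile the
    Interested group, yielding two equal-sized extreme groups.
    """
    person_means = np.asarray(person_means, dtype=float)
    q1, q3 = np.percentile(person_means, [25, 75])
    return person_means <= q1, person_means >= q3
```

In the study, each person's roughly week-long run of responses would first be averaged before the quartile cut; the equal group sizes (n = 207) follow directly from splitting the per-person distribution into quartiles.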
The remaining measures come from the NELS, namely scales of self-esteem, locus of control, optimism, and pessimism. Data for the entire group are not available because some students failed to complete the forms fully. The missing data come from throughout the sample, so no one group is systematically absent. The available N will be reported in the tables. Global self-esteem (GSE) measures the positive and negative feelings one holds about the self. Based on the Rosenberg Self-Esteem Scale (1965), it consists of 7 items, 4 positively and 3 negatively phrased, exemplified by statements like "In general, I feel good about myself," "I am a person of worth," "On the whole, I am satisfied with myself," "At times, I think I am no good at all," "I usually feel emotionally empty," "I don't have much to be proud of," and "I am able to do things as well as others" (Rosenberg, 1986). These are 4-point items ranging from 1 (strongly disagree) to 4 (strongly agree). Cronbach's alpha for these 7 variables was 0.82 for the entire sample, 0.84 for the Bored group and 0.77 for the Interested. Factor analysis of these 7 items found that they form one factor with an eigenvalue of 3.4, accounting for 48.8% of the variance. To create a single construct, the negative items were summed and subtracted from the sum of the positive ones. Locus of control measures beliefs about personal causation. People with an internal locus of control feel they are the masters of their own destiny, and are referred to as "Origins," while "Pawns" are those who locate control externally and believe they are victims of fate and circumstance (Rotter, 1966). This measure, abridged from Rotter's Locus of Control scale (Rotter, 1966), also consists of 7 items, with 1 positively phrased while the remaining were negative. 
They are "When getting ahead somebody/thing stops me," "My plans hardly ever work out," "I feel useless at times," "I do not have enough control over my life," "When I make plans, I am certain they work," "Good luck is more important than hard work," and "Luck is very important in life." Like self-esteem, these are also 4-point items ranging from 1 (strongly disagree) to 4 (strongly agree). Cronbach's alpha for these 7 variables was 0.77 for the entire sample, 0.69 for the Bored group and 0.74 for the Interested. Factor analysis found that they form a factor with an eigenvalue of 2.7, accounting for 38.5% of the variance. To create a single construct, the single positive item was subtracted from the sum of the negative ones. Optimism and pessimism are assessed through 8 items, 4 positive and 4 negative, that gauge the kinds of feelings one has towards the future. These scales, ranging from 1 (not at all) to 7 (very much), each ask about a different emotion. The positive ones include feeling confident, curious, enthusiastic, and powerful, while the negative ones are doubtful, lonely, angry, and empty. We formed 2 variables, 1 negative (eigenvalue = 2.5, explaining 31.1% of the variance) and 1 positive (eigenvalue = 1.6, explaining 20.5% of the variance), by summing the corresponding items. The optimism variable had a Cronbach's alpha of 0.60 for the entire sample, 0.70 for the Bored group and 0.50 for the Interested. The pessimism variable had a Cronbach's alpha of 0.72 for the entire sample, 0.71 for the Bored group and 0.75 for the Interested. RESULTS For each of these constructs, an analysis of covariance (ANCOVA) controlling for grade, race, and SCC was performed to see if the averages between Bored and Interested groups were notably different. To summarize, the measures of self-esteem, locus of control, optimism, and pessimism all showed highly significant differences between the 2 groups. 
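As a hedged illustration of the two workhorse analyses just described, the sketch below implements Cronbach's alpha (the reliability coefficient reported for each scale) and an ANCOVA-style F-test for the group effect, expressed as a full-versus-reduced regression comparison. The helper names and the synthetic usage are assumptions for illustration; this is not the authors' analysis code, and a real analysis would reverse-score negatively phrased items before computing alpha.

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for a k-item scale.

    alpha = k/(k-1) * (1 - sum(item variances) / variance(total score)).
    `items` has shape (respondents, k); reverse-score negative items first.
    """
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()   # sum of item variances
    total_var = items.sum(axis=1).var(ddof=1)    # variance of scale totals
    return k / (k - 1) * (1 - item_var / total_var)

def ancova_group_f(y, group, covariates):
    """F-test for a two-level group factor, adjusting for covariates.

    Fits a full OLS model (intercept + group dummy + covariates) and a
    reduced model without the group term; the F ratio tests whether
    group membership explains outcome variance beyond the covariates.
    """
    y = np.asarray(y, dtype=float)
    n = len(y)
    cov = np.column_stack(covariates) if len(covariates) else np.empty((n, 0))
    X_full = np.column_stack([np.ones(n), np.asarray(group, dtype=float), cov])
    X_reduced = np.column_stack([np.ones(n), cov])

    def rss(X):
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        resid = y - X @ beta
        return resid @ resid

    df1 = 1                           # one parameter (the group term) dropped
    df2 = n - X_full.shape[1]         # residual df of the full model
    F = (rss(X_reduced) - rss(X_full)) / df1 / (rss(X_full) / df2)
    return F, df1, df2
```

With real data one would also inspect the covariate terms themselves, as the paper does when noting the grade and race effects on optimism.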
Global Self-Esteem Table II shows that Interested students report significantly higher self-esteem (M = 8.38) than the Bored (M = 5.25), F(1, 293) = 50.00, p < 0.0001. The influence of grade, race, and social class does not yield significant results. Furthermore, the average self-esteem of the entire sample is between both groups at 7.04. Therefore, even compared to the entire focal group, Interested students report a higher level of esteem. A between-groups t test (t = -7.0, df = 309, p < 0.0001) reveals that the standard deviation of an Interested student's self-esteem is 3.37, while Bored students show greater variability at 4.03. Locus of Control Table III shows that Interested youth are more likely to believe they originate their actions (M = 8.16) while Bored students lean more toward the "Pawn" end of the scale (M = 10.56), F(1, 206) = 37.9, p < 0.00001. Again, the sample mean falls between the two groups, with the Bored group showing fewer signs of personal causation and the Interested showing more. As with self-esteem, the effects of the covariates showed no significant differences. Optimism When envisioning their future, Table IV shows that Interested students feel more hopeful (M = 5.43) than do the Bored students (M = 4.73), F(1, 371) = 33.31, p < 0.0001. The entire sample's average is between these two at M = 5.10. Unlike the case of the previous variables, the covariates Grade (F = 3.83, p < 0.051) and Race (F = 3.98, p < 0.047) had a significant impact on optimism. Whites and Blacks reported feeling greater amounts of optimism than Latinos or Asians, while 8th graders reported less optimism than the other grades. Pessimism When it comes to projecting negative emotions toward the future, Table V shows that Bored students do so more strongly (M = 2.61) than Interested students (M = 2.10), F(1, 374) = 17.83, p < 0.0001. The sample average is again between these two at M = 2.34. 
In this case, the covariates exhibited little influence on the amount of pessimism. DISCUSSION This paper aims to establish evidence that the widespread experience of interest can be seen as a symptom of larger psychological well-being. After identifying 2 groups of students whose daily experiences fall at opposing ends of a continuum of interest, we compared them on a variety of measures of well-being. These results indicate a clear difference between young people who experience chronic interest in their everyday lives versus those who experience boredom. In general, the findings suggest that the Interested children are much more likely to view themselves as effective agents in their world. Believing that the self is good and worthy provides a setting for effective personal functioning. The global self-esteem measures showed significant differences between Bored and Interested students. However, research has suggested that having high self-esteem alone is not enough; esteem must be both high and stable (Kernis et al., 1993; Waschull and Kernis, 1996). Though there are several varieties of esteem variability (Rosenberg, 1986; Savin-Williams and Demo, 1983), Kernis and colleagues suggest that "the essential nature of unstable self-esteem involves the propensity to exhibit variability in self-feelings across time" (Kernis, 1993, p. 1190). We can test this if we examine the repeated ESM measures of self-esteem, also based on the Rosenberg Scale. Here we find the standard deviations of the Bored (SD = 1.40) and Interested (SD = 0.99) students differ significantly (p < 0.001). As with the GSE scores, the mean value of Interested students' ESM esteem is significantly (p < 0.001) higher (7.2) than Bored students' (5.4). This suggests that Interested students enjoy a more durable positive self-concept, while Bored students are less stable and negative in their self-assessment. The literature on boredom proneness (BP) has found similar results regarding self-esteem. 
McLeod and Vodanovich (1991) reported a significant negative correlation between boredom proneness and both self-esteem and autonomy. A line of research that has produced consistent results is the tendency of boredom-prone individuals to dwell on themselves and their internal states. Research like that conducted by Wink and Donahue (1997) found boredom proneness to be significantly related to high narcissism. Seib and Vodanovich (1998) have shown that BP relates strongly to high negative self-awareness, while those high in positive self-awareness were much less likely to be boredom prone, a finding consistent with the Interested students. Some have suggested that those high in negative self-awareness may actually not be particularly aware of their internal states at all and also experience lower self-esteem (Conway and Giannopoulous, 1993). It is also expected that Interested students are more likely to perceive themselves as having a greater internal locus of control. If interest is the psychic "relational mechanism" between person and world, then those who honor interest would also be more likely to believe that the ability to influence one's fate is also high. The relationship between locus of control and global self-esteem also bears itself out. Statistically, global self-esteem correlates negatively with greater external locus of control (r = -0.62, p < 0.0001). This is to say that if I do not believe myself to be a person of worth, then the likelihood of also believing that there is little I can do to influence my fate might also be high, and Bored students seem to sympathize with this circumstance. The lack of a sense of personal causality from one's efforts also undermines the effort to act with intrinsic involvement in the world; the body at rest stays at rest. Students who chronically experience interest, however, take a different tack. To experience interest, by definition, implies that one is interested in something. 
Interest does not occur without a referent, whether it might be the attractive person standing across the room from me, or the fascinating book on the bestseller list. This necessarily means that to facilitate experiencing interest I must grapple with my reality in a way that somehow affects it. This could be walking across the room to start up a conversation, or going to the library to borrow the desired book. Interest requires action. It follows, then, that those who experience a great deal of interest in their lives would also likely believe they are the volitional force behind their actions. The strength of the positive findings regarding Interested students continues on towards their feelings regarding the future. They are much more likely to feel more positive (enthusiastic, powerful, and confident) and less negative (lonely, doubtful, and empty) about growing older than Bored students. These emotional projections take on an even more interesting cast when considered in the light of the prevalence of upper middle class boredom. One would assume that access to resources increases the likelihood a person would feel optimistic and hopeful about the future. However, this does not seem to be the case. The prevalence of material resources does not seem to automatically result in the accumulation of psychic ones. The experiences of Interested youth indicate the calculus for well-being extends beyond familial finances to the economy of the psyche. Interest's association with other measures of well-being suggests this innate process is the foundation for building what might be thought of as psychological capital (Csikszentmihalyi, in press). If the experience of interest is an innate means of optimally relating to the environment, then those individuals who maintain a widespread sense of interest over time develop greater internal resources than those who are disengaged. 
The dividend that comes from acting with interest is the sense of personal effectiveness arising from being the causal agent of one's life. The fact that Interested youth look to their future with greater confidence, enthusiasm, and power reflects this "past performance." Because they trust their own experience and skills, the probability of their success is certainly higher than that of a person wracked with doubt and emptiness. Of course, as we mentioned, just as interest does not happen without a referent, people who maintain a widespread sense of interest do not grow in a vacuum. The Interested person requires the support of a social system and cultural resources to direct the raw impulse of interest toward complex, useful, and rewarding ends (Eccles et al., 1998; Rogoff, 1990; Valsiner, 1998). So, in a sense, the development of psychological capital must be girded by social capital. Renninger (2000) details how nascent interests in children often require adult influence to modulate the level of challenge or to help develop goals before the child can do so autonomously. Therefore, support in enhancing and directing interest might provide an effective fulcrum for leveraging personal development. Material circumstances are certainly better for first-world middle-class youth than they were in the Middle Ages. Widespread malaise among adolescents is therefore not only an undesirable circumstance but a tremendous waste of opportunity and resources. Fortunately, a substantial number of young people do not succumb to this malaise. Interested youth present a picture of vitality and well-being that stands in sharp contrast to their Bored counterparts. Interested students believe in their basic worth, are confident and effective agents in the world, and are optimistic and hopeful about their future. Of course, we cannot make statements about causality, but it seems clear that interest is associated with a matrix of beliefs that govern active involvement.
The implications of this are vast. While these children do not have to face the hardships of their ancestors, the modern world of work still presents daunting challenges. According to management theorist Peter Drucker (1999, p. 163), workers in the emerging knowledge economy "will have to manage themselves. They will have to place themselves where they can make the greatest contribution; they will have to learn to develop themselves. They will have to learn to stay young and mentally alive during a fifty-year working life. They will have to learn how and when to change what they do, how they do it and when they do it." If Drucker is right, the Interested youth reported on here will be ideally suited for life in the twenty-first century. What remains to be answered is how young people acquire the openness to experience that results in experiencing interest. Our studies suggest that the social environment plays a significant role in this development. In the coming years, we will further explore these relationships.
Table I. Demographic Representation of Interested (N = 205) and Bored (N = 204) Groups by Percent

                          Interested   Bored   t value   Group diff.   % of total sample
SCC (chi-square = 23.0, p < 0.0001, df = 4)
  Poor                       19.0       10.3     -2.5     sig.              14.1
  Working                    19.0       14.2     -1.3     NS                16.3
  Middle                     40.0       33.3     -1.4     NS                33.5
  Upper middle               17.6       36.2      4.6     sig.              25.9
  Upper                       5.3        5.9      0.2     NS                10.2
Race (chi-square = 20.5, p < 0.001, df = 5)
  Asian                       5.4        7.8      1.0     NS                 6.6
  Latino                     18.0       14.7     -0.9     NS                15.8
  African American           28.3       12.8     -4.0     sig.              17.4
  White                      47.3       64.2      3.5     sig.              59.3
Gender (chi-square = 1.3, NS, df = 1)
  Male                       39.0       44.6      1.1     NS                40.9
  Female                     61.0       55.4     -1.1     NS                59.1
Grade (chi-square = 10.4, p < 0.015, df = 3)
  6th                        35.6       22.6     -2.9     sig.              28.1
  8th                        26.4       29.4      0.7     NS                28.6
  10th                       19.5       29.4      2.3     sig.              23.6
  12th                       18.5       18.6      0.0     NS                19.8

Note. "Group diff." gives the significance of the Bored/Interested group difference for each row.

Table II. Analysis of Covariance of Global Self-Esteem With Experience of Interest, Controlling for Race and Social Class of Community

                          Adjusted mean   Unadjusted mean   Unadjusted SD
  Bored (N = 153)              5.25             5.29             4.03
  Interested (N = 145)         8.38             8.33             3.38

Total sample mean (N = 623) = 7.04 ± 3.7; F = 50.00, p < 0.001. Main effect df = 1; residual df = 293.
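As a rough sanity check, the chi-square statistics in Table I can be approximately reconstructed from the published percentages alone. The sketch below is a hypothetical reconstruction (not the authors' own analysis): it rebuilds the SCC cell counts from the rounded percentages and group Ns, then recomputes Pearson's chi-square in plain Python, so the result only approximates the published value of 23.0.

```python
# Approximate reconstruction of Table I's SCC chi-square from percentages.
# Rounded percentages introduce small discrepancies from the published 23.0.

def chi_square_independence(table):
    """Pearson chi-square statistic for a contingency table (list of rows)."""
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    grand = sum(row_totals)
    stat = 0.0
    for i, row in enumerate(table):
        for j, obs in enumerate(row):
            expected = row_totals[i] * col_totals[j] / grand
            stat += (obs - expected) ** 2 / expected
    return stat

# Approximate cell counts: reported percentage x group N, rounded.
interested = [round(p / 100 * 205) for p in (19.0, 19.0, 40.0, 17.6, 5.3)]
bored      = [round(p / 100 * 204) for p in (10.3, 14.2, 33.3, 36.2, 5.9)]

chi2 = chi_square_independence([interested, bored])
df = (2 - 1) * (5 - 1)   # df = 4, matching the table
```

Run this way, the reconstructed statistic lands in the neighborhood of the published chi-square with the same df = 4; the remaining gap comes entirely from rounding the percentages back into counts.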
Table III. Analysis of Covariance of Locus of Control With Experience of Interest, Controlling for Race and Social Class of Community

                          Adjusted mean   Unadjusted mean   Unadjusted SD
  Bored (N = 163)             10.56            10.47             3.07
  Interested (N = 148)         8.16             8.26             3.64

Note. Larger means indicate greater external locus of control. Total sample mean (N = 641) = 9.3 ± 3.34; F = 37.90, p < 0.001. Main effect df = 1; residual df = 306.

Table IV. Analysis of Covariance of Optimistic Emotions With Experience of Interest, Controlling for Race and Social Class of Community

                          Adjusted mean   Unadjusted mean   Unadjusted SD
  Bored (N = 189)              4.73             4.75             1.26
  Interested (N = 187)         5.43             5.41             1.07

Total sample mean (N = 744) = 5.1 ± 1.11; F = 33.31, p < 0.001. Main effect df = 1; residual df = 371.

Table V. Analysis of Covariance of Pessimistic Emotions With Experience of Interest, Controlling for Race and Social Class of Community

                          Adjusted mean   Unadjusted mean   Unadjusted SD
  Bored (N = 189)              2.61             2.62             1.19
  Interested (N = 186)         2.10             2.10             1.10

Total sample mean (N = 748) = 2.34 ± 1.11; F = 17.83, p < 0.001. Main effect df = 1; residual df = 374.

ACKNOWLEDGMENTS

This study is part of a longitudinal research program of youth and social development supported by a grant from the Alfred P. Sloan Foundation given to Mihaly Csikszentmihalyi, Charles Bidwell, Larry Hedges, and Barbara Schneider at The University of Chicago.

Accepted April 11, 2002

REFERENCES

Aries, P. (1965). Centuries of Childhood: A Social History of Family Life. Vintage Books, New York.
Beattie-Emery, O., and Csikszentmihalyi, M. (1981). The socialization effects of cultural role models in ontogenetic development and upward mobility. Child Psychiatry Hum. Dev. 12(1): 3-18.
Bidwell, C. E., Csikszentmihalyi, M., Hedges, L., and Schneider, B. (1992). Studying Career Choice: Vol. 1. Overview and Analysis. Ogburn-Stouffer Center, National Opinion Research Center, Chicago.
Brandtstadter, J. (1998). Action perspectives in human development.
In Lerner, R. M. (ed.), Theoretical Models of Human Development: Vol. 1. Handbook of Child Psychology (5th edn.). Wiley, New York.
Conway, M., and Giannopoulous, C. (1993). Self-esteem and specificity in self-focused attention. J. Soc. Psychol. 133: 121-123.
Csikszentmihalyi, M. (1975). Beyond Boredom and Anxiety. Jossey-Bass, San Francisco.
Csikszentmihalyi, M. (1990). Flow. Harper and Row, New York.
Csikszentmihalyi, M. (1997). Finding Flow. Basic Books, New York.
Csikszentmihalyi, M. (2002). Good Business. Viking, New York.
Csikszentmihalyi, M., and Beattie, O. V. (1979). Life themes: A theoretical and empirical exploration of their origins and effects. J. Hum. Psychol. 19(1): 45-63.
Csikszentmihalyi, M., and Larson, R. (1987). Validity and reliability of the Experience Sampling Method. J. Nerv. Ment. Dis. 175: 525-536.
Csikszentmihalyi, M., Rathunde, K., and Whalen, S. (1993). Talented Teenagers: The Roots of Success and Failure. Cambridge University Press, New York.
Csikszentmihalyi, M., and Csikszentmihalyi, I. S. (1988). Optimal Experience. Cambridge University Press, New York.
Csikszentmihalyi, M., and Schneider, B. (2000). Becoming Adult. Basic Books, New York.
Deci, E. (1992). The relation of interest to the motivation of behavior: A self-determination theory perspective. In Renninger, K., Hidi, S., and Krapp, A. (eds.), The Role of Interest in Learning and Development. Erlbaum, Hillsdale, NJ, pp. 43-70.
Drucker, P. F. (1999). Management Challenges for the 21st Century. Harper Business, New York.
Eccles, J. S., Wigfield, A., and Schiefele, U. (1998). Motivation to succeed. In Damon, W., and Eisenberg, N. (eds.), Handbook of Child Psychology: Vol. 3. Social, Emotional, and Personality Development (5th edn.). Wiley, New York, pp. 1017-1095.
Farmer, R., and Sundberg, N. (1986). Boredom proneness: The development and correlates of a new scale. J. Pers. Assess. 50(1): 4-17.
Fazio, R. H. (1981).
On the self-perception explanation of the overjustification effect: The role of the salience of initial attitude. J. Exp. Soc. Psychol. 17: 417-426.
Gillis, J. R. (1974). Youth and History. Academic Press, New York.
Hamilton, J. A. (1983). Development of interest and enjoyment in adolescence. Part II: Boredom and psychopathology. J. Youth Adolesc. 12(5): 363-372.
Hidi, S. (1990). Interest and its contribution as a mental resource for learning. Rev. Educ. Res. 60(4): 549-571.
Inglehart, R. (1997). Modernization and Postmodernization: Cultural, Economic and Political Change in 43 Societies. Princeton University Press, Princeton, NJ.
Izard, C. E. (1991). The Psychology of Emotions. Plenum Press, New York.
Izard, C. E., and Ackerman, B. P. (2000). Motivational, organizational, and regulatory functions of discrete emotions. In Lewis, M., and Haviland-Jones, J. M. (eds.), Handbook of Emotions (2nd edn.). Guilford Press, New York.
James, W. (1890). The Principles of Psychology (Vols. 1 and 2). Dover, New York.
Kernis, M. H., Cornell, D. P., Sun, C. R., Berry, A., and Harlow, T. (1993). There's more to self-esteem than whether it's high or low: The importance of stability of self-esteem. J. Pers. Soc. Psychol. 65(6): 1190-1204.
Klinger, E. (1993). Loss of interest. In Costello, C. (ed.), Symptoms of Depression. Wiley, New York, pp. 43-62.
Larson, R., and Richards, M. (1991). Boredom in the middle school years: Blaming schools versus blaming students. Am. J. Educ. 99: 418-443.
McDermott, J. J. (1981). The Philosophy of John Dewey. The University of Chicago Press, Chicago.
McLeod, C., and Vodanovich, S. J. (1991). The relationship between self-actualization and boredom proneness. J. Soc. Behav. Pers. 6(5): 137-146.
Mikulas, W. L., and Vodanovich, S. J. (1993). The essence of boredom. Psychol. Record 43: 3-12.
Montessori, M. (1949/1967). The Absorbent Mind. Holt, Rinehart and Winston, New York.
Orcutt, J. D. (1984). Contrasting effects of two kinds of boredom on alcohol use. J.
Drug Issues 14: 161-173.
Piaget, J. (1966). The Psychology of Intelligence. Littlefield, Adams, Totowa, NJ.
Piaget, J. (1981). Intelligence and Affectivity: Their Relationship During Child Development. Annual Reviews, Palo Alto, CA.
Rathunde, K. (1993). The experience of interest. Adv. Motiv. Achievement 8: 59-98.
Rathunde, K., and Csikszentmihalyi, M. (1993). Undivided interest and the growth of talent: A longitudinal study of adolescents. J. Youth Adolesc. 22(4): 385-405.
Renninger, K. A. (1989). Individual patterns in children's play interests. In Winegar, L. T. (ed.), Social Interaction and the Development of Children's Understanding. Ablex, Norwood, NJ, pp. 147-172.
Renninger, K. A. (1990). Children's play interests, representation, and activity. In Fivush, R., and Hudson, J. (eds.), Knowing and Remembering in Young Children (Emory Cognition Series, Vol. III). Cambridge University Press, New York, pp. 127-165.
Renninger, K. A. (1992). Individual interest and development: Implications for theory and practice. In Renninger, K., Hidi, S., and Krapp, A. (eds.), The Role of Interest in Learning and Development. Erlbaum, Hillsdale, NJ, pp. 361-396.
Renninger, K. A. (2000). Individual interest and its implications for understanding intrinsic motivation. In Sansone, C., and Harackiewicz, J. M. (eds.), Intrinsic and Extrinsic Motivation: The Search for Optimal Motivation and Performance. Academic Press, San Diego, CA, pp. 373-404.
Rogoff, B. (1990). Apprenticeship in Thinking. Oxford University Press, New York.
Rosenberg, M. (1986). Self-concept from middle childhood through adolescence. In Suls, J., and Greenwald, A. G. (eds.), Psychological Perspectives on the Self (Vol. 3). Erlbaum, Hillsdale, NJ, pp. 107-135.
Rotter, J. B. (1966). Generalized expectancies for internal versus external control of reinforcement. Psychol. Monogr. 80: 1-28.
Savin-Williams, R. C., and Demo, P. (1983). Situational and transituational determinants of adolescent self-feelings. J. Pers. Soc.
Psychol. 44: 820-834.
Schiefele, U., and Csikszentmihalyi, M. (1995). Motivation and ability as factors in mathematics experience and achievement. J. Res. Math. Educ. 26(2): 163-181.
Scitovsky, T. (1976). The Joyless Economy. Oxford University Press, New York.
Seib, H. M., and Vodanovich, S. J. (1998). Cognitive correlates of boredom proneness: The role of private self-consciousness and absorption. J. Psychol. 132(6): 642-652.
Sommers, J., and Vodanovich, S. (2000). Boredom proneness: Its relationship to psychological- and physical-health symptoms. J. Clin. Psychol. 56(1): 149-155.
Tomkins, S. (1962). Affect Imagery Consciousness. Polyglot, New York.
Valsiner, J. (1998). The Guided Mind. Harvard University Press, Cambridge, MA.
Waschull, S. B., and Kernis, M. H. (1996). Level and stability of self-esteem as predictors of children's intrinsic motivation and reasons for anger. Pers. Soc. Psychol. Bull. 22: 4-13.
Wasson, A. S. (1981). Susceptibility to boredom and deviant behavior at school. Psychol. Rep. 48: 901-902.
Wink, P., and Donahue, K. (1997). The relation between two types of narcissism and boredom. J. Res. Pers. 31: 136-140.
Zuckerman, M. (1979). Sensation Seeking. Erlbaum, Hillsdale, NJ.

Jeremy P. Hunter (1) and Mihaly Csikszentmihalyi (2)

(1.) Research Director, The Quality of Life Research Center, Peter F. Drucker School of Management, Claremont Graduate University, Claremont, California. Received PhD in psychology (human development) from The University of Chicago in 2001. Major research interests include the experience of interest, intrinsic motivation, and meditative practice and the quality of life. To whom correspondence should be addressed at Quality of Life Research Center, Peter F. Drucker School of Management, Claremont Graduate University, Claremont, California; e-mail: jeremy.hunter at cgu.edu.

(2.) C. S. and D. J. Davidson Professor of Psychology and Management and Director, Quality of Life Research Center, Peter F.
Drucker School of Management, Claremont Graduate University, Claremont, California. Received PhD in psychology (human development) from The University of Chicago in 1965. Major research interests include the psychology of adolescence and the study of optimal experience, creativity, and intrinsic motivation.

From checker at panix.com Tue May 17 14:49:25 2005
From: checker at panix.com (Premise Checker)
Date: Tue, 17 May 2005 10:49:25 -0400 (EDT)
Subject: [Paleopsych] Richard W. Bargdill: The Study of Boredom
Message-ID:

Richard W. Bargdill: The Study of Boredom
Journal of Phenomenological Psychology, Fall 2000 v31 i2 p188

ABSTRACT

This article extends a phenomenological investigation (Bargdill, R. W., 2000) in which six participants wrote protocols and gave interviews describing the experience of being bored with their lives. This study found that the participants gradually became bored after they had compromised their life-projects for less desired projects. The participants felt emotionally ambivalent because they were thematically angry with others involved in their compromises while being pre-reflectively angry with themselves. The participants non-thematically adopted passive and avoidant stances toward their lives that allowed their boredom to spread to more aspects of their lives. The participants' boredom led them to identity issues because they were no longer actively working toward projects. They felt empty and apathetic because they felt that every action led to boredom, and thus that action was futile. Preliminary distinctions between the experience of life boredom and depression are considered.

THE STUDY OF LIFE BOREDOM

At some point in everyone's life there is an experience of being bored. We may find ourselves bored by certain events such as a book or a job, or by others. At times, we may even be bored with ourselves, complaining that there is nothing for us to do or that nothing interests us.
While boredom is often attributed to the situation in which it arises, evidence suggests that certain people are more prone to being bored than others. These people describe themselves as being bored more frequently and across a variety of situations. This habitual boredom suggests a more serious psychological issue. Psychological and social research demonstrates that a person experiencing boredom often attempts to alleviate this feeling. These attempts can be associated with social ills, such as drug use, vandalism, gambling, and other self-destructive behaviors. For example, drug abuse journals report that boredom is a major factor in the abuse of drugs (Iso et al., 1991; Johnston & O'Malley, 1986; Samuels & Samuels, 1974). A drug abuser is more likely to use drugs when bored and more likely to leave treatment due to boredom (Sherman et al., 1989). In addition, vandals, prisoners, and high school drop-outs (Farrell, 1988; Kirsch, 1986) all cite boredom as a main contributor to their behavior. Similar findings concerning addictive behavior note that pathological gamblers roll the dice in order to avoid or relieve feelings of boredom (Blasczynski et al., 1995). Other research associates boredom with eating disorders (Abramson & Stinson, 1977; Leon & Chamberlain, 1973), excessive cigarette smoking (Ferguson, 1973), and increased drinking (Forsyth & Hundleby, 1987). Furthermore, some authors (e.g., Boss, 1969) believe that boredom, and in effect the social ills linked to it, will be an increasing problem in the future. Because of the severe problems associated with habitual boredom, this phenomenon warrants further study. Accordingly, I will review the literature on boredom as a psychological issue as approached by psychologists from different paradigms. I will then describe an empirical phenomenological study of the experience of life boredom.
LITERATURE REVIEW

Surprisingly, a majority of the research on boredom concentrates on the situational determinants of boredom rather than the assessment of those who experience boredom habitually. This literature review will concentrate on studies that focus on habitual boredom, or the "bored personality." Industrial and human-factors research on boredom typically treats physical monotony as the necessary and sufficient condition for the occurrence of boredom. However, O'Hanlon (1980) notes that the degrees of boredom reported by different individuals in the same monotonous working environment vary greatly. He suggests that some workers performing monotonous jobs are not bored at all, while others claim they exert more "effort" to pay attention to their jobs than their coworkers. O'Hanlon suggests that any task requires that an individual have some interest in accomplishing the task and, at the same time, the task requires the individual's attention. Effort is a concept that alludes to these subjective components of boredom. Thus, if the experience is not "objective" (determined by the stimulus), then certain people could experience boredom chronically. Cognitive researchers find that some people are bored all the time regardless of the situation or the type of stimuli present in that situation. Smith (1981) notes that the most robust finding in cognitive research is that extroverts seem much more likely to be bored than introverts. This suggests that personality factors make extroverts more likely to be situationally bored, and thus eventually make them more likely to be habitually bored as well. Larson and Richards (1991), educational researchers, note that the same youths who report a high frequency of boredom during schoolwork also experience high rates of boredom outside school. This suggests that students who are most bored in school are not people who have something tremendously exciting they would rather be doing.
They are youths who report boredom across many situations; this suggests that boredom may be a trait. In 1986, Farmer and Sundberg developed the Boredom Proneness Scale to address the disposition of bored individuals. They state that the boredom-prone individual is "one who experiences varying degrees of depression, hopelessness, loneliness, and distractibility.... Boredom-prone persons tend to be amotivating and display little evidence of autonomous orientation..." (p. 14). Thus, Farmer and Sundberg suggest that there is a group of people who are bored to a greater degree than the rest of the population. The psychodynamic literature provides a few theories about patients who are habitually bored. Fenichel (1951) describes pathological boredom as habitually occurring when drives or wishes exist, but their objects or aims are repressed. In pathological boredom, Fenichel thinks people experience the tension between instinctual impulses (drives) and unfulfilled gratification (objects) as a longing for something without knowing what it is they wish for. Therefore, people are left to do nothing and experience a sense of aimlessness. Wangh (1979) writes, "[T]he bored person, being most inclined to ascribe his state of mind and mood to outside circumstances, usually feels superior, while the person who complains of emptiness, being basically depressed, is apt to feel inferior" (p. 519). For Wangh, boredom is a transitory emotion, except in cases where boredom prevents the individual from slipping into more detrimental states such as depression. In Neurosis and Human Growth, Horney (1950) describes the neurotic personality style of "resignation." The resigned personality shows familiar symptoms of boredom such as repression of wishes, disinterest in the world, and looking to the world for shallow enjoyment. The resigned type shrinks away from life and growth by not facing this inner conflict. The resigned type believes that life should be "easy, painless, and effortless."
So the resigned type lacks any serious strivings and is averse to putting forth effort. The resigned type appears to dabble in many things but master none, due to a lack of commitment. The very essence of this solution is withdrawing from active living. Several existential authors think that people's experience of being bored with their lives is primarily the result of an inability to create meaningful existences. Sartre (1947) thinks that life has no a priori meaning, and that individuals create all the meaning in their lives. At each moment, we are responsible for choosing the meaning of our situations. Thus, bored persons find themselves waiting for others to make their lives meaningful. O'Connor (1967) agrees with Sartre and feels that we usually avoid boredom's insight into freedom and responsibility because the requirement to create meaning and value is too overwhelming for us. Thus, our choices remain largely passive and unreflective responses to environmental pressures (p. 381). This also means that we generally accept a theory of determinism, genetics, or environmental factors as having control over us. For Straus (1980) and Knowles (1986), boredom shows itself as the blocking of the process of becoming. To become is to proceed toward the future. In becoming, people have goals and then attempt to actualize those possible goals. When becoming is blocked, Straus and Knowles feel that people are unable to foresee meaningful futures; people no longer experience themselves as a process. Instead, they experience themselves more as a determined object. Boredom reveals the present as cut off from the future. Clive (1967) suggests that contemporary society provides the conditions that make boredom a prevalent and habitual emotional experience. Our times are plagued by profound disappointments and setbacks, which are the result of an irrepressible drive for change.
The disappointments arise because change is supposed to be change for the better, and Clive feels that many societal changes simply create new problems and disappointments. This review reveals that there are many differing perspectives on the causes and meanings of habitual boredom. However, most of the treatment of habitual boredom is speculative or theoretical. The theories of habitual boredom are drawn from the observations of clinicians or from the conclusions of research done on situational boredom. While Farmer and Sundberg developed a scale to determine who is prone to boredom, there is no research as to what conditions may lead to this temperament and the processes associated with it. In order to address this shortcoming, the present study goes to the source: the people who consider themselves bored with their lives. This study explores the process that leads to that boredom, the psychological experience of life boredom, and the possibility of the resolution of that experience.

METHOD

PARTICIPANTS

Six participants were recruited by advertising in wide-reaching newspapers and by placing flyers in a wide range of venues. They were chosen based on their interest in the research, their diversity with respect to age and gender, and their ability to articulate their experience. The participants (P's) can be characterized as follows:

P1 was a 28-year-old Caucasian female. She initially became bored working on her dissertation. Her advisors needed help on their research work and offered funding for working with them. She would not have chosen their topic and soon could not bring herself to work on it. She avoided her advisors for long periods of time because she was ashamed that she had nothing to show them.

P2 was a 67-year-old Jewish mother of six. Her goal was to find someone who loved her for who she is; yet she has remained married to a man whom she finds emotionally abusive and cold.
Motherhood left her with loads of work that she found unsatisfying, but when the children left she suffered a complete loss of identity. She had attempted suicide because she found her life empty and could find no relief from her boredom.

P3 was a 33-year-old Caucasian male who became bored after being hospitalized for a mental illness. He felt that subsequent hospital stays, changes in high schools, and lost friendships had led him to life boredom and prevented him from getting a college degree.

P4 was a 40-year-old African-American female. She noticed she was bored with her life because she had changed jobs every few years. She liked a challenge, and so she made frequent, but superficial, changes in her life. She was currently working toward a master's degree; yet she felt she might become bored with that too.

P5 was a 49-year-old Caucasian male who originally wanted to be an astronomer. However, he had problems with a math course in high school and changed his focus to evangelical religion. In college, he lost his faith in God and his faith in himself. Since that time, he has been bored with his life. He has jumped from job to job, never finishing what he has started.

P6 was a 16-year-old Caucasian male who was bored with his school, bored with his town, but most of all, bored with being treated as a kid. His goal is to be treated like an adult, although he has done little to deserve that treatment. He felt his experience of boredom made him underachieve as a student and had brought him to the borderline of delinquency.

PROCEDURES

The specific method I employed was an empirical-phenomenological method developed from the phenomenological psychological research tradition (Giorgi, 1975). In this method, researchers perform a qualitative narrative analysis of people's everyday accounts of the phenomenon being studied. Through this method, researchers seek to interpret psychological meanings from the everyday descriptions.
These psychological meanings are inherent within the narrative, but only implicitly; they emerge for the researcher in the course of interpretive analysis. In this case the participants were asked to write a description of the experience of feeling bored with their lives. In particular, the researcher requested that they provide a written protocol in response to the following question: Please describe, in as much detail as possible, your experience of being bored with your life. Please include how this experience arose, the experience itself, and how this experience came to a resolution, if it did. Give enough detail so that someone who has never experienced an event of this kind would know just what it was like for you. After reading the protocols several times, interviews were conducted to clarify any questions or ambiguities found in the written descriptions. The integration of the written and transcribed interview provided the material that formed the edited synthesis, the basis for each individual's case study.

THE EDITED SYNTHESIS

The edited synthesis is the protocol combined with the interview, minus any information that would identify the participant. An example of an edited synthesis for one of the participants is provided in Appendix I. The participant received the following instructions for the interview: the researcher (R) told the participant (P) that the researcher would read the participant's own written protocol to him or her. When the researcher paused in his reading of the protocol, this would indicate that he would like to know more about the preceding events. Any pause would mean "Could you tell me more about that?" This approach was used in order to avoid leading questions. Pauses and the participant's responses were marked off from the original written protocol by brackets. Direct questions asked by the researcher also appeared in the bracketed text.
MEANING UNITS

Each edited synthesis was divided into smaller, numbered units, each unit highlighting shifts in the participant's meaning as the researcher perceived these shifts. Giorgi (1985) writes, "[T]he meaning units that emerge as a consequence of the analysis are spontaneously perceived discriminations within the subject's description arrived at when the researcher assumes a psychological attitude toward the concrete description, and ... becomes aware of a change of meaning of the situation for the subject that appears to be psychologically sensitive" (p. 11). Through this methodological move, the researcher became more attuned to the meanings within the participant's account. Giorgi writes, "Thus, the method allows the lived sense of these terms to operate spontaneously first and later tries to assess more precisely the meaning of the key terms by analyzing the attitudes and set actually adopted" (p. 12). Meaning units were delineated in preparation for the next step of elucidating the central themes of these accounts.

CENTRAL THEMES

Each meaning unit was re-examined, but this time in terms of its psychological significance and relevance to the overall meaning of being bored with one's life. The relevance of material was judged by asking, "How am I understanding this phenomenon such that this statement reveals it?" Wertz (1983) writes of the researcher at this stage, "He now takes a step back and wonders what this particular way of living the situation is all about. Breaking his original fusion with the subject, he readies himself to reflect, to think interestedly about where his subject is, how he got there, what it means to be there, etc." (p. 205). Giorgi adds, "After the natural [meaning] units have been delineated, one tries to state as simply as possible the theme that dominates the natural unit within the same attitude that defined the units" (1975, p. 87).
This transformation of the meaning units into psychologically significant themes (also called constituents) prepared the data for a new synthesis.

INDIVIDUAL SITUATED NARRATIVE

The integration of the central themes and their mutual implication into a descriptive statement formed the individual situated narrative. The aim was to capture the meanings and significance of each participant's experience by moving from the everyday meaning to the psychological meaning of the phenomenon (Wertz, 1983, p. 228). This individual situated narrative remained close to the participant's specific meanings, while seeking a comprehensive psychological view of that individual's experience. The limitation of the individual situated narrative is that it is only one example of a phenomenon. An individual situated narrative was completed for each participant, providing the base from which a general structure could be derived.

GENERAL PSYCHOLOGICAL STRUCTURE

Once all individual situated narratives were complete, the researcher compared each one with the others. At this point, the researcher reflected on those constituents and structures that were common to all individual situated narratives, as well as any ways in which each structure could be considered a variation of the others. The results of these comparisons produced the general psychological structure. The general structure established the psychological dynamics of the phenomenon that held true invariably across the specific experiences studied. Wertz (1983) writes that the general psychological structure "involves understanding diverse individual cases as individual instances of something more general and articulating this generality of which they are particular instances" (p. 228).
Later, Wertz describes the general psychological structure as a formulation of "the essential, that is, both the necessary and sufficient conditions, constituents, and structural relations which constitute the phenomenon in general, that is, in all instances of the phenomenon under consideration" (p. 235). The general structure represents the major finding of this study.

FINDINGS

In the following paragraphs I will summarize the general structure of life boredom. The general structure established the psychological dynamics of life boredom that held true invariably across the specific experiences studied. I have also included examples from the edited syntheses of various participants, which help support the general themes being discussed.

THE GENERAL STRUCTURE OF BEING BORED WITH ONE'S LIFE

The participants who became bored with their lives originally found themselves to be active and interested in their lives. They had goals and life-projects that they were striving toward. They anticipated that their goals were reachable, and yet often underestimated the requirements of those goals. However, in the initial stages of their projects, the participants found themselves stuck by an event that they constituted as being beyond their control.

P3: "Basically I was good up to the ninth grade in school. I was skiing and doing a lot of activities and things. I had a lot of friends and people who wanted to do things with me. My grades were pretty good, so I planned to go to college... There was always something to do before the time I got sick."

P5: "I was originally interested in scientific work, specifically, becoming an astronomer way back in junior high school...I don't know if it was the material itself or the teacher I had trouble with but, in any case, I stumbled pretty badly in the tenth grade and became a little disillusioned with it."

The participants relinquished their original goals, apparently without much of a fight.
The participants chose paths that avoided making stands for their own desires. They compromised those goals for other, less desired projects, and justified these compromises as being paths of least resistance or eventually claimed these compromises as forced upon them. In any case, the modified goals were not something they would have originally chosen.

P1: P had mixed emotions about her dissertation topic. On one hand, the topic of P's dissertation was not something that P would have chosen. She was interested in another area of study. On the other hand, P felt quite honored that her advisors had asked her to work on their project.

P2: P suggested that her pregnancy was what kept her from pursuing more education; however, her attempts to return to school were short lived, and her goal to be a biologist or doctor seemed more of a fantasy than an actual possibility.

P5: "So at that point I changed my focus to religion. I would not have originally gone into religion, but there were certain influences in high school that led to that."

The participants became emotionally ambivalent after they had compromised their life-projects. They experienced divided feelings directed at themselves and others, and yet they only seemed to recognize the feelings that were directed towards others. They were aware of anger towards others who they felt had "forced" them to compromise their original goals for their modified goals. The participants blamed these others and considered them responsible for their situations. At the same time, the participants did not seem to be aware of self-directed emotions of anger and responsibility for giving up on their original goals so easily. These self-directed feelings would be ignored or kept at a prereflective level. However, the participants' attempts to deny or hide these feelings would lead to their intensification; some things grow in the dark.
P5: When selecting a college, P's pastor and parents convinced him to go to a mainstream religious college rather than a hard-line evangelical college. P came to feel anger toward these people for this action; yet, he was less aware of anger toward himself for not taking a stand.

P6: P blames the town for being "pretty lame. Our whole town just sucks and the people around it just suck." On the other hand, he resists taking responsibility for missing opportunities by sleeping in.

P2: P is still married to a man she describes as "still a beast, he's still cruel, and he's still heartless. Every time he saw something was making me happy he had to cut it off... I'm still married to him and see him on weekends."

The participants began working on their modified projects, and again those projects became stymied in the initial stages. Since the participants were torn in two directions, they were unable to give their new projects their full effort. When the projects became impeded, they did not experience the resolve to continue working on them. They found themselves frustrated with these modified projects. They tried some different approaches in attempts to make progress on these projects, but soon they could no longer see any new perspectives. They found themselves repeating the same approaches that had initially failed, which led them back to the same dead-ends.

P1: "If I went to school I would sort of pick up this same book and open it back up to the same page and look at it and still not get any new ideas. I went on like that for a while."

P3: P tried a few different times to go to college but frequently took on too much. "One time I was going to school part time, working two part time jobs, and seeing a doctor once a week."

P5: "So once again it looks like I've kind of hit a brick wall with regard to vocation. It might be a rubber wall which keeps bouncing me back to where I started."

The participants recognized themselves as being bored with their modified projects.
They could no longer see themselves making progress toward their futures. They felt overwhelmed and helpless in the face of their projects. Yet, they did not gain the assistance of others. They would fantasize about solutions to their boredom; they were passively hopeful. They hoped to be saved by someone else's actions rather than their own. The participants' stances toward their projects, and their lives, would become passive and avoidant.

P2: "So many times my life was like a daydream. I would sit around and think about how nice it would be if I could get a job where I could travel and be the singer. I could dance. I was a real good dancer. Maybe I could get a job with Lawrence Welk. So I fantasized that somebody might discover me someday. I'd be somebody someday. It never happened though I'm still waiting."

P5: "I felt kind of overwhelmed by the choices to the point where you have to take a step back and shut it out for a while. At that point, life temporarily loses meaning for me."

P1: "So I avoided my advisors for six months. I wouldn't go to school a lot of the time. If I would go to school, I would hide out in my office so I wouldn't have to see my advisors."

They began to find themselves bored with more aspects of their lives. They found themselves bored with activities that they had previously enjoyed. The participants recognized that they were acting in ways that were uncharacteristic of themselves; they would not have acted in these ways before they had become bored. Although they had some ideas about different possible projects that might change their boredom, they no longer felt confident enough to try new activities. They anticipated that potential projects would become boring, and so they decided not to bother with them.

P4: "Presently, I am bored with my whole life. None of the old things I used to do bring enjoyment to me anymore. Nothing. [Boredom] covers my social life. It covers school. It covers work. It covers going to the grocery store . . . It covers a lot of things. My hair."

P5: "I might think that I would become bored with whatever activity I'm looking at. I project boredom. I'm looking ahead and saying 'Oh boy, it looks like it's going to be boring after all.' So I don't even start it."

P3: "Too many ideas going through my mind . . . But I can't stick to things very long. Then you end up accomplishing nothing, and you're bored because you have all these ideas and they just don't get done."

The participants became aware of the self-directed feelings of anger, doubt, and shame that had been hidden by ambivalence. These feelings had intensified and could no longer be denied. Those self-directed feelings had largely dissolved any previous feelings of confidence, desire, and will that the participants had prior to their boredom. However, when the participants became aware of these feelings they did not attempt corrective actions; rather, they continued a pattern of avoidance.

P1: P recognizes that "There's a lot of self contempt." She acknowledges that she wasted time avoiding her advisor and working on her project.

P5: P recognizes that boredom has led to a loss of "good self esteem . . . and without it I become fragmented, un-focused, and that results in a lack of will power to keep going in any given project."

The participants found themselves questioning their own identities. These questions centered on becoming self-aware of their personal estrangement. The participants could no longer understand themselves as being the active, interested people they had been, nor were they actively working toward future goals. They wondered who they were. Although they were passive and avoidant, they continued to become someone--someone they did not like.

P1: "I thought I wasn't Ph.D. material or maybe I wasn't cut out to do this. I had all these doubts about whether I'd be able to do it."

P2: "Like you hear the young people today say 'I'm trying to find myself. I'm looking for myself.' I never found me.
This person."

P4: "But I've become more of a recluse over the past couple of years...I used to be more of a people-person...I always have a scowl on my face...It's there without me even thinking about it. I don't get any enjoyment anymore. That bothers me."

The participants' questions of identity led to feelings of emptiness. They had no answers to their identity questions. While their pre-bored sense of identity had burned up, nothing with a sense of vitality had developed to replace it. This left a vacuum. Nothingness. They no longer knew what they wanted or what to do with themselves. They were no longer throwing forward possibilities.

P1: "In the past, I have always been energetic in every area of my life. But now, this problem I am having with my thesis work has had the reverse effect on my life. I felt like I couldn't do anything, whereas before I was sort of doing everything."

P2: "But as I'm saying, for me as the one person myself, there was nothing. There was really nothing. If you want to take it as a female reward, raising your children. Fine. I had all that."

P3: "I have memories in my head about how I was before it all broke down. But I wish I could be like I was before I broke down. I can only remember it, but I can't feel how I was before I got sick."

P5: "I lost my focus, and I lost the vision for my life and that former excitement that I experienced."

The participants had a sense of futility since they felt their actions would not bring about the desired results. They were unable to hold the future open; they were not able to see any direction in their lives. They lost sight of a hopeful vision for their lives. They lost faith in themselves, in terms of actively, willfully, and authentically influencing their own lives. Their limitations became more present to them than their possibilities.

P2: "It's like you take a train, and you have a long ride, and you come to the end, and where do you go? There's nothing for you, nobody for you.
It's like you're in a big empty room. What do you do? Lay down and die."

P5: "When I lose my vision, I lose any idea or projection of what I want to do in the future. I don't have any distinct plans, or even an idea of what I want to do and so I wanted to immerse myself more in the present rather than projecting myself in the future, hoping that something would work out in the near future."

They could no longer envision a future that was different from the present. They began not to care. When participants were apathetic, they tended to act in ways that they knew were not in their best interest, in destructive ways.

P2: "So after so many years of loneliness and boredom, I thought, 'What now? It's over with . . . So just end it.'"

P5: "Being in the disillusioned state I didn't have the will power to be disciplined. I knew what I was getting into, but I just didn't care."

P4: "I've found myself in precarious situations, with people I would otherwise never be with if I wasn't trying to avoid being bored. Just to have something to do even if it's negative. I mean sometimes I just go out drinking if I'm bored."

P6: When feeling bored and apathetic, P often resorted to teen mischief, which included drinking, buying and chewing tobacco, and minor vandalism.

Not all participants resolved their experience of being bored. Those participants who did typically returned to having an active sense of agency because someone else forced them to do so. Concerns such as lost jobs or lost marriages placed new demands on the participants and made them create new futures. This meant returning to active participation in their lives.

P1: "I will have to make a decision about how to handle this, and soon, for the university is not likely to let me sit here, staring at the walls, too much longer."

Participants began creating new goals by imagining new possibilities for their lives.
Unlike their earlier fantastic solutions to boredom, their new solutions focused on possibilities that were within their active realm. By creating new goals, new futures appeared. The participants who made active choices at that point and experienced some initial success toward those new goals did not currently see themselves as being bored.

P1: "I think it was different because I picked it; actually against the advice of other graduate students, who told me the same thing would happen again . . . But that didn't happen and I really got into this topic."

P2: "But now I'm trying to find myself. Now I'm taking piano lessons. I've joined senior citizens, and I have a few lady friends."

DISCUSSION

This study found that the most important aspect in the experience of life boredom was the development of emotional ambivalence. Ambivalent feelings developed once the participants compromised their personal goals for less desirable projects. Whereas the participants' pre-bored selves appeared to be unified toward their goals, they now were conflicted and divided. They still desired their original goals; however, in the face of the obstacles to those goals, the participants chose to change or modify them. On a practical or rational level, their decisions to modify their goals made sense to them. Yet, their decisions undermined the desires for their original goals. To some degree, the participants understood that they were turning away from their ownmost selves, but felt compelled to work on these modified projects, which they soon found their hearts were not in. This turning away from their own desires would become a repeated pattern of passivity and avoidance. The participants were emotionally torn in two directions, and these two similar but opposing emotional directions would develop simultaneously on two different levels of awareness.
Freud (1989) acknowledged a similar structure: "Contrary thoughts are always closely connected with each other and are often paired off in such a way that the one thought is excessively conscious while its counterpart is repressed and unconscious" (p. 200). On one hand, the participants would consciously feel anger and direct blame towards the world and others. Ironically, the participants also expected the solutions to their boredom to be provided by others. By giving the world and others so much control over their lives, the participants found themselves waiting for change instead of working towards it. On the other hand, the participants pre-consciously felt anger and directed blame toward themselves. Their attempts to deny or ignore their own self-directed feelings of anger, shame, and doubt led to an intensification of those feelings. Ignored feelings do not simply disappear. Rather, they simmer, boil, and burn. In this case, these feelings burned up the participants' former sense of confidence, will, and active living. When their strengths atrophied and nothing new replaced them, the participants experienced feelings of emptiness. This study is in agreement with the findings of cognitive researchers Larson and Richards (1991), who reported that the same youths experienced high rates of boredom inside and outside school. In this study, the participants were initially bored with their modified projects, but boredom spread to more aspects of their lives, including activities that they had previously enjoyed. Instead of being open to the way the world called forth particular emotional responses, the participants attuned themselves to a very limited set of emotional possibilities. Their experience of being bored changed from a state to a trait. Returning to the psychodynamic perspective, the participants in my study appeared to be very similar to the resigned personality that Horney (1950) described in Neurosis and Human Growth.
Both Horney's resigned type and the participants experienced ambivalent conflicts and withdrew from active living. In my study, I found no evidence that life boredom was the result of instinctual impulses with repressed objects (Fenichel, 1951). However, "repressed" objects may pertain more to situational boredom than to life boredom. Although Wangh (1979) felt that only depressed persons would experience feelings of emptiness, I found that feelings of emptiness were also part of life boredom. Some preliminary distinctions between boredom and depression are in order. There are some significant differences between the experience of life boredom and the cognitive triad of depression (Beck, 1967). The first aspect of the cognitive triad of depression is a negative view of one's self. According to Beck, depressed people totalize themselves as being defective, deprived, or inadequate. They understand their negative experiences as a result of their own personal defects. Because of these defects, depressed people underestimate and criticize themselves, believing that they are undesirable, worthless, and lack the psychological necessities to be happy and content (Beck et al., 1979, p. 11). In this study I found that the participants had a much different experience of themselves. They began their original life projects with confidence and expectations that their goals were well within their reach. In fact, I believe that the participants were overconfident and over-estimated their own abilities. They expected to achieve their original goals easily. For instance, regarding the beginning work on her dissertation question, P1 said, "So I hoped that I would start in whatever direction, and I guessed that when I started doing the research it would be obvious to me what to do." Apparently, initial troubles with life-projects did not immediately lead the participants to doubt themselves, nor did they redouble their efforts.
They tended to see the projects, instead of themselves, as less desirable or unworthy. P5 said, "I don't know if it was the material itself or the teacher I had trouble with but in any case, I stumbled pretty badly in the tenth grade and became a little disillusioned with [my original goal]." Most of the participants could picture themselves as being happy, but this happiness was usually dependent on the actions of others rather than actions under their own control. The second component of the cognitive triad consists of depressed people's tendency to interpret their ongoing experience negatively (Beck et al., 1979, p. 11). The world is understood as making tremendous demands or presenting insurmountable obstacles that keep them from reaching their goals. Depressed people misinterpret their interactions with the world and others in a way that leaves them feeling defeated or deprived (p. 11). The participants who experienced life boredom had similarities to this second component as well as some subtle differences. The participants also ran into obstacles on the path to their projects. However, they felt that they had the resources to overcome those obstacles. Yet, when it came to using those resources, in the form of finding fresh perspectives on their problems, the participants were unable to do so. They tried the same approach over and over. Instead of a series of tremendous demands, the bored participants tended to believe there was one small hurdle that they had tripped over. Once they got past their obstacles, their lives would be back on track. They underestimated the requirements of their goals. Moreover, instead of seeking help when they became stuck, the participants often avoided getting help from others. This was not because they felt defeated; rather, the participants felt too ashamed to ask for help. They thought they should be able to accomplish the task on their own.
In general, I tend to think of shame as integral to life boredom, whereas guilt seems to be more prominent in depression. The third component of the cognitive triad is that depressed people have a negative view of the future. They anticipate a never-ending series of difficulties and perpetual suffering. Depressed people exhibit expectations of hardship, frustration, and deprivation. These expectations lead depressed people not to take up projects because they anticipate that these projects will end in failure (Beck et al., 1979, p. 11). The bored participants also tended to have a negative view of the future, even though initially their feelings were not as hopeless or pervasive as in depression. I think that the participants felt they deserved and were worthy of an interesting and active life. Although they did anticipate boredom toward their own possibilities, they could fantasize or passively hope for others to save them from their boredom. Fantasizing falls in between the hopelessness of depression and imagining. Imagining, or active hope, occurs when a person throws forward projects that are within the realm of one's real possibilities. Several existential authors (e.g., Frankl, Boss, Clive) feel that contemporary society appears to have special factors which make people particularly vulnerable to the agonies of boredom. In general, I found no evidence to suggest that our culture has a large influence in leading a person to boredom. The participants tended to blame others for their situation, and under the broad category of others to blame, cultural structures were mentioned. Yet, cultural structures only appeared to be another target, in a series, toward which the participants directed their anger. However, the method of protocol analysis utilized in this study may not be the best method for uncovering cultural influences on the experience of boredom.
My findings are in agreement with the perspectives of Knowles (1986) and Straus (1980) in that the process of becoming was blocked by the participants' experiences of boredom. In addition, my study found that although the participants' active sense of becoming was blocked, they passively continued to become--only they became people whom they did not like. In time, the participants found that these passive, avoidant aspects became a distinct, then dominant, aspect of their identity. Therefore, the bored participants were alienated from the future and estranged from the past. The existential position recognizes the need to create personal meaning--which becomes the background for choice--as a very important feature of a person's experience of boredom. In fact, I originally thought boredom was a possible precursor to authenticity (Bargdill, 1999), providing an opportunity for people to take up their ownmost possibilities. However, when faced with choosing the meaning for their lives, people experientially appear to make fight, flight, or freeze responses. Boredom is equivalent to the freeze response. In this response, people ignore the possibility of taking creative steps toward making their lives meaningful. Instead, they wait for others--for outside assistance toward a very personal insight. Like a deer in the headlights, these people freeze. They hope that the intrusive danger, meaninglessness, will disappear and that they will be able to return to their daily lives. They retain a passive hope; not a hope that leads to action, rather a hope that someone will help them. They are between despair and joy. They are in purgatory, waiting and dependent on other people's prayers. As if they have seen Medusa, they stagnate, solidify. They are no longer in motion. They are aware, but paralyzed. They are bored.

APPENDIX I: EDITED SYNTHESIS FOR PARTICIPANT ONE, "MARGARET THE MATH GRAD"

Until a few years ago, I had never really been bored with any aspect of my life.
[Well, I don't think I have ever been in a situation where I had to do just one thing and nothing else. Through high school, through college, I was always taking a bunch of different classes. Then, all of a sudden I had to do one thing, and I just got really bored with it. Up until then, I had been doing lots of other different things, so maybe it was that. I had some other things to go to. If I got tired of the one subject I was studying I could go to something else.] School interested me a great deal, and I was able to focus intently upon all of my studies, and I achieved a measure of satisfaction from doing so. [I think it was not only being satisfied with the different things I was doing and being satisfied with the things I was learning in classes, but there somebody was teaching me, and I always had immediate feedback from homework or exam grades or something like that. But when I started doing my research, all of a sudden nobody was there teaching me anymore. So I was never sure how much progress I was making. And I was not being evaluated right away. So there wasn't this immediate sense of satisfaction from research like there was from class, I guess.] I finished all of my required coursework for my Ph.D. approximately two years ago, and then embarked upon a research project for my thesis work. I fell into this project rather than choosing it, [I first started working with my advisors after they asked me to be a research assistant, instead of being supported by teaching. So of course I said, "Yes." The project I started working on is the project which I ended up doing my dissertation on--and that's sort of how I fell into it. They had money to support this work, and so, I started to work on this project for them. Then, that was fine. But then when it came time to do my work, I had been working on this project for a year.
Instead of trying to put something totally new together, they said, "Since you know so much about this project, why don't you just try to extend this research." It made sense to me at the time, and I suppose it still does make sense to me. Sort of the direction they would lead me in, but that's why I sort of fell into it. It wasn't like I had found out about it independently and said, "That really interests me." It sort of came about due to this research position.] since at the time I was being supported by research grants through my advisors, and this happened to be the topic on which they needed work done. At first, I was excited about the project. [Well, first of all I was excited that they had come to me and asked me to do research with them. I felt like that was a big honor. Most of the time you have to go seek out somebody to do research with. So I was excited about that. I was also excited about doing research; that's what I went to grad school for. It is an interesting topic. When it was first presented, I thought: "Wow, this is really great." It's something that's really applicable. They were working with a company around the city, so we could go and see the results. And I was going to be writing computer code, which I was really good at. So I thought it would be a good project.] Here was some original work that they had written papers on and developed algorithms for, and I was writing the computer programs that were to test out the practical validity of their theories. It seemed to me that this was what I had spent four years of graduate study preparing to do--[I spent this time in graduate school taking all these classes in applied math, and I was sort of focused in on applied math. And I took all the classes with numerical algorithms, and by that time, I had probably four or five classes in numerical methods.
So here is this project, an applied project, and I'm going to be testing out numerical algorithms, and that's why I took all these classes: to be able to do something like this.] and in fact, I did it quite well. [They gave me this project--and actually compared to how long it has taken me to do my dissertation--they gave me this project to do: I had to write up this computer program and test out all these different cases, and so forth, and I did it over the summer. So in three months, it was done and it worked. We did some investigation and it just seemed so amazing, it was done so fast. R: Sounds like you were quite happy with it? P: I think they were happy with what I'd done, and I was happy with what I'd done, too.] I finished my program, the numerical tests were done, and I came out of it with my name on a jointly-published paper. All a very good start for a fledgling Ph.D. student in mathematics. [Well, I think, "Yes, a very good start," because I think at that point I had passed all my preliminary exams, I'd passed all my comprehensive exams, I was just about done with course-work. I started working on this research project. I hadn't done any original research but had done these numerical tests, and I had my name on a paper--a published paper in what was considered to be a very good refereed journal. One of the things you need in order to get jobs when you finish your Ph.D. is to have some record. It helps to have a record of publication.] Next came the time for us to decide what topic I should pick for my original thesis work. This particular project is not what I would have chosen, [I kind of think that is a big part of it, of the problem. The advisors that I ended up working with, they are the professors I would have chosen to work with. In fact, when I went to school I knew ahead of time that I would probably want to work with these people. But they work in a different area of applied math, which is what I thought I wanted to do research in.
In fact, they had a bunch of students at the same time as me, and everyone else was working on their project except for me. I was working on this other topic. If I had never started working with them on their research project, I would probably have asked them if I could work with them on this other area of math, not the area I ended up in.] but since I already knew all of the background information, and since my advisors felt that there was further work to be done in the same area, it was mutually agreed upon that I would try to further the research that they had already done on this topic. [We talked about what I was going to do my research on, and I had told them that I was interested in this other area. We talked for a while about what my options were, and the truth is that they told me that if I really wanted to work in this other area, it would be fine with them. But that I had already, in their opinion, spent a year and a half doing all the background work on this area I was working on for them. So I would sort of have to start over if I did work in a different area. What they said made a lot of sense to me at that time. I said, "Yeah, you're probably right. I'll see where I can go with this." We discussed it, and all agreed that this was going to be the fastest way for me to get finished.] Again, I originally was excited and hoped that I would soon be writing programs to test the new and improved theories that I devised on my own. [In the beginning, I had a lot of enthusiasm. I had been successful in what I had done for them. So now it's time for me to work on my own thing, and I was thinking two years and I'll be done. R: Was that part of the hope part? P: Oh yeah. So I hoped that I would start in whatever direction, and I guess that when I started doing research, it would be obvious to me what to do. So I was kind of excited about it, and I thought I'm just going to sit down and start doing it. In a couple years, I'll graduate.
I was excited and I was hoping that I would be done pretty soon.] These new breakthroughs, however, were not so easily found. [I thought I was going to sit there and it was all going to come to me--how to go about solving this problem. Actually it wasn't even a matter of trying to solve the problem, it was a matter of trying to pose the problem. I don't know. Ideas were not coming to me, and I would sit and read things over and over again and still not have any ideas about where to go.] I began to find myself despairing of ever doing anything original on my own, [After not being able to come up with anything, I just was thinking I have to create something out of thin air. I have no idea of how to do that, and maybe I can't do that. For a long time I thought that way, and I thought that whatever I came up with had to be something completely new, that nobody ever thought of before. Which actually isn't the case, but that's what I thought when I was first trying to do this--I just had no idea. R: Can you tell me more about this feeling of despairing? P: I started having this feeling of despair, of never being able to do it. I thought I wasn't Ph.D. material or maybe I wasn't cut out to do this. I had all these doubts about whether I'd be able to do it. And I think that the more I sort of sat there and realized that there were no ideas coming to me, the more I sort of got depressed about it. I thought, "It's never going to happen. I'm never going to come up with anything that's going to lead to a dissertation."] because I had no idea where to start. [That was the whole problem: I didn't know how to start coming up with ideas, and I didn't know how to start approaching the problem, how to start posing a problem. It was like I was faced with this huge task, and I felt like I had to do it all in a day. I thought I had to come up with the one big idea in a day, and I didn't know how to do that.]
Eventually, my advisors wrote a new grant proposal on the topic, in which they outlined several directions that could be explored by further research. Now, I had at least some idea of which way to head, [They wrote this grant proposal. They took the problem, and what they did for me by doing that is they posed several questions, like seven or eight different questions that they thought could lead to further research. So then we sat down again after they wrote this, and we talked about some of the questions. At least I had sort of a question and something to direct me. So what I did, was after we talked about it, I picked one of the questions, and this is what I'm going to try to work on.] and I began with somewhat renewed enthusiasm to look for books and papers on some of the topics. [Well, I felt like, "OK. Now I have a question posed. So now I know where to look for answers." I had renewed enthusiasm because I felt like I knew kind of where to start, so that helps. So I went off to the library and started looking up all this stuff and doing background research. R: So just having a direction helped you? P: Yeah, because I wasn't looking for my own question anymore. I had this question and it was like, OK, now I have a question and can go out and try to figure out how to answer it. At least I knew how to start.] Unfortunately, these bursts of enthusiasm never seemed to last for long. [(Laughing) When I first looked at this proposal they wrote, I picked out a question and then it was sort of going to lead me into an area I had never studied. So with the renewed enthusiasm I went out and got books on this area, and I started trying to teach myself enough about it that I could figure out how to apply it to my problem. That lasted for a while; I was reading through this book making sure I understood what was going on. Then when it came to trying to apply it to the problem we had, I got stuck every time.
At that point, I started despairing again, thinking, "Well, I can go out and get this book and learn about this topic, but I have no idea where to go from there. I'm back in the same situation I was in before."] These days I find myself sitting at my desk, on those occasions when I force myself to go to school, staring at the book from which I hope to find ideas. [Well, this book I'm referring to is on integral equations. I was teaching myself from it, and at that point, I just did not know where to go. I wasn't being supported by teaching--I had a fellowship for a while. So I avoided my advisors for six months. I wouldn't go to school a lot of the time. If I would go to school, I would hide out in my office so I wouldn't have to see my advisors. If I went to school, I would sort of pick up this same book and open it back up to the same page and look at it and still not get any new ideas. I went on like that for a while. R: What was it about this book? P: It was this book on integral equations, and I was trying to take and figure out how to apply it to this problem. I actually worked through a lot of the book.] I first checked out this book on March 1, 1996. [R: What was it about having this book for a year? P: What it is, is that I avoided it a lot of the time. I would look at this book once or twice in two months. I'd renew it, do the same thing. I mean I really wasted a whole year by trying to avoid the whole thing.] It is now almost one year later, and I still have not finished working on the sections which I hope to apply to my research. Every time I pick up the book, I leaf to the same section, begin to reread the same chapter that I have read maybe fifty or sixty times, try to rewrite the theorems in the context of my particular problem, come to the same dead end, and quit. [The same dead end was that I would try to rewrite the theorems in the context of my problem, and I would sort of always get started.
But I would work on the same thing and it never led anywhere. But there was a problem: I got stuck at the same point and couldn't figure out how to get around it. So I would try the same approach, or a slightly different approach, time after time, thinking that if I write this down one more time, it's going to come to me how I can get around this problem, and I never could. In fact, I never did. I never got around that problem.] I read it, but cannot seem to get excited about it. [Every time I would read the book, it was never anything that interested me a whole lot. It never interested me! When I tried to teach it to myself from the book, it never interested me. I never got excited. I don't know if excited is the right word. I never really got interested in the math that was going on.] I am positive that I could do something with this, [At the time, I was sure that the problem they had posed could be solved and that it could be done using this method and that there should be some way to do it. I don't know why I was so positive about that, but for some reason I thought there has to be a way to do it.] find some breakthrough, but I cannot seem to get past this barrier in my mind [I don't know when the barrier appeared, but at some point I had looked at the problem from the same perspective for so long that I think there was just kind of this wall. I felt like I just could never get past it. I would be like staring--I guess I felt like there was something preventing me from seeing the light. Like when I would read this stuff, I thought I was understanding what I was reading and thought there would be some way it would connect with my problem. I thought that I should see it and I never saw it. I felt like there was something sort of preventing me from seeing the solution.] that is closing off any kind of intellectual curiosity about this topic. [After a while I didn't care. I wasn't interested in finding a solution.
In the beginning, I thought, "Well, it's kind of interesting, and it would be neat if we solved it." After a while I didn't care about it. I wasn't curious if there was an answer. I was just sick of looking at it.] I could do almost any other boring, mundane activity and find more interest in it than looking at these books and papers for one more time. [Whenever I forced myself to go to school, instead of working, I would sit there and get on the internet and read inane chat groups that were going on. I read so many newsgroups: I knew what was going on in the Simpsons group and would talk to these people for an hour, sometimes six hours a day. How ridiculous is that? R: How do you explain that? P: I think I originally did it to avoid working, and I would go to school out of guilt. Also, so that my advisors would know that I was there. Then I would sit in my office and do no work. Sometimes I would go in and try to do some work, but most of the time I would try to stare at the stuff for a half an hour, then decide to take a break and log on to the computer for a second. Then four hours later, I realized I've been here for four hours, and it was time to go home and make dinner. I would do that all the time. R: Could you say more about being able to do any other activity? P: It was the worst. I had no interest in it at all. I mean I would do things like balance my checkbook, clean my office. Things I would never do otherwise, just because I couldn't stare at this stuff. I just couldn't stare at this stuff anymore. So I would look for any menial task, or something to occupy myself as sort of a diversion from my research.] In fact, I very often will come into school and sit staring at the computer screen for hours, browsing the Internet, and reading the most incredibly stupid newsgroups rather than attempt to do any work. R: There sounds like some self-contempt there? P: Oh, absolutely! There's a lot of self-contempt. I thought it was ridiculous that I did that.
I still think I sort of wasted time; four to eight hours a day reading these stupid little comments on the computer screen. There's no meaning in that. I thought it was stupid. My time would have been much better spent reading a book. I was angry at myself because I knew I wasn't getting anywhere. I always felt that every time I went to my advisors I had to have accomplished something. I had to have something to show them when I go see them. At this point, I sort of kept spinning my wheels and never had anything to show them. So I never went to see them. It turned into weeks, then months, then six months between meetings with my advisors. That was the longest period of time that it ever turned into. I was angry at myself. I thought I was incompetent and I was depressed. I was convinced that I was not going to be able to do this.] I even began writing this protocol after leafing through my research notes for five minutes. This boredom is not something that I have ever experienced in my life before. [I think it was a different type of boredom because it was so prolonged. Certainly, I have been bored in my life. I've met some people and had some conversations that were boring, or gone to a party that was boring, but that might have lasted a couple of hours. Then I got to leave and do something else. This was different. It was like every single day, I'd wake up and know that I was either going to avoid doing this, or I was going to have to try to do this. It was just like never-ending boredom. I didn't see any end coming, and that, I never experienced before. I never had a situation where I felt I couldn't get out of it. I couldn't see it ending.] In the past, I have always been energetic in every area of my life. [I've always been a person who sort of takes on a million things and usually does a million things pretty well. I have always had a lot of energy. I've always excelled at school. I've always excelled at everything I did.
I always was very interested in what I was doing and had a lot of energy for it.] In fact, people used to tell me that I took on too many things. [My family always tried to tell me that I try to do too many things. Especially my grandma used to say, "You do too much, you do too much." Even professors would tell me that, because I always had a job and had a full load of classes. Besides that, I would be in a choir and going out with friends--always doing a lot of different things.] But that never bothered me; [It never bothered me that I had so many things going on. I never felt like I was doing too many things. I never even remember feeling overwhelmed about all I was doing. It seemed to me then and seems to me now that I have a million things going on and I seem to get them all done. I actually get more done than if I have nothing going on.] the more I took on, the more I accomplished. But now, this problem I am having with my thesis work has had the reverse effect on my life. [I felt like I couldn't do anything, whereas before I was sort of doing everything. Now all of a sudden there was one thing that I could not get past. Like there was this one obstacle that I couldn't overcome. I always felt guilty doing anything else because I had one really important project to get done, and it wasn't getting done. So if I did other things, it was like taking away from the one thing I was supposed to be doing the most. So I felt like I couldn't do anything. I think I felt guilty doing other things. For sure, I didn't enjoy the other things as much, even when I was doing them. Even at work I felt guilty because I knew it was time away from my research.] I feel weighed down by it, [Whatever I was doing, it was sort of always in the back of my mind. That here's this big problem that I can't get over, and it was always in the back of my mind somewhere. It was always this big burden I was carrying around. So I guess that's what I mean about "weighed down by it".
It always bothered me; it was always a presence that I thought about.] and it occupies my mind, at least partly, most of the time. All of my energy seems to go into thinking about getting my work done, and I have no energy or patience for other aspects of my life. [Again, because I always felt weighed down by it, I always felt like I was carrying this around in the back of my mind. It occupied my mind a lot of the time, so I didn't have a lot of mental energy for thinking about other things. About patience, too: I can remember trying to read books, and I couldn't because I didn't have enough mental energy or patience to concentrate on anything other than some inane book. Like anything that would require thought or concentration to read, I couldn't read. I wouldn't be able to concentrate on the same thing for a very long time; I just wouldn't have the patience to do that, or I'd get sort of irritable pretty easily. I certainly wasn't in a very good disposition most of the time.] Because I feel guilty about it, I force myself to go into school, dig out my research, and attempt some kind of work. [The guilt came from myself because I felt I was letting myself down. I knew I was doing things that were not characteristic of me. I also felt like I was letting my parents down, letting my advisors down, everyone in my family who had supported me when I was an undergraduate--and going to graduate school--and my friends. Every time someone would ask me how my research was going, I would feel like this big guilt wave, and would not want to give an answer. I'd try to change the subject.] But inevitably I get nothing done, [It seemed inevitable at the time. It seemed like this was never going to end; I was never going to have a breakthrough and never find an answer to this problem. I think it's probably true that when I forced myself to go to school, I never really believed I was going to get anything done.
I sort of convinced myself of that after a while, which is probably why I didn't get anywhere after a while.] sit staring blankly into space or at the computer, [Boy, I sound really morbid, don't I?--Sitting staring blankly into space. I would sit there staring at the open book. You start staring at something and everything becomes blurred, and you sit there staring at nothing. I don't even know if I was daydreaming or not. I was just sitting there trying to work. You realize ten minutes have gone by and you haven't read a single word.] and go home with a sense of utter failure. [Because at the end of the day I would realize that I wasted another day sitting at school for eight hours getting absolutely nothing done--sometimes not even really giving it a good effort to get anything done, sometimes not giving any effort at all. I always felt every single day of school, "You failed. You failed."] I avoid talking about school, I avoid my advisors, I avoid the voices in my head that tell me that I am wasting valuable years of my life. [I told you I avoided my advisors pretty successfully for six months. I avoided them because I felt that I had to have something to show them. The longer the period of time became since I had first seen them, the more I felt I should have to show them. After two months went by, I was just petrified by saying, "Two months have gone by and I have got nothing." So I continually avoided them. I avoided talking about school, I certainly never brought it up, and if someone else brought it up I would sort of say, "Yeah, things are going kind of slow," and change the subject. There was always in the back of my mind this fear and this doubt that I was never going to get anywhere. If I never got anywhere, I should just quit now and not just sort of spin my wheels in grad school for four years. I'd gotten my master's only after a year and a half, so two and a half years would have been wasted if I didn't get anywhere.]
Everyone else tells me that to quit now would be a waste--but am I not already just wasting time? [I don't know if I had given quitting a serious consideration. I had talked about it, I talked to friends about it. I don't think I ever really sat down and thought about what the consequences might be, or I certainly never sat down and thought about what I would do if I quit. I don't remember sitting down and thinking, "Well, I'd apply for jobs here or I'd do this." I always felt like I knew exactly what I wanted to do, and this was the only way to be able to do what I wanted to do. So I did talk about quitting, but I don't know that I would say that I seriously sat down and thought about quitting. R: When you were talking to others were you looking for some response from them? P: Probably. I knew they would probably say, "You shouldn't quit." I was probably looking for some reassurance from people who knew me well--to say, "You can do it."--because I had sort of convinced myself that, "No, I couldn't do it."] I know that if I could just get past this and finish, [So I felt that if I could just get done and get my Ph.D., then I would have the time to study the things I really wanted; that if I want to read books, that my mind wouldn't be so obsessed with this one thing, and I would be able to go back to enjoying books like history and classics. If I finished this I could enjoy the things I used to enjoy.] I would be free to study other things that interest me. [I certainly felt like I wasn't free, freed from this obligation, free from thinking that I had to work on this all the time or else I'd have to feel guilty about it.] I suppose that somewhere within my mind is the strength of will, [I guess I thought I should have been strong enough to just make myself do it. R: So part of the problem of boredom is a problem of the will? P: Yes, I guess I thought I wasn't exerting my will strongly enough. 
I felt I was sort of being weak by avoiding the whole thing; that I needed to focus on the one topic and finish my dissertation.] I know that I am a strong person, [I mean I have the ability to go on even when things happen that don't go your way--or catastrophes, when bad things happen in your life--or being able to continue even if the odds are against you.] and I have already overcome many difficulties just to get to this point. [I felt like I had accomplished a lot just to get into graduate school. I went to college right out of high school, and I dropped out in my second semester because I lost everything I owned in a house fire. At that point I moved to my mother's house, then I moved out. I started living on my own for the first time. I was 18. I worked for a couple years and I started going back to school. I went to community college first, on grants and student loans. I went to a university on grants and student loans, as well as working to pay for the rest and supporting myself while at school. I felt, and still feel, a lot of pride that I was able to do that because it was hard for me to do that. I worked 30 to 40 hours a week, and went to school full time, and still graduated with honors, and I got into graduate school. I got full support for grad school, and that's why I thought I should be strong and should've been able to do it. I felt like I had already overcome a lot to even be here.] Reading back through some of what I have already written, I suspect that part of my problem is that I never have really been focused on only one thing in my life. As I said, I have always had a bunch of projects going on at one time. They all kept me interested and excited. [Maybe if it had been one thing that really absorbed me and that I was really interested in. I don't know if that would have mattered or not. I had other situations that I was bored in, but there was always an end. If it was at the beginning of the semester, I knew there was going to be an end.
I think at that time I just didn't see the end, other than quitting.] Now that I am supposed to focus on this one big project--which will require much time and single-mindedness--I find that I am bored. I have not been able to find it within myself to just do it. [What eventually happened is that, later, all I did was work on my thesis. Now I'm into it and it's the only thing I'm working on. I'm hardly even at home. I'm at school all the time and I'm working really hard on it. I'm actually interested in it, so maybe it's not that it was one topic. Maybe it's that I really didn't like doing it.] Rather, I have procrastinated, [Yeah, it's what I've been doing the whole time. Sitting there getting on the Internet, looking for other stupid little things to do. I was procrastinating working on it. I was procrastinating seeing my advisors, and telling them that I didn't think I could get anywhere. That's for sure. I always knew, "I should go. I should go. I should go." I told myself, "I'm gonna go tomorrow. I'm gonna go tomorrow. I'm gonna go tomorrow." I was afraid. I was afraid they were going to think I was stupid. I was afraid they were going to think I was unable to do the work. I was afraid of going and saying, "Look, I couldn't get anywhere."] looked for other interests, or sat idly wondering how it could possibly get done on its own. [I don't know if I seriously sat wondering if it was going to get done on its own, but maybe this is what I was thinking as I was sitting staring blankly at the screen. Sometimes I would sit there and stare at it and think, how is this ever going to get done and how am I going to do this.] Since such a miracle is not likely to occur, [R: Miracle? P: The magnitude of the problem--I really felt like I would never figure this out. It was going to take an act of God to put this thought into my head for it to finally click.
So I thought that it would take miraculous divine intervention to get the answer--to make me see the connection between the stuff I was reading and the problem I was trying to solve.] I will have to make a decision about how to handle this, and soon. For the university is not likely to let me sit here staring at the walls for too much longer. [I felt like I had to make a decision about what I was going to do--whether or not I was going to think seriously about quitting, or whether I should go and talk to my advisors about it. I knew eventually I had to do something. I certainly wasn't going to be able to come in and sit in my office because my fellowship was going to run out. I was going to have to do something, so I knew I just couldn't sit here and do nothing.] [P: Do you want to know what happened? This particular problem that I started working on is not what I ended up working on. After the six months were up and I finally saw my advisors again, we sat down and had a really, really long talk and we completely changed my problem. It was the same basic topic. It was the same area, just a different question--a different one of the seven questions. I got much more interested in that, and I think I never originally felt frustration exactly because I never cared that much. I was never interested in the first question, and I was never interested in the direction it was taking me. Whereas the problem I ended up working on, I was interested in the direction it was taking me. When things wouldn't work out I would get frustrated, and I think the difference was that then I actually cared whether or not I could get the answer.] [R: When it came to picking the first question, did you pick it? P: Actually no. I mean (Laughs), "we agreed." It was something that one of my advisors was interested in and I think, to be quite fair to them, they thought it was going to lead somewhere interesting.
Since they thought it was a particularly good question for a dissertation, and thought it would lead to a lot of things, I think that's why they picked it. At the time I didn't know enough about research to really argue with them, and so I agreed and that's what I started doing. Like they later said to me, "Sometimes you try an avenue of research, and it doesn't work out, and that's just what happens." But like they told me, "You can't disappear for six months."] [R: What about the time you got excited about the new project. Was that their choice too? P: Actually no. What happened, after we had this meeting, was we went over what my options were. So we looked back through their proposal and they said, "Here are some other questions and there are some options." Then they also said that if I wanted to get out completely and work on this other completely different area--that I told you about--they said I could still do that if I wanted to, but I had to realize that it was going to take me longer. So they sort of gave me all these options, and I went away and thought about it for a while. Then I came back to them and said I'd like to maybe work on the second problem. Again, they seriously encouraged me not to change the major area of my research because it would take me so much longer, but they gave me an out.] [R: When you came back and said, "Yes, I want to pursue this," do you think it had a big impact? P: Yes, I do. For one thing, I sort of felt like I had been forced into the situation--like it wasn't of my own choosing to work on this problem. In fact, I think I felt resentment toward them because I felt that they sort of pushed me into this problem and it was easier for them, and they wanted someone to work on it. So they picked me, and why did they have to pick me?
So I think it was different because when I picked it, they gave me all the options and I decided to do it--actually against the advice of other graduate students, who told me the same thing would happen again and that I'd be better off going into the area that really interested me. But that didn't happen, and I really got into this.

NOTES

(1.) Portions of this article originally appeared in the author's doctoral dissertation, Being bored with one's life: An empirical phenomenological study, 1999.

REFERENCES

Abramson, E. E., & Stinson, S. G. (1977). Boredom and eating in obese and non-obese individuals. Addictive Behaviors, 2, 181-185.

Bargdill, R. W. (1999). Being bored with one's life: An empirical phenomenological study. (Doctoral dissertation, Duquesne University, 1998). Dissertation Abstracts International-B, 59 (12), AAT 9914307.

Bargdill, R. W. (2000). A phenomenological investigation of being bored with life. Psychological Reports, 86, 493-494.

Beck, A. T. (1967). Depression: Clinical, experimental and theoretical aspects. New York: Harper & Row.

Beck, A. T., Rush, A. J., Shaw, B. F., & Emery, G. (1979). Cognitive therapy of depression. New York: The Guilford Press.

Blasczynski, A. P., Cocco, N., & Sharpe, L. (1995). Differences in preferred levels of arousal in two sub-groups of problem gamblers: A preliminary study. Journal of Gambling Studies, 11 (2), 221-229.

Boss, M. (1969). Anxiety, guilt, and psychotherapeutic liberation. Review of Existential Psychology and Psychiatry, 11 (3), 173-195.

Caplan, R. D., Cobb, S., French, Jr., J. R. P., van Harrison, R., & Pinneau, Jr., S. R. (1975). Job demands and worker health. Washington, DC: U.S. Department of Health, Education and Welfare.

Clive, G. (1965). A phenomenology of boredom. Journal of Existentialism, 5 (20), 359-370.

Farmer, R., & Sundberg, S. D. (1986). Boredom proneness--the development and correlates of a new scale. Journal of Personality Assessment, 50, 4-17.

Fenichel, O. (1953). On the psychology of boredom.
The collected papers of O. Fenichel. New York: W. W. Norton.

Ferguson, D. (1973). A study of occupational stress and health. Ergonomics, 16, 649-663.

Fiske, D. W., & Maddi, S. R. (1961). Functions of varied experience. Homewood, IL: Dorsey.

Forsyth, G., & Hundleby, J. D. (1987). Personality and situation as determinants of desire to drink in young adults. International Journal of the Addictions, 653-669.

Frankl, V. (1959). Man's search for meaning. [I. Lasch, trans.] New York: Simon and Schuster.

Freud, S. (1989). The Freud reader. [P. Gay, ed.] New York: W. W. Norton.

Giorgi, A. (1975). An application of phenomenological method in psychology. In A. Giorgi, C. Fischer, & E. Murray (eds.), Duquesne studies in phenomenological psychology (2). Pittsburgh: Duquesne University Press.

Horney, K. (1950). Neurosis and human growth. New York: W. W. Norton.

Iso-Ahola, S. E., & Crowley, E. D. (1991). Adolescent substance abuse and leisure boredom. Journal of Leisure Research, 23, 260-271.

Johansson, G., Aronsson, G., & Lindstrom, B. O. (1978). Social, psychological and neuroendocrine stress reactions in highly mechanized work. Ergonomics, 21, 583-600.

Johnston, L. D., & O'Malley, P. M. (1986). Why do the nation's students use drugs and alcohol:

Knowles, R. (1986). Human development and human possibility. Lanham: University Press of America.

Leon, G. R., & Chamberlain, K. (1973). Emotional arousal, eating patterns, and body images as differential factors associated with varying success in maintaining a weight loss. Journal of Consulting and Clinical Psychology, 40, 474-480.

O'Connor, D. (1967). The phenomena of boredom. Journal of Existentialism, 7 (27), 381-399.

O'Hanlon, J. F. (1981). Boredom: Practical consequences and a theory. Acta Psychologica, 49, 53-82.

Perkins, R. E., & Hill, A. B. (1985). Cognitive and affective aspects of boredom. British Journal of Psychology, 76, 221-233.

Samuels, D. J., & Samuels, M. (1974). Low self-concept as a cause of drug abuse.
Journal of Drug Education, 4, 421-438.
Sartre, J. P. (1947). Existentialism. [B. Frechtman, trans.] New York: Philosophical Library.
Sherman, J. E., Zinser, M. C., Sideroff, S. I., & Baker, T. B. (1989). Subjective dimensions of heroin urges: Influence of heroin-related and affectively negative stimuli. Addictive Behaviors, 14, 611-623.
Smith, R. P. (1981). Boredom: A review. Human Factors, 23 (3), 329-340.
Straus, E. (1980). Disorders of personal time. Phenomenological psychology. New York and London: Garland.
Wangh, M. (1979). Some psychoanalytic observations on boredom. International Journal of Psychoanalysis, 60, 515-527.
Wertz, F. (1983). From everyday to psychological description: Analyzing the moments of qualitative data analysis. Journal of Phenomenological Psychology, 14 (2), 197-241.

From checker at panix.com Tue May 17 14:49:42 2005
From: checker at panix.com (Premise Checker)
Date: Tue, 17 May 2005 10:49:42 -0400 (EDT)
Subject: [Paleopsych] Prevention: Do You Have the Silent Syndrome? How boredom affects health; ways to prevent boredom
Message-ID: 

Do You Have the Silent Syndrome? How boredom affects health; ways to prevent boredom
Prevention, May 2000 v52 i2 p140
by Linda Mooney

It's been linked to everything from heart attack to drug abuse, and it can even lead to premature death. Find out what this hidden risk factor is--and learn 29 simple (and fun!) ways to neutralize it

It may promote back pain, heart attack, depression, and hostility. It may lead you to abuse drugs, drive drunk, develop an eating disorder, or cheat on your spouse. It may even shrink your brain like a cotton shirt in scalding water. And it could kill you. One researcher speculates that when all else fails, this mysterious risk factor "entertains itself" by making you sick--possibly even by promoting the development of cancer. What is this silent threat? Boredom. Sounds impossible, but it's true. Twenty-one percent of Americans say that they are regularly bored.
A lot of researchers think the number is much higher. They view boredom as a serious--and often undetected--form of stress. They say that it's so pervasive in America that we just don't notice it. And yet it's remarkably easy--and fun--to cure.

Why It's Trouble

Boredom is ennui, malaise, the blahs, a rut, a big yawn. When you're bored, "nothing matters much," says Sam Keen, PhD, from Sonoma, CA, author of Inward Bound (Bantam, 1992). "You have no burning dreams or lively hopes. Not even outrage. You go through the day automatically, by the numbers, without feeling. Same old rat race." Boredom is sometimes tough to detect because it's less of a feeling than an "overwhelming absence of feeling," says Steve Vodanovich, PhD, associate professor of psychology at the University of West Florida in Pensacola, whose studies have found that boredom-prone people report more emotional symptoms than other people. When you're bored, you may sense that there's "something wrong" but can't put your finger on it. If you could, you might take up an interesting hobby, travel, or sign up for a workshop. But more often than not, says Dr. Vodanovich, "boredom can drive us to self-destructive acts." Studies have linked boredom to accidents, drug use, pathological gambling, even street violence. Surveys show that many people ease their boredom with overeating, drinking, and infidelity. Boredom can also lead to depression. (See "When Is Boredom Depression?" on p. 144.) Everyone's susceptible, though studies show that boredom is more common in men. You're also more likely to be bored the older you get, as you fall victim to the "been there, done that" phenomenon. That process may be accelerated by TV and movies, which allow you to have "virtual" experiences that might otherwise have taken you many years to accumulate, if at all, suggests Augustin De la Pena, PhD, a psychophysiologist at the Sleep Disorders Center in Austin, TX. Dr.
De la Pena's controversial theory--that your mind "entertains itself" through aging and disease processes--is based on his review of about 1,000 animal and human studies. Thanks to your accumulated screen time, "you start getting bored earlier in life, aging quicker, and developing all sorts of disorders and diseases in order to stay, in a sense, entertained," he says. Other evidence is not speculative. The famed Framingham Heart Study found that bored housewives were more than twice as likely to have a heart attack as other women. Studies by Marian Diamond, PhD, professor of anatomy in the department of integrative biology at the University of California in Berkeley, found that when rats are deprived of stimulation--science-talk for bored--their cortex shrinks. Dr. Diamond was even able to stimulate brain growth in old rats when she supplied them with toys and the company of other rats. Usually, it's not wise to try to translate rat studies to humans, but in this case you can make an exception. Take a tip from the rats: A little stimulation can be a healthy thing. "Boredom is a gentle signal to move on to a new challenge. If you don't respond, stronger ones can follow, such as depression, pain, illness, even death," says Dr. Keen.

29 Ways to Get Out of Your Rut

In 1996, 48-year-old Alaskan Salli Slaughter-Mason took a look at her life--a loving husband, wonderful kids, a great job, and strong community ties--and asked herself, "Is that all there is?" Despite their fairy-tale life, the Slaughter-Masons weren't happy. They were too busy, too stressed, and stuck in a rut. So, over oysters on their 18th wedding anniversary, Salli asked her husband, "What do you want to do before you die?" Neither could answer the question right away. But by evening's end, they had an answer: a chance to see the world.
Months later, they sold all their possessions, left their Anchorage home and their jobs, and set off to homeschool their 7- and 14-year-old girls on a yearlong sabbatical around the world. Although it cost them half of their retirement nest egg, Salli says it was "the best thing we've ever done. Through our journey, we grew closer, stronger, and happier as a family and as individuals. None of us will ever be the same." The Slaughter-Masons' solution is one way--a big way--to shake up your life. But you don't have to cash in your 401(k) to stir up a little excitement. Here are some easy--and fun--rut busters:

When you're bored with your relationship ...

1. Dare to be bare. When was the last time you greeted your mate unexpectedly in the nude? Light a few candles, pour wine, play some Pavarotti, and offer to rub his feet. Slip a passionate letter into her briefcase. Break the boring cycle: Do or say something each day that surprises your spouse and you.

2. Go for goose bumps. Is a movie, nature walk, or bike ride your idea of a pleasant weekend afternoon? Then try a bungee jump, go skydiving, or hop aboard the most spine-tingling roller coaster you can find. In a study of 53 married couples assigned to spend 1 1/2 hours a week together doing self-defined pleasant or exciting activities, those who did exciting activities showed a greater increase in marital satisfaction afterward than those who engaged in "pleasant" pastimes.

3. Trim some trees. Sure, it's his job, but he's going to be doing the vacuuming and grocery shopping instead. Swapping chores spices up the routine and gives a new sense of appreciation for what your other half does.

4. Open your home. Just as couples bond over babies, bringing any new life into your house adds a spark. Take in an exchange student, or adopt a homeless pet. Host weekly dinner parties or a game of poker with friends.

5. Make old times new.
Did you ever enjoy going to rock concerts, cooking lasagna together, or sharing hot chocolate at the 30-yard line? Do you remember all the laughs you had playing Trivial Pursuit or Scrabble? Dust off those good times and give them another chance, but add a little intrigue to any competitive pursuits--play for money (or something sexual!).

6. Do tell, do ask. Is life a yawn for your partner too? Chances are, you're both going stir-crazy. This is a wonderful time to re-open the lines of communication and build intimacy. Bring up the subject in a fun way--for example, as you take your surprised spouse to a swing dancing class or foreign film festival.

When you're bored with your job ...

7. Look between the lines. If your job is boring you out of your gourd, consider the possibility that it could be your fault. "Boredom is in the mind of the individual and is not a function of the environment," says Ellen Langer, PhD, professor of psychology at Harvard University and author of The Power of Mindful Learning (Addison-Wesley, 1997). To prove her point, she took women typically bored by football and divided them into groups. She told one group just to watch the Super Bowl game; another to watch it but also look for three to six things they'd never noticed before. In the end, the women who were looking for interesting things liked the game more than before--and the more things they found, the more they enjoyed it. To apply this to your job, walk in tomorrow as if it were your first day. Keep your eyes and ears open for interesting things that you've never noticed before. See if it doesn't feel like a new job too.

8. Jump-start your day. Even the commute can feel like work. Make it enjoyable so you arrive in a better mood. Go to a local pool first thing in the morning and swim. Take the scenic route to work. Listen to a classic book on tape or your favorite CD. Or shut off the news and enjoy the silence of your thoughts.

9. Sign up.
Take a PowerPoint workshop or a night course in accounting or a second language. Ask the boss if you can serve on the quality control committee. Start a Toastmasters group. Gathering new skills and talents not only keeps you challenged but builds on your realm of expertise so that you can ask for more responsibility--a big plus, since having more say over your work makes any job more interesting.

10. Loosen up. When it's possible, blur the line between work and play. Form a team for a daily 3 PM game of hallway soccer; just 15 minutes can break the monotony and get creative juices flowing. Or push to have meetings in exotic locales. More companies are trying this, reporting that the more informal environment allows participants to relax and open up. Can't afford to take the staff to an island? How about the local diner for coffee and bagels--or serve refreshments in your own conference room? Be creative as you stick to your budget.

11. Decide where you're going. Is crunching numbers crushing your spirit? Maybe you should try for a position in budget planning. A lack of professional direction or focus can make your job feel like just that--a job, not a career. With a clear goal, the day-to-day routine feels more meaningful and less boring.

12. Don't answer that! Just when you start writing that cumbersome dog-food proposal, the phone rings or your computer says, "You've got mail." And your concentration evaporates. According to one study, being disrupted a lot contributes to feelings of boredom and dissatisfaction with a task, especially if the interruption isn't urgent. To really get into the project, close your door and let phone messages go through to voice mail.

13. Stop picking on you! Saying or thinking "My speech was awful" or "I'll never be as good as Tom" can make your work even more tiresome. In one study, people who were the most judgmental when it came to themselves showed an increased tendency to become bored.
Treat yourself like your best friend; pick up fresh flowers for your office when you've finished a tough project.

14. Play hooky. Sneak out of the daily grind for an afternoon of pure hedonism. Get a pedicure. Take off your shoes and walk barefoot in the grass. Have a root beer float in an old-fashioned diner. If it rains, walk without an umbrella. You'll return to work the next day feeling recharged.

15. Love it or leave it. If all else fails--and you can't redesign your job--move on. Rediscover your real passion. Finding a field that's compatible with your self-identity gives a general sense of purpose and can break you free of ruts at home too.

When you're bored with your free time ...

16. Spook yourself. Try parasailing or amateur stand-up comedy (bring friends who'll laugh no matter how you do). If you're single and haven't dated in eons, ask out that cute salesclerk. Adventure and challenge trigger a release of brain chemicals known as endorphins, which lift your mood and make you feel youthful.

17. Pull the plug on ER. Not only is vegging out on Matlock reruns unproductive, but, according to one study, it can be addictive. TV gradually requires more and more of your time to deliver that entertainment "high." Research also shows that there's a reason it's called the boob tube: It can dim your analytical abilities.

18. Fry up some blackened redfish. Take a class in woodworking, black-and-white photography, fly-fishing, Shakespearean acting, or Cajun-style cooking. Learn Russian or sign language. Take bareback riding lessons. Learn to scuba dive. A Rutgers University study found that people who did a variety of things were more satisfied with their life.

19. Join the crowd. Start a sci-fi book group, volunteer for a local political party, lead a scout troop, or join a nearby chapter of the National Audubon Society.
It opens you up to a whole new world of people, and often interpersonal relationships are the best spark plug to a more interesting life.

20. Imagine being someone else. Paint an accent wall bold blue. Change your decor. Rearrange your furniture for a fresh perspective. Hang wind chimes. Listen to Miles Davis if you are an Elvis fan. Try the catfish that you think you hate. If you're a jeans person, dine out dressed in black satin, and don a scarf or even a wig. By doing things that are out of character, you reinvent yourself as the person you want to be.

21. Be a kid again. Start a snowball fight. Cut your sandwich into triangles, and learn a new word every day. Wave to people in passing cars. Throw a slumber party and have your friends bring stories and photos; pop popcorn and watch movies.

22. Let fate guide you. Open up to the events section of your newspaper, close your eyes, and point. Whatever you touch, you do--period. You may finally get to see the local museum or historical sights in your area. Or maybe you'll end up at a great bar or restaurant you've never gone to before. Trying this helps you break out of the "there's nothing to do" box you've built in your mind. Do the same thing with a map.

23. Set a quota. Tell yourself you're going to meet someone new once a week, and plan to have at least five fun, memorable experiences each month. As the end of the week or month approaches, check your progress. If you're lagging behind, you know it's time to become more assertive and start making plans.

24. Nudge the drudge. Scraping macaroni off the dinner plates and emptying the cat litter may feel more mentally fatiguing than, say, all-day vigorous gardening, simply because you enjoy one more than the other. To stave off boredom, use music and humor to make unpleasant tasks feel like play or a game. Dusting to the oldies, anyone?

When you're bored with retirement ...

25. Travel right. Even travel can become a routine if you're not careful.
If you always play golf and stay near the hotel, stay in a bed-and-breakfast and go parasailing. Pass up the grilled chicken that you know and love for a local sampling of alligator or curried lamb with banana-raisin chutney. Track down the hideaway hot spots, make friends with the locals, and camp under the stars. Or sign up for an adult education study tour and travel with a group for days or weeks learning about art, archaeology, or wildlife ecology.

26. Live for today--and tomorrow. Can't see beyond tonight's pot roast? Mapping the highlights of your past, present, and future is a great way to gain perspective on what's important--and set your sights on the great things to come.

27. Peak past your prime. Just because your body's best days are behind it, that doesn't mean that you can't reach for new heights. Pursue activities that don't require heaps of physical prowess, such as learning to play the saxophone or trying out for a role in your local amateur theater. Don't be afraid to do something new.

28. Be a pal. Losing touch with the gang from your old neighborhood or neglecting your best friend who moved to Seattle--then not making new friends--can be a recipe for loneliness and boredom. One study done on older adults revealed that a lack of friendships was a major contributing factor in their depression and boredom.

29. Stay in the driver's seat. Research says that most of us are happiest when the wheel is still ours as we age. Patients in one nursing-home experiment lived longer when given greater control of their institutionalized environment. As much as the kids or relatives may mean well, in other words, politely encourage them to butt out.

Top 10 Boring Tasks(*)

1. Standing in line
2. Laundry
3. Commuting
4. Meetings
5. Dieting
6. Exercising
7. Weeding lawn or garden
8. Housework
9. Political debates
10. Opening junk mail

(*) National poll results from the Boring Institute

RELATED ARTICLE: When Is Boredom Depression?
If you're like most Americans, you probably ignore boredom, hoping it'll pass. Or maybe you start a new business. Or take up a hobby. Or have an affair. But if these fail, you may find yourself depressed. So how can you tell when seemingly harmless boredom has evolved into something worse? "When the nothingness is replaced by violent feelings and vivid imagination; when the void is alive with demons," says Sam Keen, PhD. "Think of boredom as the common cold of the psyche, and depression as pneumonia." If you've been experiencing painful boredom for several weeks, you're probably depressed. Seek counseling. A psychiatrist, psychologist, or counselor can help you get to the root of your problem.

RELATED ARTICLE: Who's Bored?

Everyone gets bored sometimes, but certain personalities are more prone than others. For simplicity, Sam Keen, PhD, from Sonoma, CA, and author of Inward Bound (Bantam, 1992), breaks us down into the two personality types, A and B. Here he describes how each type deals with boredom.

Type A personalities: Fast-paced, sociable people escape boredom by running from it or keeping it in their unconscious, but once it catches up with them, they can fall prey to fatigue, or worse: depression or disease.

Type B personalities: Laid-back folks are more boredom-prone but are not often depressed because they tolerate boredom better than anxiety and aggression. They are not threatened by inactivity or lack of intensity. The connection between boredom and disease is less obvious for them.

Which is better? Both could learn something from the other, says Warren Rule, PhD, professor of rehabilitation counseling at Virginia Commonwealth University, Medical College of Virginia in Richmond. Type A's, who usually have the most difficulty with boredom, are great at planning their futures but could profit from getting more pleasure out of the present.
Type B's, on the other hand, should spend more time consciously figuring out what's missing in their life, then learn to take risks.

Linda Mooney is Prevention's beauty editor.

From checker at panix.com Tue May 17 14:49:54 2005
From: checker at panix.com (Premise Checker)
Date: Tue, 17 May 2005 10:49:54 -0400 (EDT)
Subject: [Paleopsych] Boredom proneness and psychosocial development. John D. Watt; Stephen J. Vodanovich.
Message-ID: 

Boredom proneness and psychosocial development. John D. Watt; Stephen J. Vodanovich.
The Journal of Psychology, May 1999 v133 i3 p303(1)

Author's Abstract: The effect of boredom proneness as measured by the Boredom Proneness Scale (R. F. Farmer & N. D. Sundberg, 1986) on college students' psychosocial development was investigated via the Student Developmental Task and Lifestyle Assessment (SDTLA; R. B. Winston, T. K. Miller, & J. S. Prince, 1995). Low boredom-prone students had significantly higher scores on the following SDTLA measures: career planning, lifestyle planning, peer relationships, educational involvement, instrumental autonomy, emotional autonomy, interdependence, academic autonomy, and salubrious lifestyle. Gender differences on boredom proneness and psychosocial development measures are discussed.

--------------

According to Chickering and Reisser (1993, p. 2), "Psychosocial theories view development as a series of developmental tasks or stages, including qualitative changes in thinking, feeling, behaving, valuing, and relating to others and to oneself." In the context of higher education, psychosocial development refers to the study of how traditional-aged college students resolve biological and psychological changes within themselves in relation to environmental experiences and expectations.
Although a number of psychosocial theories have been proposed to help explain how individuals develop psychosocially, Chickering's conceptualization has, perhaps, been the most influential (Chickering, 1969; Chickering & Reisser, 1993). Through his research, Chickering recognized that students in higher education experience unique challenges and opportunities and undergo changes in more than their intellectual development. Specifically, Chickering (1969) identified seven broad changes experienced by traditional-aged college students. Chickering and Reisser (1993) revised, renamed, and reordered the original seven vectors into the following developmental stages: developing competence, managing emotions, moving through autonomy toward interdependence, developing mature interpersonal relationships, establishing identity, developing purpose, and developing integrity. Chickering's intention was to develop a conceptualization that could be used by researchers to address the needs of students beyond basic academic concerns. Chickering and Reisser stated that "human development should be the organizing purpose for higher education" and "community colleges and four-year institutions can have significant impact on student development along the major vectors" (p. 265). One variable that may significantly affect college students' psychosocial development is boredom. Boredom is, perhaps, best viewed as "an aversion for repetitive experience of any kind, routine work, or dull and boring people and extreme restlessness under conditions when escape from constancy is impossible" (Zuckerman, 1979, p. 103) and "a state of relatively low arousal and dissatisfaction, which is attributed to an inadequately stimulating situation" (Mikulas & Vodanovich, 1993, p. 3). Mikulas and Vodanovich (p. 3) discussed boredom in terms of state, meaning a "state of being or state of consciousness, a particular combination of perceptions, affect, cognitions, and attributions." 
That is, this definition reflects the common consideration of boredom as a transitory state. It should be noted, however, that boredom has also been discussed and assessed in trait terms (Farmer & Sundberg, 1986; Iso-Ahola & Weissinger, 1990; Watt & Ewing, 1996). Boredom has been associated with a host of social and psychological issues. In education, it has been linked to low grades and diminished academic achievement (Freeman, 1993; Maroldo, 1986), truancy (Irving & Parker-Jenkins, 1995), dropout rates (Robinson, 1975; Sartoris & Vanderwell, 1981; Tidwell, 1988), school dissatisfaction (Aldridge & DeLucia, 1989; Gjesne, 1977), and oppositional behavior (Larson & Richards, 1991). In industry, boredom has been associated with job dissatisfaction (O'Hanlon, 1981), property damage (Drory, 1982), and increased accident rates (Branton, 1970). In a clinical context, boredom has been reported to be significantly positively related to depression, anxiety, hopelessness, loneliness, hostility (Farmer & Sundberg, 1986; Vodanovich, Verner, & Gilbride, 1991), overt and covert narcissism (Wink & Donahue, 1997), alienation (Tolor, 1989), and borderline personality disorder (James, Berelowitz, & Vereker, 1996). Significant negative correlations have been reported between boredom proneness and self-actualization (McLeod & Vodanovich, 1991); purpose in life (Weinstein, Xie, & Cleanthous, 1995); sexual, relationship, and life satisfaction (Watt & Ewing, 1996); and persistence and sociability (Leong & Schneller, 1993). Boredom has also been implicated as a contributing factor in substance use (Johnston & O'Malley, 1986), pathological gambling (Blaszczynski, McConaghy, & Frankova, 1990), and eating disorders (Ganley, 1989). Although in the past decade we have witnessed an increasing interest in boredom research and scholarship, it remains a neglected topic in both psychology and education. 
Our purpose in the present study was to investigate the effect of boredom proneness on psychosocial development among traditional-aged college students. On the basis of the previous review, we postulated that boredom proneness would be negatively correlated with psychosocial development. Specifically, low boredom-prone students were expected to possess significantly higher psychosocial development scores. Gender differences for both boredom proneness and psychosocial development were also examined. On the basis of past research (Vodanovich & Kass, 1990a; Watt & Blanchard, 1994; Watt & Ewing, 1996; Watt & Vodanovich, 1992a), we hypothesized that men would have significantly higher boredom-proneness scores than women. Because of inconsistencies in the literature, no specific predictions were made regarding gender differences on psychosocial development.

Method

Participants

The participants were 76 female and 66 male (N = 142) student volunteers attending undergraduate psychology and education classes at a large university in the midwestern United States. The participants' ages ranged from 17 to 24 years (M = 19.4, SD = 1.4). The majority of the participants were White (90.8%), 1st-year students (69.7%), unmarried (98.6%), and living in a single-sex residence hall (35.2%).

Measures

Boredom proneness. Farmer and Sundberg's (1986) 28-item (true-false) Boredom Proneness Scale (BPS) was used to assess the tendency to experience boredom. Sample items from this measure include "It is easy for me to concentrate on my activities," "It takes a lot of change and variety to keep me really happy," and "It takes more stimulation to get me going than most people." In this study, the BPS was modified to a 7-point Likert-type format. Responses ranged from highly disagree (1) to highly agree (7); higher scores reflected greater boredom proneness.
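Scoring a Likert-format scale like this is mechanical, but it is easy to get wrong when some items are worded in the low-boredom direction (the first sample item, "It is easy for me to concentrate on my activities," reads that way). The sketch below shows one conventional approach: reverse-keyed responses are rescored as 8 minus the response on a 1-7 scale before summing. Which items of the BPS are actually reverse-keyed is not stated in this article, so the reverse-keyed set used here is purely illustrative.

```python
def bps_total(responses, reverse_keyed):
    """Total score for a 28-item, 7-point Likert scale.

    responses: list of 28 integers in 1..7.
    reverse_keyed: set of 0-based indices of reverse-keyed items
    (illustrative here; consult the published scale for the real keying).
    """
    if len(responses) != 28 or any(not 1 <= r <= 7 for r in responses):
        raise ValueError("expected 28 responses on a 1-7 scale")
    # Reverse-keyed items are flipped so that higher always means more boredom.
    return sum((8 - r) if i in reverse_keyed else r
               for i, r in enumerate(responses))

# Example: strong agreement with every item, with item 0 treated as
# reverse-keyed for illustration: 27*7 + (8 - 7) = 190.
score = bps_total([7] * 28, reverse_keyed={0})
```

With no reverse-keyed items the score simply ranges from 28 (all 1s) to 196 (all 7s).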
Adequate internal consistency reliability for the true-false format of the BPS was reported by Farmer and Sundberg (1986; r = .79) and Watt and Davis (1991; r = .82). Comparable internal consistency reliabilities have also been reported by Watt and Blanchard (1994; r = .81) and Watt and Ewing (1996; r = .84), using the same 7-point Likert-type format used in this study. The internal consistency reliability of the BPS in the present study was .83. Sufficient validity evidence for the BPS has also been reported. For instance, the BPS has been significantly positively related to depression, hopelessness, loneliness, and amotivation (Farmer & Sundberg, 1986). Other researchers have reported significant positive relationships between boredom proneness and impulsivity (Watt & Vodanovich, 1992b); hostility, anxiety, and depression (Vodanovich et al., 1991); sexual boredom and relationship dissatisfaction (Watt & Ewing, 1996); and poor impulse control and dogmatism (Leong & Schneller, 1993). Significant negative relationships have been reported between boredom proneness and self-actualization (McLeod & Vodanovich, 1991), life satisfaction (Farmer & Sundberg, 1986), need for cognition (Watt & Blanchard, 1994), and positive affect (Vodanovich et al., 1991). Last, factor analytic evidence suggests that boredom may best be viewed as a multidimensional construct (Vodanovich & Kass, 1990b; Vodanovich, Watt, & Piotrowski, 1997).
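The internal consistency figures quoted here (.79 to .84) are presumably coefficient alpha values, the standard index for this kind of scale, though the article does not name the formula. A minimal sketch of Cronbach's alpha from raw item data, using the familiar identity alpha = k/(k-1) * (1 - sum of item variances / variance of total scores):

```python
import numpy as np

def cronbach_alpha(data):
    """Cronbach's alpha for a (n_respondents, n_items) matrix of item scores."""
    data = np.asarray(data, dtype=float)
    n, k = data.shape
    item_vars = data.var(axis=0, ddof=1)      # sample variance of each item
    total_var = data.sum(axis=1).var(ddof=1)  # variance of respondents' totals
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)
```

As a sanity check, two perfectly correlated items give alpha = 1, and items whose covariance is zero drive alpha toward 0.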
Specifically, Vodanovich and Kass (1990b) found the BPS to consist of the following five factors: External Stimulation (need for excitement, change, and variety; internal consistency reliability .78 for the present study), Internal Stimulation (difficulty in keeping oneself interested and entertained; .70 for the present study), Affective Responses (negative emotional reactions to boredom; .63 for the present study), Perception of Time (perception of slow time passage; .72 for the present study), and Constraint (feelings of restlessness and impatience; .74 for the present study).

Student Developmental Task and Lifestyle Assessment. We used the Student Developmental Task and Lifestyle Assessment (SDTLA, Form F95; Winston et al., 1995) to assess psychosocial development among traditional-aged college students. The SDTLA is designed to measure certain aspects of Chickering's model of psychosocial development (see Chickering, 1969; Chickering & Reisser, 1993). The 153-item SDTLA is composed of three developmental task areas: (a) establishing and clarifying purpose, (b) developing autonomy, and (c) having mature interpersonal relationships. Each task is divided into subtasks. In addition, the SDTLA has the Salubrious Lifestyle Scale, designed to assess the degree to which a student's lifestyle is consistent with or promotes good health and wellness practices. The SDTLA also contains a 7-item Lie Scale to assess response bias.
The establishing and clarifying purpose task consists of four subtasks: (a) educational involvement (the degree to which students have well-defined educational goals and plans, are actively involved in the academic life of their school, and are knowledgeable about campus resources); (b) career planning (the degree to which students are able to formulate specific vocational plans, make a commitment to a chosen career field, and take the appropriate steps necessary to prepare themselves for eventual employment); (c) lifestyle planning (the degree to which students are able to establish a personal direction and orientation in their lives, taking into account personal, ethical, and religious values, future family planning, and educational and vocational objectives); (d) cultural participation (the degree to which students are actively involved in a wide variety of activities and exhibit a wide array of cultural interests and a developed sense of aesthetic appreciation). The developing autonomy task is composed of four subtasks: (a) emotional autonomy (the degree to which students trust their own ideas and feelings, have self-assurance to be confident decision makers, and are able to voice dissenting opinions in groups); (b) interdependence (the degree to which students recognize the reciprocal nature of the relationship between themselves and their community and fulfill their citizenship duties and responsibilities); (c) academic autonomy (the degree to which students have developed the capacity to deal with ambiguity and to monitor and control behavior in ways that allow for the attainment of personal goals and fulfillment of responsibilities); and (d) instrumental autonomy (the degree to which students are able to structure their lives and manipulate their environment in ways that allow them to satisfy daily needs and meet personal responsibilities without assistance from others). 
The mature interpersonal relationships task is composed of two subtasks: (a) peer relationships (the degree to which students have developed mature peer relationships characterized by greater trust, independence, frankness, and individuality); and (b) tolerance (the degree to which students are accepting of those of different backgrounds, lifestyles, beliefs, cultures, races, and appearances). Sufficient reliability and validity data have been reported for earlier versions of the SDTLA (see Hess & Winston, 1995; Winston, 1990; Winston & Miller, 1987). With one exception, scale and subscale reliability scores for the SDTLA used in the present study were judged sufficient: establishing and clarifying purpose task (r = .91), educational involvement subtask (r = .78), career planning subtask (r = .82), lifestyle planning subtask (r = .80), cultural participation subtask (r = .61), developing autonomy task (r = .86), emotional autonomy subtask (r = .71), interdependence subtask (r = .74), academic autonomy subtask (r = .77), instrumental autonomy subtask (r = .55), mature interpersonal relationships task (r = .75), peer relationships subtask (r = .63), tolerance subtask (r = .76), Salubrious Lifestyle Scale (r = .74), and Lie Scale (r = .17). Because of the poor internal consistency reliability associated with the Lie Scale, it was not used in further analyses.

Results

Correlational Analyses

We computed Pearson correlation coefficients to examine the relationship between boredom proneness and psychosocial development. Consistent with expectations, the results indicated significant negative correlations between boredom proneness total and subscale scores and psychosocial development scores (see Table 1). [TABULAR DATA FOR TABLE 1 OMITTED]

Gender Differences

Specifying age and class standing (1st, 2nd, 3rd, 4th year) as covariates, we computed a one-way multivariate analysis of covariance (MANCOVA) to test for the effect of gender on psychosocial development scores.
With Wilks's criterion, an overall main effect for gender was found, F(11, 128) = 2.3, p < .05, η² = .16. Follow-up univariate analyses indicated that women had significantly higher scores on the following psychosocial development measures: educational involvement, F(1, 138) = 7.9, p < .01; lifestyle planning, F(1, 138) = 9.2, p < .005; instrumental autonomy, F(1, 138) = 5.5, p < .05; tolerance, F(1, 138) = 8.0, p < .005; salubrious lifestyle, F(1, 138) = 5.6, p < .05; and academic autonomy, F(1, 138) = 4.8, p < .05. Age and class standing were selected as covariates because past research has generally revealed a relationship between psychosocial development and these two variables (Pascarella & Terenzini, 1991). Age, however, did not provide reliable adjustment to any of the dependent variables and, thus, was omitted from further analyses. Means, adjusted means, and standard deviations for psychosocial development by gender are reported in Table 2. [TABULAR DATA FOR TABLE 2 OMITTED] A one-way multivariate analysis of variance (MANOVA) was computed to test for the effect of gender on the five factors of the BPS. The findings indicated an overall main effect for gender, F(5, 133) = 4.1, p < .005, η² = .13. Univariate contrasts indicated that men (M = 32.7, SD = 8.2, n = 65) had significantly higher scores than women (M = 28.0, SD = 7.2, n = 74) on external stimulation, F(1, 137) = 13.1, p < .001. No significant differences related to gender were found on the other BPS subscales.

Multivariate Analyses

A median split was conducted on the distribution of boredom proneness scores to categorize participants into high (M = 43.6, SD = 11.1, n = 73) and low (M = 34.0, SD = 9.8, n = 66) boredom groups.
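The median-split grouping described above can be sketched in a few lines. This is an illustrative reconstruction in Python/NumPy, not the authors' code; the sample scores and the rule for ties at the median are invented for the example.

```python
import numpy as np

def median_split(scores):
    """Classify participants as high or low boredom-prone via a median split.

    Returns a boolean mask: True for scores above the sample median (the
    "high" group), False otherwise. How ties at the median are assigned
    varies by study; in this sketch they fall into the "low" group.
    """
    scores = np.asarray(scores, dtype=float)
    return scores > np.median(scores)

# Illustrative Boredom Proneness Scale totals (invented, not the study's data)
bps = np.array([28, 51, 43, 34, 47, 30, 39, 55])
high = median_split(bps)
print("high group:", bps[high])   # scores above the median
print("low group:", bps[~high])   # scores at or below the median
```

Group means and standard deviations such as those reported above would then be computed separately on `bps[high]` and `bps[~high]`.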
We computed a two-way MANCOVA to assess the effect of boredom proneness on psychosocial development scores, after adjusting for the effects of gender and class standing. A significant main effect for boredom proneness was found, F(11, 125) = 5.3, p < .001, η² = .32. Univariate analyses indicated that the low boredom-prone students had significantly higher scores than the students with high boredom proneness on the following measures of psychosocial development: educational involvement, F(1, 135) = 20.8, p < .001; career planning, F(1, 135) = 10.2, p < .005; lifestyle planning, F(1, 135) = 22.5, p < .001; peer relationships, F(1, 135) = 23.1, p < .001; instrumental autonomy, F(1, 135) = 17.2, p < .001; emotional autonomy, F(1, 135) = 15.0, p < .001; salubrious lifestyle, F(1, 135) = 20.8, p < .001; interdependence, F(1, 135) = 14.1, p < .001; cultural participation, F(1, 135) = 6.2, p < .05; and academic autonomy, F(1, 135) = 25.8, p < .05 (see Table 3). [TABULAR DATA FOR TABLE 3 OMITTED]

Discussion

Pascarella and Terenzini (1991, p. 557) commented that "Students not only make statistically significant gains in factual knowledge and in a range of general cognitive and intellectual skills; they also change on a broad array of value, attitudinal, psychosocial, and moral dimensions." One variable that may negatively affect college students' psychosocial development is boredom proneness. Correlational analyses revealed significant negative relationships between boredom proneness and psychosocial development (see Table 1). In addition, after we controlled for the effects of gender and class standing, low boredom-prone students were found to possess significantly higher psychosocial development scores.
Our findings are consistent with the characterization of the boredom-prone person as one who (a) lacks motivation, goals, ambition, and a sense of meaning or purpose; (b) experiences varying degrees of negative affect, such as hopelessness, anxiety, depression, hostility, and loneliness; and (c) engages in maladaptive and unhealthy behaviors. Our findings revealed a significant gender difference with respect to boredom proneness. Specifically, the men had significantly higher boredom-proneness scores than the women on the external stimulation subscale, which assessed an individual's need for challenge, excitement, and variety. The finding of a gender difference with respect to a high need for external stimulation is congruent with past findings (Vodanovich & Kass, 1990a; Watt & Blanchard, 1994; Watt & Vodanovich, 1992a). That is, men appear to experience greater boredom than women in situations where a perceived lack of external stimulation exists. One possible reason for the reported gender difference is that men tend to make more stable and less complex attributions for their boredom than women (Polly, Vodanovich, Watt, & Blanchard, 1993). Unfortunately, little empirical attention has been focused on how best to alleviate boredom. Polly et al. (1993, p. 130) suggested that "long-term strategies that focus on increasing the ability to generate internal stimulation or activity may prove successful in reducing the likelihood of boredom." Watt and Blanchard (1994) also speculated that boredom proneness may be alleviated by the self-generation of information or by keeping oneself entertained. Presumably, individuals who are capable of providing their own stimulation are better able to escape the negative experience of boredom.
Indeed, Trunnell, White, Cederquist, and Braza (1996) reported using meditative techniques, such as mindfulness (being fully present and engaged in life as it is currently happening), to reduce boredom among college students engaging in educational outdoor experiences. Future researchers should continue to explore additional adaptive ways of reducing boredom, as well as examine whether such reductions may result in an increase in psychosocial development. Gender differences were also reported for psychosocial development. Specifically, the women had significantly higher scores on measures of educational involvement, lifestyle planning, instrumental autonomy, tolerance, salubrious lifestyle, and academic autonomy. Extant research does suggest that gender may influence the psychosocial development of students in higher education. Specifically, research on the effects of gender seems to indicate that there are different developmental patterns for women and men, especially concerning the issues of intimacy, identity, and autonomy. For women it appears that an increased emphasis on connectedness and relationships positively promotes their development of identity and autonomy (Gilligan, 1982; Straub, 1987; Straub & Rodgers, 1986; Thomas & Chickering, 1984). Men, in contrast, are believed to develop identity and autonomy through separation (Gilligan, 1982). College student development professionals study the psychosocial development of students in order to better understand their developmental tasks and stages. A more complete understanding of students' developmental tasks and stages and the variables that affect them is crucial in order to design and implement more effective programs and services, as well as to provide individual and group counseling services related to general and specific student developmental needs.

REFERENCES

Aldridge, M., & DeLucia, R. C. (1989). Boredom: The academic plague of first year students. Journal of the Freshman Year Experience, 1(2), 43-56.
Blaszczynski, A., McConaghy, N., & Frankova, A. (1990). Boredom proneness in pathological gambling. Psychological Reports, 67, 35-42. Branton, P. (1970). A field study of repetitive manual work in relation to accidents at the workplace. International Journal of Production Research, 8, 93-107. Chickering, A. W. (1969). Education and identity. San Francisco: Jossey-Bass. Chickering, A. W., & Reisser, L. (1993). Education and identity (2nd ed.). San Francisco: Jossey-Bass. Drory, A. (1982). Individual differences in boredom proneness and task effectiveness at work. Personnel Psychology, 35, 141-151. Farmer, R. F., & Sundberg, N. D. (1986). Boredom proneness: The development and correlates of a new scale. Journal of Personality Assessment, 50, 4-17. Freeman, J. (1993). Boredom, high ability and achievement. In V. P. Varma (Ed.), How and why children fail (pp. 29-40). London: Jessica Kingsley. Ganley, R. M. (1989). Emotion and eating in obesity: A review of the literature. International Journal of Eating Disorders, 8, 343-361. Gilligan, C. (1982). In a different voice: Psychological theory and women's development. Cambridge, MA: Harvard University Press. Gjesne, T. (1977). General satisfaction and boredom at school as a function of the pupil's personality characteristics. Scandinavian Journal of Educational Research, 21, 113-146. Hess, W. D., & Winston, R. B., Jr. (1995). Developmental task achievement and students' intentions to participate in developmental activities. Journal of College Student Development, 36, 314-321. Irving, B. A., & Parker-Jenkins, M. (1995). Tackling truancy: An examination of persistent non-attendance amongst disaffected school pupils and positive support strategies. Cambridge Journal of Education, 25, 225-235. Iso-Ahola, S. E., & Weissinger, E. (1990). Perceptions of boredom in leisure: Conceptualization, reliability, and validity of the leisure boredom scale. Journal of Leisure Research, 22, 1-17. James, A., Berelowitz, J. A., & Vereker, M.
(1996). Borderline personality disorder: A study in adolescence. European Child and Adolescent Psychiatry, 5(1), 11-17. Johnston, L. D., & O'Malley, P. M. (1986). Why do the nation's students use drugs and alcohol? Self-reported reasons from nine national surveys. Journal of Drug Issues, 16, 29-66. Larson, R. W., & Richards, M. H. (1991). Boredom in the middle school years: Blaming schools versus blaming students. American Journal of Education, 99, 418-443. Leong, F. T., & Schneller, G. R. (1993). Boredom proneness: Temperamental and cognitive components. Personality and Individual Differences, 14, 233-239. Maroldo, G. K. (1986). Shyness, boredom, and grade point average among college students. Psychological Reports, 59, 395-398. McLeod, C. R., & Vodanovich, S. J. (1991). The relationship between self-actualization and boredom proneness. Journal of Social Behavior and Personality, 6, 137-146. Mikulas, W. L., & Vodanovich, S. J. (1993). The essence of boredom. The Psychological Record, 43, 3-12. O'Hanlon, J. F. (1981). Boredom: Practical consequences and a theory. Acta Psychologica, 49, 53-82. Pascarella, E. T., & Terenzini, P. T. (1991). How college affects students: Findings and insights from twenty years of research. San Francisco: Jossey-Bass. Polly, L. M., Vodanovich, S. J., Watt, J. D., & Blanchard, M. J. (1993). The effects of attributional processes on boredom proneness. Journal of Social Behavior and Personality, 8, 123-132. Robinson, W. P. (1975). Boredom at school. British Journal of Educational Psychology, 45, 141-152. Sartoris, P. C., & Vanderwell, A. R. (1981). Student reasons for withdrawing from the University of Alberta: 1978-79. Canadian Counsellor, 15, 168-174. Straub, C. (1987). Women's development of autonomy and Chickering's theory. Journal of College Student Personnel, 28, 198-204. Straub, C., & Rodgers, R. F. (1986). An exploration of Chickering's theory and women's development. Journal of College Student Personnel, 27, 216-224.
Thomas, R., & Chickering, A. (1984). Education and identity revisited. Journal of College Student Personnel, 25, 392-399. Tidwell, R. (1988). Dropouts speak out: Qualitative data on early school departures. Adolescence, 23, 939-954. Tolor, A. (1989). Boredom as related to alienation, assertiveness, internal-external expectancy, and sleep patterns. Journal of Clinical Psychology, 45, 260-265. Trunnell, E. P., White, F., Cederquist, J., & Braza, J. (1996). Optimizing an outdoor experience for experiential learning by decreasing boredom through mindfulness training. Journal of Experiential Education, 19(1), 43-49. Vodanovich, S. J., & Kass, S. J. (1990a). Age and gender differences in boredom proneness. Journal of Social Behavior and Personality, 5, 297-307. Vodanovich, S. J., & Kass, S. J. (1990b). A factor analytic study of the Boredom Proneness Scale. Journal of Personality Assessment, 55, 115-123. Vodanovich, S. J., Verner, K. M., & Gilbride, T. V. (1991). Boredom proneness: Its relationship between positive and negative affect. Psychological Reports, 69, 1139-1146. Vodanovich, S. J., Watt, J. D., & Piotrowski, C. (1997). Boredom proneness in African-American college students: A factor analytic perspective. Education, 118(2), 229-236. Watt, J. D., & Blanchard, M. J. (1994). Boredom proneness and the need for cognition. Journal of Research in Personality, 28, 44-51. Watt, J. D., & Davis, F. E. (1991). The prevalence of boredom proneness and depression among profoundly deaf residential school adolescents. American Annals of the Deaf, 136, 409-413. Watt, J. D., & Ewing, J. E. (1996). Toward the development and validation of a measure of sexual boredom. The Journal of Sex Research, 33, 57-66. Watt, J. D., & Vodanovich, S. J. (1992a). An examination of race and gender differences in boredom proneness. Journal of Social Behavior and Personality, 7, 169-175. Watt, J. D., & Vodanovich, S. J. (1992b). Relationship between boredom proneness and impulsivity.
Psychological Reports, 70, 688-690. Weinstein, L., Xie, X., & Cleanthous, C. C. (1995). Purpose in life, boredom, and volunteerism in a group of retirees. Psychological Reports, 76, 482. Wink, P., & Donahue, K. (1997). The relation between two types of narcissism and boredom. Journal of Research in Personality, 31, 136-140. Winston, R. B., Jr. (1990). The Student Developmental Task and Lifestyle Inventory: An approach to measuring students' psychosocial development. Journal of College Student Development, 31, 108-120. Winston, R. B., Jr., & Miller, T. K. (1987). Student Developmental Task and Lifestyle Inventory manual. Athens, GA: Student Development Associates. Winston, R. B., Jr., Miller, T. K., & Prince, J. S. (1995). Student Developmental Task and Lifestyle Assessment (unpublished instrument). Athens: The University of Georgia. Zuckerman, M. (1979). Sensation seeking: Beyond the optimal level of arousal. Hillsdale, NJ: Erlbaum. From checker at panix.com Tue May 17 14:50:22 2005 From: checker at panix.com (Premise Checker) Date: Tue, 17 May 2005 10:50:22 -0400 (EDT) Subject: [Paleopsych] Acedia, Tristitia and Sloth: Early Christian Forerunners to Chronic Ennui. Ian Irvine. Message-ID: Acedia, Tristitia and Sloth: Early Christian Forerunners to Chronic Ennui. Ian Irvine. Humanitas, Spring 1999 v12 i1 p89 This article focuses on the relevance of early Christian writings on acedia and tristitia to the primary modern and postmodern maladies of the subject, i.e., chronic ennui, alienation, estrangement, disenchantment, angst, neurosis, etc. The focus will be on the 'chronic ennui cycle' which has been extensively discussed by Steiner (1971), Bouchez (1973), Kuhn (1976), Healy (1984), Klapp (1986) and Spacks (1995). [1] It can be described as a cycle of boredom and addiction which robs individuals of meaning and a sense of the élan vital. This cycle has undergone various mutations of form over the centuries.
Many of the writers mentioned above have plotted its course of development from classical times to the present. Such discussions begin with the descriptions of taedium vitae, luxuria and the horror loci supplied by Roman philosophers and writers such as Lucretius, Petronius and Seneca. They also encompass analyses of the spiritual illnesses of acedia and tristitia written by the Desert Fathers and of the various emotional and medical conditions described by Medieval and Early Modern poets and medical professionals, e.g., saturnine melancholy, spleen, fits of the mothers, and 'The English Malady.'

Chronic ennui an obsession of romantic and realist writers.

Due largely to the immense sociocultural changes that struck Europe in the nineteenth century the problem of chronic ennui (sometimes termed 'the spleen,' hypp, languer, nerves and disenchantment) inevitably became a major theme (if not obsession) for romantic and realist poets and thinkers. By the late nineteenth century it became tangled up with the concept of 'degeneration' and also with the fin de siecle phenomenon. By that stage it signified a particular kind of subjective suffering brought about by prolonged exposure to certain types of social institutions and sociocultural stresses. In short, chronic ennui was associated with the costs to the subject of urbanisation, bureaucratisation and the industrial revolution. In a sense, then, the concept was used to illustrate the dark side of modernity. The decadents and later modernist poets, writers, artists, culture critics and philosophers made use of it in discussing alienation, reification, absurdity, aboulie, anomie, desacralisation, angst, bad faith, neurosis, character armouring and so on. As said, this essay will consider the contributions of the Early Christian Fathers to modern conceptions of 'chronic boredom,' with particular attention to the problem of the 'ennui cycle.'
Some Modern Descriptions of the Ennui Cycle

From the beginning of the eighteenth century the French idea of 'chronic ennui' signified a particular kind of subjective suffering. At the deepest level the idea signified a cycle of subjective discontent, a cycle that--at least at the level of symptoms--progressed invariably through three distinct phases. The first stage was one of anxious boredom, of nameless objectless anxiety, which was accompanied by fantasies of release from that anxiety. This mood, in due course, gave way to a second stage characterised by bursts of frantic activity designed to defeat or flee from the inner feelings of discontent characteristic of the previous stage. This activity had as its goal the denial of the previous feelings by immersion in various more or less repetitive (sometimes absurd) habits. This flurry of activity gave way to a third stage of psychospiritual numbness which allowed a person to feel temporarily free from the anxieties and impulsive acting out typical of the previous periods. We may see this third stage as a state of non-being similar to that experienced by the heroin or smack addict, the sex addict, the gambler, the food addict, or the drugged patient in a psychiatric ward. [2] This cycle need not be particularly spectacular. The ritualistic activities of the second stage, for example, may revolve around hundreds of routine actions, activities, sayings (rationalisations), and thoughts which in combination act to keep the subject fundamentally disconnected from more wholesome experiences of selfhood.

Consistent symptoms.

We may list the various specific symptoms attached to the ennui cycle. Although such symptoms are experienced differently by different people--i.e., according to gender, race, class, age, and so forth--the core description of the malaise nevertheless seems to reveal a certain degree of consistency across social positionings and, as we shall see with the writings of the Desert Fathers, across time.
The core symptoms are:

1. States/feelings of subjective worthlessness and meaninglessness.

2. Feelings/intimations that the subject is missing out on life. The feeling also that time is a burden and that one is old before one's time.

3. States of being periodically possessed by certain malign impulses/forces over which one has little or no effective control.

4. Feelings that the subject is estranged from/divided within/dispossessed of his/her 'healthy self'--that is, a feeling that the way one acts or experiences oneself in the world seems to be merely an act, or worse, an act that is destructive in that it leads to a narrowing of life possibilities.

5. Feelings of revulsion toward, or obsessive fascination with, one's own body and bodily functions or with the bodies and bodily functions of others. (Various social and cultural commentators on modernism, e.g., Ihab Hassan, have described a particular state of ambivalence toward the realm of the feminine, the female body and the specifically female biological functions.)

6. Impulses to act violently or maliciously towards others, towards one's self or towards the world in general. These may be extreme or petty--indeed, pettiness as manifested in moods of jealousy, envy, backbiting, greed, etc., is a feature of the ennui cycle and is connected to the nineteenth-century critique of bourgeois culture in general.

7. A sense that 'objects' out there in the world resonate in the consciousness of subjects as though they are malign and have special powers over human moods, desires, and impulses, and over a subject's fate or destiny.

8. The loss of an animated, enchanted state of identification with the world/cosmos/nature, with others in society, and with one's own needs and desires. Many nineteenth-century poets and thinkers described this stage as the loss of 'vision' or as the loss of the communal religious experience.

9. Physical feelings--long-lasting in nature--of being burdened, weighed down, exhausted, by the normal activities/interactions of everyday existence.

Where persons blame others for this state of being or give themselves wholly over to flight from self, writers as diverse as Kierkegaard, Sartre, Schopenhauer, Camus, and George Steiner have spoken of 'normative,' 'active,' or sometimes 'bourgeois' ennui. Those who are to some extent aware of their malaise are often deemed to be afflicted with 'creative boredom/ennui' or 'spiritual ennui.' Since the nineteenth century this form of l'ennui morbide has been characterised by an additional symptom:

10. The feeling or intuition that society and its institutions are in some way connected to, or nurturant of, the particular experience of ennui suffering felt by a given subject--that perhaps the norms of society are in some way 'generative' of the malady.

The artists and theorists who have expressed this intuition link the phenomenon of subjective ennui to the great economic, technological, social, political and religious changes that shook Europe in the early modern period, e.g., secularisation, urbanisation, industrialisation, the rise of the bourgeoisie, bureaucratisation, the political revolutions of the period, the scientific revolution, etc. From George Cheyne (The English Malady, 1733) onwards, symptoms associated with 'subjective ennui' have been linked to various kinds of sociocultural phenomena. [3]

"Great Ennui" characteristic of post-traditional society.

The connections of ennui with many other post-Enlightenment (usually secular) concepts describing subjective disintegration, melancholia and psychic torment are many. It is no understatement to suggest that variations on this relatively simple subjective cycle of consciousness were at the core of many of the great nineteenth- and early twentieth-century critiques of modernity.
In this sense 'ennui' in conjunction with other words has long had the potential to launch a full-scale critique of Western civilisation. The perils facing the subject raised on modernity may equally be the perils of the collective. George Steiner, for example, speaks of 'The Great Ennui' as a defining characteristic of post-traditional Western society in general. He sees it as a central motivating force behind the many calamities of the twentieth century, notably two world wars, the ecological crisis, the technocratic tendencies of modern social structures, anti-Semitism and other forms of minority scapegoating, and, finally, the advent of the atomic age.

Acedia: Forerunner to Chronic Ennui

Judging by the dearth of primary sources the problem of chronic ennui (then termed taedium vitae) was not a major issue for classical writers and poets; other themes far and away predominated. Among the Ancient Greeks the problem was virtually unheard of and it is only in the early decades and centuries of the first millennium that the problem is mentioned with any degree of alarm among Roman intellectuals. Likewise, what is reported is nothing like the mood of chronic ennui as described by those who would follow. [4] It was only in the late Roman period that the malaise of chronic ennui began to assert a major and continuous pull on the imaginations of the literate--thanks mainly to the writings of the Desert Fathers of Christendom. [5] Whilst the Desert Fathers were developing specifically Christian perspectives on humanity's psychospiritual relationship to God, self, society and the cosmos, they were also writing about a new way of looking at psychospiritual suffering. Their writings formed the foundations of Christianity's understanding of chronic ennui, foundations which stayed firm for almost one thousand years.
[6] Whether Christianity itself was the cause of the malady or whether it merely provided the most thorough diagnosis of chronic ennui for the age is open to debate, [7] but what is certain is that during the fourth century A.D. the classical conceptions of taedium vitae underwent certain crucial developments. The horror loci and the various vices of diversion that the Romans had associated with these two states of being were incorporated into a fundamentally Christian view of the soul-affirming and soul-destroying passions. The developments led many people to invent new terms to describe what they and others were feeling. In this sense, various modern commentators have noted that modern discussions of chronic ennui owe much to earlier religious discussions of the temptation of acedia. [8] Of the many terms used at that time to describe states of consciousness similar to chronic ennui--some of the most well known being tristitia, siccitas, desidia and pigritia (sloth)--the word acedia came to predominate. [9] It now seems likely that some of the Desert Fathers associated acedia with the dreaded 'noonday demon' of Psalm 90:6. [10] Indeed the hour of noon seems to have been a particularly dangerous time for the solitary monks since, when the noontide demon arrived, he often brought with him a whole host of additional temptations (viewed as combinations of demons and evil thoughts, λογισμοί) which could assuage the monk's feelings of chronic boredom and make him abandon the coenobitic life forever. [11]

Acedia, or sloth, among the worst temptations.

The concept of acedia thus denoted both a 'movement of the soul' and a specific 'evil spirit.' In this sense it must be understood in relation to dualistic conceptions of humanity's place in the cosmos current in the late Roman period.
It is now known that the Desert Fathers drew on dualistic tendencies inherent in Iranian, Hellenistic, Stoic, Gnostic and Judaic worldviews to formulate the so-called 'demonological' view of the capital sins or temptations. The demonological system held that human actions in the world were influenced by both good and bad angels or spirits. The bad spirits were believed to be under the control of Satan and the good were said to be under God's control. The bad spirits skewed the innate passions (the πάθη) in non-life-affirming directions. They did this by inciting evil thoughts or passions (λογισμοί). Evil thoughts, according to Evagrius, [12] could become attached in consciousness to remembered or desired objects which thus became invested with destructive emotional energy, e.g., gold could become attached to a greedy state of mind. Such evil thoughts may eventually gain control of the rational mind at which point non-life-affirming deeds might result. Every capital sin, temptation, or evil thought was attached to a specific demon or evil spirit. The Desert Fathers sought peace (ἡσυχία) from the incessant war between sin and virtue by trying to make the passions subservient to the rational intellect. This state was known technically as ἀπάθεια, which meant 'to be at one with God.' To many of the Desert Fathers acedia was one of the worst temptations (demons) because it tried to make the monk give up the religious life completely. It was thus one of the major hurdles to controlling the passions and thus to the monk's salvation and desired union with God. Of the many descriptions of coenobitic acedia [13] perhaps the best summary of its early medieval characteristics is to be found in Cassian's De Institutis Coenobiorum (Foundations of Coenobitic Life) where acedia figures as the sixth of the eight major temptations.
[14] In this work the older classical descriptions of taedium vitae, melancholy and black gall are clearly reworked to fit into a specifically Christian framework. Various symptoms are discussed, many of which replicate classical symptoms of chronic ennui, e.g., horror loci, inexplicable sadness, addiction to objects (luxuria) and a certain desire to do anything rather than confront the negative emotional forces (temptations) that were trying to possess one's being: Our sixth contending is with that which the Greeks call 'a-kedia' (from a-, 'not'; kedos, 'care') and which we may describe as tedium or perturbation of heart. It is akin to dejection [tristitia], and especially felt by wandering monks and solitaries, a persistent and obnoxious enemy to such as dwell in the desert, disturbing monks especially about midday, like a fever mounting at a regular time, and bringing its highest tide of inflammation at definite accustomed hours to the sick soul.... When this besieges the unhappy mind, it begets aversion from the place, boredom with one's cell, and scorn and contempt for one's brethren, whether they be dwelling with one or some way off, as careless and unspiritually minded persons. Also, towards any work that may be done within the enclosure of our own lair, we become listless and inert. It will not suffer us to stay in our cell, or to attend to our reading: we lament that in all this while, living in the same spot, we have made no progress, we sigh and complain that bereft of sympathetic fellowship we have no spiritual fruit; and bewail ourselves as empty of all spiritual profit...and we that could guide others...have edified no man, enriched no man with our precept and example. 
We praise other and far distant monasteries, describing them as more helpful to one's progress, more congenial to one's soul's health....Finally we conclude that there is no health for us so long as we stay in this place, short of abandoning the cell wherein to tarry further will be only to perish with it, and betaking ourselves elsewhere as quickly as possible. Towards eleven o'clock or midday, it induces such lassitude of body and craving for food as one might feel after...hard toil. Finally one gazes anxiously here and there, and sighs that no brother of any description is to be seen approaching: one is for ever in and out of one's cell, gazing at the sun as though it were tarrying to its setting: one's mind is in an irrational confusion ...one is slothful and vacant in every spiritual activity, and no remedy, it seems, can be found for this state of siege than a visit from some brother, or the solace of sleep. Finally our malady suggests that in common courtesy one should salute the brethren, and visit the sick, near and far. It dictates such offices of duty and piety as to seek out this relative or that...far better to bestow one's pious labour upon these than sit without benefit, or profit in one's cell. [15] This demon is said to work "hand in hand with the fifth demon 'Dejection,'" which also has much in common, at the symptom level, with various forms of malevolent boredom. We also note similarities between the state described and modern forms of depression.

Need to confront one's demons.

The modern secular mind might hastily jump to the conclusion that the monks were bored for good reason and that the desire to flee their cells was merely a natural response to the absurdities of an ascetic lifestyle. Such a conclusion does not account for the fact that these states of consciousness were, in a sense, courted by the monks.
The demons had to be brought out of hiding, so to speak, before one could truly experience [LANGUAGE NOT REPRODUCIBLE IN ASCII] (spiritual oneness with God by control of the passions by the rational mind). The monks very probably chose such inhospitable surroundings and an arduous lifestyle as a means to bring on a state of spiritual catharsis. They perhaps sought to improve themselves by a series of confrontations with aspects of their lives which normal living kept submerged. In this sense the goals of their practices could be seen (with certain reservations) as creative and thus in opposition to normative forms of ennui. The very fact that the fathers spent so much time and energy trying to sort out the destructive passions from the 'angelic' ones might suggest that acute forms of normative ennui existed outside religious circles and that perhaps some people opted for the ascetic life as a means of overcoming such addictive and ultimately destructive states of being. Cassian's other work, Collationes, or Conversations (425 A.D.), also deals with acedia as experienced by the Desert Fathers. Particularly relevant is the interview with Father Daniel, one of the Desert Fathers. What emerges from the conversation is the idea that chronic boredom is something through which one passes in order to experience new heights of spiritual oneness with God. Something like the 'creative,' solitary ennui of later artists, poets and writers, and the spiritual ennui of shamans, priests/priestesses, and mystics of many traditions seems to be the goal. Cassian has one of the monks relate the following experience: We feel overwhelmed, crushed by dejection [tristitia] for which we can find no motif. The very source of mystic experiences is dried up ... the train of thought becomes lost, inconstant and bewildered .... We complain, we try to remind our spirit of its original goals. But in vain. Sterility of the soul! 
And neither the longing for Heaven nor the fear of Hell are capable of shaking our lethargy. [16] Pope Gregory I (The Great) wrote his Morals on the Book of Job some 150 years after Cassian's Foundations and Collationes. In the process, he added the final touches to the medieval idea of acedia. He reduced the number of capital sins from eight to seven, merging tristitia and acedia into one sin. He also helped universalise the ideas of the Desert Fathers since he wrote for a larger audience. As a result, all Christendom came to believe that the Capital Sins were central to the Christian moral system. One of those sins was the temptation of acedia which would later take the title of sloth. [17] In the process of its universalisation the solitary, sometimes excruciating and cathartic, confrontation between the self and the Demon of Noontide gradually lost its emphasis as a mode of attaining salvation. Despite centuries of pronounced theological debate there were few alterations to the concept of acedia from this period on. [18] The Efficacy of Demonological Approaches to Acedia In surveying the numerous early medieval texts concerned with acedia it seems clear that it was an early form of morbid ennui. On this I agree with Kuhn's conclusions. [19] Many of the symptoms described as indicative of acedia (e.g., horror loci; tedium vitae; chronic depression; unwarranted sadness; crippling lethargy; lack of joy; lack of at-peaceness with the universe; addiction to activities, objects, or states of mind which give no true fulfilment; the constant desire to flee from ascending states of deep anxiety by resort to such addictions; and, lack of mystic vision or imagination [20]) certainly carried over into later descriptions of the various forms of chronic boredom. 
The spiritual techniques (fasting, prayer and solitude) by which the demon of acedia was made manifest, and the internal conflicts which the Church Fathers experienced and described in their works, seem to have much in common with shamanic and religious practices endemic to many other world traditions. (Traditional shamans, for instance, routinely confronted various demons, devils, and spirits of discord and decay. [21]) In more recent times many artists and poets have described their sufferings from chronic ennui in terms of exorcistic, cathartic and mystical imagery similar to that used by the Desert Fathers. One usually confronts such destructive psychospiritual forces for the purpose of purifying the self and thus of protecting the community from the harms that could result from the passions turned noxious. One faces the abyss, pursues the via negativa, in order to become spiritually and emotionally whole. [22] The states of chronic ennui associated with such a confrontation are far removed from the forms of the malaise that currently afflict Western civilisation--what I have labeled 'normative ennui.' So prevalent is the malady of 'normative ennui' in the West today that people who see value in contemplation, in spiritual disciplines designed to improve themselves emotionally, are derided and even stigmatised as lazy and non-productive. It is quite possible that the early Church Fathers were confronting a malaise nurtured by the great urban centres of the age. They were, perhaps, taking on a disease nurtured by empire--by urbanisation and bureaucratisation. Such a reading would suggest that they were taking on powers that would one day station themselves at the very centre of Western civilisation. If so, the showdown between the Demon of Noontide and the monks of the new religion in the arid wastelands of the African deserts is one of the most neglected psychospiritual events of Western history. 
We are led to perhaps the most important question: did medieval Christianity nurture or counteract acedia? The question is important for any modern assessment of chronic ennui, especially since George Steiner has recently suggested that this secular version of acedia is still, essentially, a religious problem. [23] My own position on these early Christian commentaries concerning the causes and cures for the problem is ambivalent. It could be argued that in the shift from the often solitary confrontation with acedia in the inhospitable deserts of Northern Africa to the universalised confrontation with it characteristic of the later medieval period (diagnosed for all Christians under the title of sloth) there was lost that element of catharsis which has been encouraged by many traditional peoples to treat states of psychospiritual disintegration. In my opinion, the switch represented the end of serious attempts by mainstream, institutional Christianity to tackle the ennui/acedia malaise. From about 1200 on the mystics, poets, and artists of the West took up the struggle instead. From an ethical perspective the confrontations with ennui experienced by the early Church Fathers were conditioned by a specific kind of religious belief system, a system that often confused noxious passions with perfectly healthy ones and which privileged masculine forms of the rational intellect over the body (particularly the functions of the female body). In this sense some of the acedia experienced by the monks must be attributed to their attempts to live basically 'passionless lives,' i.e., to reify their experience of the lifeworld and live instead in the world of reason and the spirit. Consequently, they may have treated one form of chronic ennui (normative ennui) with a religious form of the same malaise. This may be part of the reason that Christianity has been essentially unable to defeat the joylessness associated with the various forms of the ailment. 
It is a tendency, I believe, peculiar to the moral systems developed by the monotheistic religions. Yet the monks were fine psychologists, and it is clear that on many occasions they saw acedia and the other capital temptations as distortions of otherwise life-affirming passions. If there is one essential lesson to be learnt from their writings it is that we should respect the power and subtlety of the malaise they confronted in their own beings. The Demon of Noontide was no pushover: it could erode the will, afflict the body and darken a person's emotional terrain with depression, madness and thoughts of suicide. It could also offer escape from cathartic confrontation with self by insinuating into consciousness all manner of inauthentic desires, pseudo-needs, and fantasies of relief which, though serving to ward off the worst excesses of acedia, could ultimately lead only to spiritual ruin. In this sense the courage of the Desert Fathers as they attempted to name and confront this subtle and destructive psychospiritual entity surely deserves our admiration if not our awe. For this reason alone the writings of the Desert Fathers must surely contribute something to our attempts to understand modern forms of chronic ennui. If Rome can be seen as the origin of the concept of 'normative ennui,' then the deserts of northern Africa may be seen as the site of the first major creative and spiritual response to that malaise.

Postscript: Postmodernist Ennui

Early Christian descriptions of acedia and related vices did not view 'society' as the cause of the subjective suffering described. This is in sharp contrast to most modern descriptions of acedia's progeny terms, e.g., chronic ennui, anomie, and alienation. Romantic, modernist, and postmodern uses of these words invariably encompass the idea that something is wrong with the link between the self and the 'other' of society. In this sense such concepts often represent an implicit critique of modernity. 
The depression, languor, and melancholy that characterised nineteenth-century ennui contradicted the great Enlightenment bourgeois ideals of progress, competition, scientific and technological advancement, and social evolution in general: ennui played Gollum to the sturdy hobbit of liberalism. Understanding the enormous implications of our current distrust of the social apparatus may allow us to better define and illustrate the immensity of the new crisis of the subject currently afflicting Western societies. In this new phase chronic ennui has become associated with schizophrenic, depressive, narcissistic and psychopathic symptoms and with what Bouchez (1973) terms the 'derealisation' of subjective life characteristic of the late twentieth century. How relevant then are terms like taedium vitae, acedia, tristitia, siccitas, saturnine melancholy, 'The English Malady,' l'ennui morbide, etc., to discussions of the postmodern maladies of the subject? I would argue that 'chronic ennui' is more virulent than ever in the postmodernist phase of our society (though different in character from earlier outbreaks). It can be argued that postmodernist ennui represents a specific disintegrative response to the particular social formations characteristic of advanced capitalism and advanced statism in general. Destructive norms. It is my argument that the fragmentation of the subject occasioned by the new phase of modernity (sometimes called 'high modernity' or 'postindustrialism') lies on a continuum with, but is qualitatively different from, earlier states of subjective suffering. The web of society and culture that is supposed to help sustain people's material and psychospiritual needs is perhaps more toxic than ever to the real needs of subjects. In this new phase the norms of the social web bespeak an advanced state of normative schizophrenia and psychopathology. 
If we wish to find a way out of the soul-destroying routines of the postmodern ennui cycle with its consumeristic addictions (see Perec's Things), its narcissism and love of empty spectacle, its insane hunger for more and more objects to fill up the void of a life without meaning, I would suggest that we reassess the long tradition of writings on the maladies of the subject. We might ask ourselves where exactly the Desert Fathers went wrong, and just as importantly, where they were on the right track. Like George Steiner in his work on the 'Great Ennui' in In Bluebeard's Castle (1971), we might admit that our current maladies of the self (and their social manifestations) are deeply related to the more general problems of the history of human spirituality. If this is the case, and I believe it is, then the battle between the Desert Fathers and the Demon of Noontide is of the utmost significance for modern discussions of the history of subjectivity. Ian Irvine, Coeditor of The Animist: Electronic Journal of the Arts, lectures in Medieval Studies at La Trobe University, Bendigo, Australia. (1.) George Steiner, In Bluebeard's Castle (1971); Madeleine Bouchez, L'Ennui (1973); Reinhard Kuhn, The Demon of Noontide: Ennui in Western Literature (1976); Sean Desmond Healy, Boredom, Self and Culture (1984); Orrin Klapp, Overload and Boredom (1986); and Patricia Spacks, Boredom (1995). (2.) Such comparisons are more than coincidental. The discovery of the endogenous opiates in the mid 1970s highlighted the fact that many people in having recourse to various substances and activities do so in order to 'self-inject' themselves with various internally manufactured opiates. It is now known that a large proportion of the human psychobiological system is geared to pain management--physical and psychological. In due course this third stage returned the person to the suffering of the first stage. (3.) 
Reinhard Kuhn (1976) in following this line suggests that ennui must be seen as the major subjective psychospiritual malady that affected individuals in the early phases of modernity. (4.) See Kuhn (1976, 36 and 39) and Healy (1984, 16). (5.) In particular the following figures and texts are seminal: Evagrius of Pontus's (b. 345) Of the Eight Capital Sins, St. John Chrysosthomos's Exhortations to Stagirius, Nilus's Treatise on the Eight Evil Spirits, and Johanis Cassian's The Foundations of Coenobitic Life and the Eight Capital Sins and Collationes (or Conversations). (6.) Bloomfield's The Seven Deadly Sins (1952) is still one of the best discussions of the moral system behind medieval Christianity. The best overview of chronic ennui's kindred term acedia as it figured in theological and religious texts during the medieval period can be found in Wenzel's The Sin of Sloth: Acedia in Medieval Thought and Literature (1967). For in-depth discussion of the actual relationship between acedia and modern forms of ennui see Kuhn (1976, Ch. 3). Kuhn, like many commentators, sees the concept of acedia as a medieval subspecies of chronic ennui. In the same context see Healy's comments on acedia: 'With the development of Christianity into a religion of the people at large, the vice (of acedia) went through immense complexities of definition and attribution as it changed from being an exclusively eremitic affliction, an occupational hazard as it were, into a weakness capable of besetting any Christian' (1984, 17). (7.) Of those who have dealt with the historical questions raised by chronic ennui in general, and normative ennui in particular, most have tended to ignore the earliest outbreaks of the malady and have instead concentrated on the historical and social forces that contributed to the great epidemic of chronic ennui that struck Europe during the onset of modernity. 
Only a few writers, in particular Kuhn (1976, 41-42), have approached the important question of just why chronic ennui's ancestor malady 'acedia' took such a grip on the early Christian imagination. Kuhn cites as reasons the rigorous spiritual lives experienced by the Desert Fathers, the fact that states of normality seemed rather boring in comparison to the mystical heights to which the monks attempted to soar, and the actual arid surroundings in which the monks lived. Such reasoning does not account for the fact that acedia became a base for--in Kuhn's words--the 'secularisation' and 'universalisation' of the ennui malaise in the later medieval period. Nor does it solve for us the question of whether Christianity as a cultural phenomenon could be blamed for the later explosion of the malady or whether we should look elsewhere for the causes, e.g., to economic and social factors or to other cultural factors. (8.) Flaubert, Kierkegaard, Baudelaire, Racine, Balzac, Sainte-Beuve, Maurice Barres, Marcelle Tinayre and Paul Bourget have all written on or dramatised the connection between acedia and forms of chronic ennui. Kuhn (1976, 42 and 55) discusses the many post-Enlightenment writers who had pointed to the connection. Healy (1984, 16-18) also speaks of the historical dimensions to this connection. See also Clive's comments (1965, 359). He says chronic ennui is "the acedia of the twentieth century--the experience of being 'condemned to freedom' in a world seemingly devoid of objective values." (9.) See Appendix One 'Etymology of Acedia, Ennui, Spleen and Boredom,' for a discussion of the point where acedia took on cultural meanings similar to that of the modern day term chronic ennui. (10.) See for example Cassian's comments (ed. Waddell, 1974, 229) in De Institutis Coenobiorum (Foundations of Coenobitic Life), 425 A.D.: Our sixth contending is with that which the Greeks call, and which we may describe as tedium or perturbation of heart.... 
[S]ome of the Fathers declare it to be the Demon of Noontide which is spoken of in the xcth Psalm. (11.) Caillois (1937, 54-83 and 143-86) relates the Demon of Noontide to les demons de midi, i.e. various classical spirits (mainly female) of mischief and temptation who made their presence felt around midday--e.g., sirens, nymphs, harpies, nereids, etc. (12.) Evagrius, 'Texts on Discrimination in Respect of Passions and Thoughts' (1983, 38-52). (13.) See the Checklist of Authors for comments on the theme of acedia/ennui as it relates to the works of Nilus, St. John Chrysosthomos, St. Jerome and Evagrius of Pontus. (14.) For comments on the role of acedia as the 'Demon of Noontide' in The Foundations of Coenobitic Life and The Eight Capital Sins see Wenzel (1967, Chapters 1 and 2). See also Rivers (1955, 293) and Revers (1949), esp. Chapter 1 'Die Acedia bei Johannes Cassianus.' (15.) Cassian (ed. Waddell, 1946, 229-231). (16.) Johanis Cassian, Collationes [or Conversations] (425 A.D.) [IV, 2]. (17.) See Kuhn (1976, 54-55) for comments regarding the contribution of Gregory The Great and his work Morals On the Book of Job to medieval and modern conceptions of acedia and chronic ennui. (18.) Readers who wish to take up all aspects of the theological debate over acedia as manifested in Scholasticism and later medieval monasticism are directed to Wenzel (1967, esp. Chapter 3). (19.) Kuhn (1976, 53-64) makes a strong case for the similarities between acedia and modern forms of chronic ennui. (20.) Though chronic ennui is rarely these days associated with specifically Christian perceptions of the absence of joy or inspiration, the idea that human beings have lost touch with 'spiritual powers,' however vaguely imagined, remains. (21.) Roccatagliata (1986, 4) argues that the exorcism of demons by resort to solitude, fasting, drugs, intense prayer chanting/singing, and dance has long been central to 'demonological' approaches to mental illness. 
He says that at the time the church fathers were writing, mythological, animistic, biological and humoral approaches to 'disturbances of the soul' were more or less in decline in favour of the Christian demonological system. According to Roccatagliata (p. 14), the Church Fathers and the church Apologists 'unified animistic and sacred outlooks, as well as the mystical ideologies led by Orpheus, Pythagoras, and the philosophies of Plato and the Stoics' in order to create their new approach. It is thus likely that some of the Church Fathers saw themselves as what we would term 'therapists' in relation to both the major psychological disturbances of the age and the more existential disturbances of the soul experienced by 'normal' people. Both types of unease merge in the concept of acedia, both had a spiritual solution: exorcism of the evil spirits in the name of the Christian deity. Such a reading of the struggles of the Desert Fathers would suggest that their ennui was more similar to what I have called 'creative ennui' than to the other major ennui categories, i.e., 'dysfunctional' or 'normative' ennui. (22.) Kuhn (1976, 45) also points to the cathartic element in the practices of the early Church Fathers. In particular, he speaks of the relationship between acedia and the via negativa: "...acedia is almost a precondition for a life of eternal bliss...it is the 'noche oscura del alma' that lay between Saint John of the Cross and divine grace.... The via negativa that passes through acedia is a road fraught with hazards and with promises. It represents 'a dangerous proving ground through which the soul can purify itself and sometimes it serves as a prelude to the joys and beatitude of ecstasy."' (23.) See my analysis of In Bluebeard's Castle in my Ph.D. thesis entitled Uncomfortably Numb: The Emergence of the Normative Ennui Cycle (1998, chapter 8). 
From checker at panix.com Tue May 17 14:50:36 2005 From: checker at panix.com (Premise Checker) Date: Tue, 17 May 2005 10:50:36 -0400 (EDT) Subject: [Paleopsych] "Why are you bored?": an examination of psychological and social control causes of boredom among adolescents. Linda L. Caldwell; Nancy Darling; Laura L. Payne; Bonnie Dowdy. Message-ID: "Why are you bored?": an examination of psychological and social control causes of boredom among adolescents. Linda L. Caldwell; Nancy Darling; Laura L. Payne; Bonnie Dowdy. Journal of Leisure Research, Spring 1999 v31 i2 p103(1)

The purpose of this study was to better understand the causes of boredom using psychologically based and social control models of boredom. For this study, 82 8th grade students completed two questionnaires and a face-to-face interview, and kept a four-day activity diary over a two-week period. Hierarchical linear modeling (HLM) was used to assess the extent to which adolescents' level of boredom differed depending upon their reason for participating in the activity and on the individual characteristics they brought to the situation. Both psychological and social control variables helped to explain boredom. The results are discussed from a developmental and practical perspective.

----------------

Introduction

Research on boredom has spanned decades and has been approached from a variety of philosophical, sociological, and psychological perspectives. During this time, discussion in the literature has addressed causes and consequences of boredom. The only apparent consensus is that boredom is a complex phenomenon. Understanding boredom during adolescence is even more challenging because boredom is compounded by concomitant developmental processes. These developmental issues, such as autonomy development, changing cognitive abilities, evolving relationships with parents, and the liminal quality of behavioral demands, make boredom particularly salient for youth. 
In addition, the amount of free time available to adolescents and the increasing control they have over this time compared to their childhood years suggests free time may provide a new challenge to adolescents as they take on increasing responsibilities for structuring their own time. The increasing focus on boredom during adolescence is, in part, due to the fact that boredom has been linked with a number of problem behaviors such as alcohol and drug abuse (Iso-Ahola & Crowley, 1991; Orcutt, 1985), higher rates of dropping out of school (Farrell, Peguero, Lindsey, & White, 1988) and vandalism (Caldwell & Smith, 1995). Clearly, none of these behaviors are developmentally or societally productive. Thus, the general purpose of this study was to better understand the phenomenon of adolescent boredom in free time.

Theories of Boredom

Existing research provides us with an understanding of the associated outcomes of boredom in free time, but the body of knowledge is less clear regarding the causes of boredom. Two major perspectives help us understand the causes of boredom: psychological theories and social control theories. These theories are discussed in the following section, and where appropriate, developmental considerations are addressed. Psychological explanations suggest that boredom stems from (a) a lack of awareness of stimulating things to do in leisure (Iso-Ahola & Weissinger, 1987); (b) a lack of intrinsic motivation, and in particular self-determination, to act on the desire to alleviate boredom (Iso-Ahola & Weissinger, 1987; Weissinger, Caldwell, & Bandalos, 1992); and (c) a mismatch between one's skill and the challenge at hand (e.g., Csikszentmihalyi, 1990). The latter is also known as the understimulation model of boredom (e.g., Larson & Richards, 1991). Cognitive psychology suggests that adolescents are maturing in many ways that might influence perceptions of boredom. 
As adolescents grow older, they mature in their capacity to temper or regulate their interactions with their circumstances (Elliott & Feldman, 1990). At lower maturation levels, once boredom is perceived, adolescents might lack the ability to (a) identify changes that could be made and/or (b) perceive ways in which they could act on the desired change. In addition, the speed, efficiency, and capacity of basic cognitive processes change (Keating, 1990) which might contribute to being understimulated, and thus bored. For example, some tasks may seem to be repetitive as cognitive abilities mature, thus producing feelings of boredom. Psychologically based theories, however, have been based on adult populations and have not addressed the specific developmental tasks of adolescence. The developmental process of autonomy development (Steinberg, 1990), for example, suggests that boredom may be a response of resistance to external control, such as the influence of parents or other adults (Larson & Richards, 1991). This type of boredom might occur in situations when an adolescent is unable to exercise autonomy and at the same time is unable to physically leave the situation; in this case the adolescent may disengage psychologically through the experience of boredom (Eccles et al., 1993). Social control and resistance theories of boredom imply that boredom becomes a standard means of communication that turns into a routine aspect of the adolescent culture. This perspective suggests that free time activities that are structured by the dominant adult culture might be likely to produce boredom in adolescents because they interfere with the normative developmental impetus towards autonomy (Shaw, Caldwell, & Kleiber, 1995). For example, Larson and Richards (1991) stated that ". . . the frequent occurrence of boredom in adolescence is a product of subcultural (or personal) resistance to adult and school authority. . ." (p. 422). 
Related to the social control perspective, the forced-effort theory of boredom (Larson & Richards, 1991; O'Hanlon, 1981) indicates that boredom occurs when individuals are forced to expend cognitive energy and effort on tasks construed as homogeneous. For adolescents, this boredom response might occur when parents, teachers, or coaches obligate routine practice activities. In this case, participation is extrinsically motivated either by social pressure or by their instrumental role in the attainment of intrinsically motivated goals. Both the forced-effort and social control/resistance theories of boredom play a role in adolescent boredom in a school context (Larson & Richards, 1991). Thus, extracurricular activities may offer opportunities for adolescents to engage in compelling leisure experiences. Obligatory activities, however, may undermine the potential for adolescents to exercise autonomy and increase the likelihood that adolescents experience boredom within these settings. As adolescents are in transition from a position of dependence on parents to one of increased freedom (i.e., autonomy), the negotiation and balance of decision-making power is often problematic (Steinberg, 1990). Steinberg (1990) suggested that until a mutually comfortable position between parent (or by extension, other adults) and adolescent is achieved, tensions are likely. Thus if an adolescent perceives too much control of his or her actions by parents, social control theory suggests that boredom is a typical response. This study used psychologically based and social control models to extend our understanding of adolescent boredom in leisure. The study has two levels of analysis, individual difference and situational. Table 1 summarizes the theories of boredom, related variables in this study, and corresponding hypotheses. At the individual difference level, we examined two variables that reflect differences in responses to boredom across situations (i.e., leisure experiences). 
The first variable, parental monitoring, reflects the social control/resistance model of boredom. The second individual difference variable was level of intrinsic motivation and reflects psychological theories of boredom. Both of these variables allowed us to take into consideration factors that might contribute to boredom across situations. At the situational level, we examined factors associated with boredom within an individual by examining three possible reasons for participating in a particular activity: Had to, wanted to, and had nothing else to do. Each reason stemmed from either a social control or psychologically based perspective. These variables are described in more detail below.

Individual Difference Variables

The general level of intrinsic motivation perceived by the adolescent and the general level of parental monitoring of the individual are important because of their potential to moderate or mediate the experience of boredom. Weissinger et al. (1992) suggested, for example, that individual difference variables such as desire for intrinsic rewards will generalize across situations, and thus, are important considerations in understanding boredom. In this study, intrinsic motivation was important because of its (a) recognized importance to leisure experience (e.g., Gunter, 1987; Iso-Ahola, 1979; Neulinger, 1981) and (b) relationship to the experience of boredom from a psychological perspective (e.g., Weissinger et al., 1992). The general level of parental monitoring perceived by an adolescent taps the extent to which the adolescent exercises autonomy and self-determination in leisure experiences versus the extent to which activities are controlled and monitored by parents. 
Although this variable is new to the leisure literature, previous work has suggested that level of parental monitoring does have some influence on the leisure of adolescents (e.g., Caldwell & Darling, in press), especially engagement in problem behavior and substance use (Steinberg, Fletcher, & Darling, 1994).

Situation Level Variable

At the situational level, we assessed the reason for participating in the activity. Three reasons, "I had to," "I wanted to," and "I had nothing else to do," directly reflect common reasons given by adolescents to explain their behavior. Each of these reasons relates to a psychologically or social control based theory of boredom. The "had to" situation reflects the feeling that someone (parent, teacher, coach, etc.) exerted influence on the adolescent producing a feeling of obligation. Boredom associated with this reason is thought to be a result of the social control/resistance models of boredom. The "wanted to" situation reflects self-determination and intrinsic motivation. We viewed the role of self-determination and intrinsic motivation as indicative of the psychologically based theories. The "nothing else to do" situation suggests a lack of stimulation, lack of optimal arousal, and/or lack of awareness of leisure opportunities, stemming from the psychologically based theories.

Contexts of Boredom in Free Time

We felt that it was important to understand the context of boredom. Different leisure contexts and activities may be associated with different outcomes (Caldwell & Darling, in press; Caldwell, Smith, & Weissinger, 1992b). In addition, Weissinger et al. (1992) suggested that a study examining both context and dispositional factors with regard to boredom in leisure was an important "next step" study. Their finding was supported by the work of Larson and Richards (1991) who concluded that both context and individual difference variables were important in understanding adolescent boredom in and out of school. 
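The two-level design described here, with momentary diary reports nested within adolescents, is what motivates the use of hierarchical linear modeling. As a minimal illustrative sketch (not the authors' analysis; the diary data below are invented), the code decomposes variance in repeated boredom ratings into a between-person and a within-person component and computes the intraclass correlation, the quantity that signals when a multilevel model is preferable to ordinary regression:

```python
# Hypothetical sketch of the nesting logic behind HLM.
# Data are invented: four diary boredom ratings (1-5) per adolescent.
diary_entries = {
    "A": [1, 2, 1, 2],   # rarely bored
    "B": [4, 5, 4, 5],   # often bored
    "C": [2, 3, 3, 2],
}

def mean(xs):
    return sum(xs) / len(xs)

grand = mean([x for xs in diary_entries.values() for x in xs])

# Between-person variance: spread of each person's mean around the grand mean
between = mean([(mean(xs) - grand) ** 2 for xs in diary_entries.values()])

# Within-person variance: spread of entries around each person's own mean
within = mean([mean([(x - mean(xs)) ** 2 for x in xs])
               for xs in diary_entries.values()])

# Intraclass correlation: share of total variance due to stable
# person-level differences; a high ICC argues for modeling the two
# levels separately, as HLM does
icc = between / (between + within)
print(round(icc, 3))
```

With these toy numbers most of the variance lies between persons, so situational predictors ("had to," "wanted to," "nothing else to do") would be modeled at level one and individual differences (parental monitoring, intrinsic motivation) at level two.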
Thus, we felt it important to understand whether or not differences in boredom existed depending on the type of activity.

Hypotheses

This study sought to understand the causes of boredom in free time among adolescents using both the psychological and social control/resistance theories at two levels of analysis (individual and situational; see Table 1). We predicted that regardless of level of analysis, when adolescents felt as though they were autonomous and self-determined they would be less bored. Conversely, when adolescents felt controlled, they would experience boredom. Thus, at the situational level we hypothesized that the "want to" situation would produce the lowest levels of boredom; we could not hypothesize which of the other two reasons for participation would better predict boredom. At the individual difference level we hypothesized that high levels of perceived parental monitoring would be predictive of higher levels of boredom. We also hypothesized that low levels of intrinsic motivation would predict higher levels of boredom. We examined the relationship of context to level of boredom in post hoc analysis and thus made no predictions.

[TABULAR DATA FOR TABLE 1 OMITTED]

Methods and Procedures

Data for this investigation came from phase two of a three-year longitudinal study conducted in a middle school in central Pennsylvania. In phase two, all students in grades six through eight were asked to volunteer to complete an in-school questionnaire about parents, free time, friends, school achievement, identity, and self-esteem. The present study used data from all grade eight students who volunteered to participate in an extended study.
As part of this extended study, grade eight students who participated in the in-school survey were contacted by phone and asked to participate in a one-hour in-depth interview (about dating as well as relationships with their friends and family), keep an activity diary (about leisure activities), and complete a follow-up questionnaire (about school achievement, parental monitoring, and other parenting practices). The data reported here come from the activity diary, in-school survey, and follow-up survey portions of the study. A process of active consent was used. All students whose parents signed and returned the consent form (indicating either approval or refusal) received a coupon for a free Dairy Queen Blizzard. If students participated in the extended study, they received a movie ticket. For the phase two study, a total of 600 recruitment letters were sent to all parents of middle school students. Of the 600, 398 parents gave permission for their children to participate in the general study (66% response rate). Of the 398 students, 143 were in grade eight. (Thus 72% of all grade eight students' parents provided consent.) Of these 143 grade eight students, 86 (60% of the 143 students who were allowed to participate) participated in the in-depth interview and follow-up survey and 82 (57%) participated in the activity diary portion of the project. Scheduling conflicts were the most common reason for refusal to participate in the study. Participants predominantly identified themselves as white (92%), with 56% of the mothers and 60% of the fathers having graduated from college. The sample was 51% female, with a mean age of 13.2 years (s.d. = .44).

Instrumentation and Procedures

The in-school questionnaire was self-administered in large group settings (cafeterias and study halls) and took approximately 30 minutes to complete. The research team administered the questionnaires to the students.
Questions were asked about parents, friends, leisure, school achievement, intrinsic motivation, boredom in free time, and problem behaviors (e.g., vandalism, substance use, etc.). Grade eight students who agreed to participate in the extended study completed a follow-up questionnaire at home and brought it with them to the personal interview. These grade eight students indicated their consent to continue their participation by filling out a sheet of paper asking for their continued cooperation. Each student who agreed to continue was then provided with a take-home questionnaire. Questions on the follow-up questionnaire and interview focused on parental monitoring, information disclosure to parents, conflict over rules, adolescent autonomy, self-esteem, and dating. These grade eight students were contacted by a research assistant, and one-on-one interviews were scheduled with a trained interviewer. Once an adolescent participated in the in-depth interview and completed the follow-up questionnaire, he or she then began the activity diary component of the study. The activity diary was used to assess the daily free time behaviors and experiences of the adolescents in the study. The instrument was pretested on several eighth graders to ensure the questions and response categories were easily understood. Data were collected via phone interviews Monday through Thursday between 7:00 and 9:30 p.m. All phone interviewers completed a two-session training program prior to the phone interviews. Participants were randomly scheduled to be interviewed four times over a two-week period, although the two-week period differed for each adolescent. Initially, data were to be collected on weekend days and weekdays. Interviewers had a difficult time scheduling interviews on weekend days, however, so the research team decided to collect data on Monday through Thursday evenings only.
Although this meant that we have no weekend data, the data are more homogeneous and reflect the weekday pattern of activities and experiences of the grade eight students in this sample. Data collection for the activity diary portion of the study began in March and continued through mid-June. After a brief introduction, the adolescent who was phoned was first asked to identify the main activity done that day between after school and dinner time, and then between dinner time and bedtime. The interviewer chose one of the activities to be the focus of a series of follow-up questions. The focal activity chosen represented a leisure situation. Thus, if the adolescent did homework and hung out with friends, the interviewer chose hanging out to be the focal activity. In the rare case where neither of the two activities was leisure oriented, the interviewer randomly chose one. If both activities were leisure, one was chosen at random. All questions asked in the activity diary interview related to the focal activity. A variety of questions were asked, covering topics such as experience (e.g., boredom), with whom the youth participated, location of activity, reason for doing the activity, and so on.

Measures

The dependent variable, level of boredom for each activity, was assessed through a single item that asked participants to respond to how bored versus how involved they were in their activity, where 1 = very involved and into it and 5 = very bored. Although several scales exist that measure boredom and reflect a more dimensionalized perspective, we used a single item to make it easier to respond over the phone and to reduce the burden of response time. As it was, the phone diary took about 20 minutes to complete. This single item seemed adequate for our purpose, which was simply to know whether or not they were bored. Pre-test member checks indicated this was a valid measure for assessing a 13-year-old's perception of whether a situation was boring or not.
Situation Level Variables. Reason for participation in the activity was assessed using a single item, "Why did you participate in the activity?" Response choices included "had to," "wanted to," and "because there was nothing else to do." Again, a single item was used, and pre-test member checks indicated this question and its response categories adequately captured the intent of the question. Adolescents' comments indicated this variable had high face validity. The other situational level variable was the specific activity in which the adolescent participated. This variable was created by classifying each activity into one of six categories: media/home-based; school-based (e.g., arts, sports); social; outdoor/active; miscellaneous (e.g., church service, driving from the airport, and being interviewed); and maintenance/work. Decisions about classification of these activities were made by the team of researchers and were relatively straightforward. Although the categories are not mutually exclusive, for the purposes of this study this classification scheme was adequate.

Individual Difference Level Variables. Parental monitoring was assessed by a standard monitoring index (Patterson & Stouthamer-Loeber, 1984). Responses to this index came from the follow-up questionnaire. In this case, we used the mother as the parent of interest because typically mothers are more involved in parenting at all ages, especially monitoring children (Steinberg, 1990). Students responded to the stem "How much does your mother REALLY know" for five situations, including: "Where you go at night? How you spend your money? What you do with your free time? Where you are most afternoons after school?" Responses were coded on a three-point response format, with 1 representing "knows a lot," 2 representing "knows a little," and 3 representing "doesn't know" (Cronbach's alpha = .80). Intrinsic motivation was measured with a nine-item index adapted from Harter (1981).
This measure comprised the following items, which were included on the in-school questionnaire: "I like challenging work," "I like to figure things out for myself," "I'd rather figure out mistakes on my own," "I like solving hard problems on my own," "I know how I'm doing without a report card," "I like hard school subjects," "I know how I'm doing without a teacher telling me," "I find difficult work interesting," and "I know if something is good when I turn it in." The response format was 1 = this is not at all like me and 5 = this is really like me (Cronbach's alpha = .86).

Analytic Strategy

This paper used a three-fold analytic strategy. First, descriptive statistics were examined. Next, inferential analyses were performed to assess the predictors of boredom at the individual difference level and the situation level. Finally, post hoc analyses were used to illustrate the nature of the relationships found and to gain insight into the differences between the predictors of boredom across activity types. Gender was included in the analysis because past research has indicated significant gender differences in all variables of interest to this study (e.g., Shaw et al., 1995). Analysis of the diary data was complicated by the non-independence of observations because each adolescent reported on activities for four different days. Hierarchical linear modeling (HLM), a technique specifically designed to decompose variance into common source and situational variance (Bryk & Raudenbush, 1992), was used to assess the extent to which adolescents' level of boredom differed depending upon their reason for participating in the activity and on the individual characteristics they brought to the situation (i.e., perceived parental monitoring, intrinsic motivation, and gender).
In these analyses, HLM parses variance into a situational component (differences within an adolescent's level of boredom across different situations as predicted by reason for participation) and an individual component (differences between adolescents' boredom predicted by gender, perceived parental monitoring, and intrinsic motivation). The former category is considered situational because reason for participation varied across different leisure situations or occasions, while the latter reflects characteristics individuals bring to all situations in which they participate. HLM analyses provide two types of information: (a) an estimate of the component of variance in the outcome measure (boredom) that can be attributed to individual differences between people and to differences within people across situations, and (b) information about the extent to which each variance component can be predicted by its respective predictors (reason for participation, gender, intrinsic motivation, and perceived parental monitoring(1)). These analyses rely on data from 81 individuals observed across 234 situations. Because each adolescent reported on only four different situations, the model adopts the assumption of traditional regression models that the relationship between reason for participation and boredom is uniform across individuals.

Results

Descriptive statistics illustrating the relationship between reason for participation and adolescents' levels of boredom, intrinsic motivation, and perceived parental monitoring are presented in Tables 2 and 3(2). In general, these descriptive findings support the general pattern hypothesized to underlie the relationships: when adolescents engage in activities because they want to, they report lower levels of boredom during the activity and higher levels of intrinsic motivation compared to adolescents who participate in activities because they felt they had to or had nothing else to do.
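The between-person/within-person variance partition that HLM performs can be illustrated with a small simulation. This is a hypothetical sketch in Python, not the authors' analysis: the person-trait and occasion values below are simulated assumptions, and none of the numbers come from the study's data.

```python
import random
from statistics import mean

# Simulate four diary reports for each of 81 adolescents: each score is
# a grand mean plus a stable person "trait" plus an occasion-specific
# fluctuation. (Both standard deviations are arbitrary assumptions.)
random.seed(1)
n_people, n_reports = 81, 4
reports = []  # (person_id, boredom_score) pairs
for person in range(n_people):
    trait = random.gauss(0, 0.5)          # stable individual difference
    for _ in range(n_reports):
        occasion = random.gauss(0, 1.0)   # situation-specific fluctuation
        reports.append((person, 2.0 + trait + occasion))

grand_mean = mean(score for _, score in reports)
person_mean = {p: mean(s for q, s in reports if q == p)
               for p in range(n_people)}

# Between-person: spread of each person's mean around the grand mean.
between = mean((m - grand_mean) ** 2 for m in person_mean.values())
# Within-person: spread of each report around that person's own mean.
within = mean((s - person_mean[p]) ** 2 for p, s in reports)

share_individual = between / (between + within)
print(f"between = {between:.3f}, within = {within:.3f}, "
      f"individual share = {share_individual:.0%}")
```

A full HLM analysis estimates these components by maximum likelihood rather than by simple sums of squares, but the logic of the decomposition is the same.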
Contrary to our hypothesis, however, perceived parental monitoring was higher for those grade eight students who wanted to do the activity. Overall, males reported slightly higher levels of boredom, lower levels of intrinsic motivation, and lower levels of perceived parental monitoring than females.

TABLE 2
Descriptive Statistics of Reason for Participation, Intrinsic Motivation, and Parental Monitoring

                       Had To        Wanted To     Nothing Else To Do
                       Mean (s.d.)   Mean (s.d.)   Mean (s.d.)
Boredom                2.69 (1.34)   1.72 (.83)    2.64 (1.11)
Intrinsic Motivation   3.14 (.83)    3.44 (.87)    3.12 (.90)
Parental Monitoring    1.58 (.50)    1.63 (.49)    1.46 (.50)
N                      54            208           57

Note: Boredom coded as: 1 = very involved and into it, 5 = very bored. Intrinsic Motivation coded as: 1 = low intrinsic motivation, 5 = high intrinsic motivation. Parental Monitoring coded as: 1 = doesn't know, 2 = knows a little bit, 3 = knows a lot.

Predicting Boredom. HLM was used to predict levels of boredom from individual difference variables (i.e., intrinsic motivation, perceived parental monitoring, and gender) and situational variables (i.e., reason for participation). Reading and interpreting a conventional HLM table is not necessarily intuitive. Table 4 reflects the results of a series of steps in HLM that one conducts to get to the "bottom line." Although the most important results are reported at the top of the table, critical diagnostic information is contained in the middle and bottom. The information in the middle of the table, which is the first piece of diagnostic information, essentially tells us that we can proceed with our interpretation: the model contains sufficient variance to warrant investigation. The next question to ask is how much of the observed variation in boredom can be explained by differences at the situation level (that is, within person), and how much by individual differences (that is, between persons).
Baseline model statistics (lower portion of Table 4) indicate how much total variance could potentially be explained, whereas the current model statistics show how much our model actually explains. Examination of the baseline model suggests that 23% of variance in adolescents' reported boredom can be explained by individual differences, while the remaining 77% is attributable to situational differences plus error. The variance attributed to the individual difference level (23%) is calculated by dividing the baseline variance due to individual differences (.2649) by the total variance (.2649 + .9064). The R^2 scores are the proportion of variance at that level that is explained, relative to the proportion of variance that it is possible to explain. In addition to providing insight into the relative proportion of variance attributable to situational and individual differences, these baseline figures are also important because in an HLM analysis the ability of variables to predict the outcome is judged against only that proportion of the variance at the same explanatory level as the variable. Thus, in examining the proportion of variance that can be explained by situational factors, we examined the proportion of within-person variance attributable to reason for participation. Similarly, the success of intrinsic motivation, perceived parental monitoring, and gender in predicting boredom is examined relative to the between-person variability.
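The 23%/77% split described above follows directly from the two baseline variance components. The arithmetic can be verified in a few lines; only the .2649 and .9064 values come from the text.

```python
# Baseline variance components reported for the model in Table 4.
between_person = 0.2649  # variance due to individual differences
within_person = 0.9064   # situational variance plus error

share_individual = between_person / (between_person + within_person)
share_situational = 1.0 - share_individual

print(f"individual: {share_individual:.0%}")   # 23%
print(f"situational: {share_situational:.0%}")  # 77%
```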
TABLE 3
Descriptive Statistics for Perceived Parental Monitoring, Intrinsic Motivation, and Boredom by Gender

           Boredom       Intrinsic Motivation   Parental Monitoring
           M (sd)        M (sd)                 M (sd)
Males      2.12 (1.13)   3.28 (.87)             2.36 (.59)
Females    2.00 (1.04)   3.38 (.90)             2.50 (.48)

Note: Boredom coded as: 1 = very involved and into it, 5 = very bored. Intrinsic Motivation coded as: 1 = low intrinsic motivation, 5 = high intrinsic motivation. Parental Monitoring coded as: 1 = doesn't know, 2 = knows a little bit, 3 = knows a lot.

[TABULAR DATA FOR TABLE 4 OMITTED]

TABLE 5
Relationship of Activity Type to Reason for Participation

                   Had To       Wanted To     Nothing Else To Do
                   % (N)(a)     % (N)         % (N)
Home based         8.2% (4)     51.0% (25)    40.8% (20)
School based       31.5% (23)   67.1% (49)    1.4% (1)
Social             2.9% (2)     76.5% (52)    20.6% (14)
Outdoor            1.9% (1)     85.2% (46)    13.0% (7)
Miscellaneous      17.5% (7)    60.0% (24)    22.5% (9)
Maintenance/Work   48.6% (17)   34.3% (12)    17.1% (6)

(a) N reflects the number of times an activity in the category was used as the focal activity for the activity diary interview.

The top of Table 4 provides information to test our hypotheses. The estimated coefficient for base boredom (2.056) represents the mean level of boredom across situations. The estimated coefficients for gender, intrinsic motivation, perceived parental monitoring, and reason for participation ("had to v. want to" and "had to v. nothing else to do") are the regression coefficients representing the relationship between each variable and boredom. At the individual difference level, results indicate that adolescents with lower intrinsic motivation and lower levels of perceived parental monitoring are more likely to be bored (p < .05). Gender does not predict individual differences in boredom. Using data from the bottom of Table 4, we see that 21% (R^2) of the 23% of variance in boredom attributable to individual differences is explained by intrinsic motivation and perceived parental monitoring.
Because HLM cannot analyze categorical data directly, the three reasons for participation were dummy-coded into two dichotomous variables, with the "had to" condition serving as the reference category. As indicated at the top of Table 4, adolescents participating in an activity because they "wanted to" were less bored than when they participated in an activity because they "had to" (T(1,234) = 4.31, p = .000). However, no difference existed between how bored adolescents were when they participated in an activity because they "had to" or because they "had nothing else to do" (T(1,234) = .239, p = .812). Ten percent (R^2, bottom of Table 4) of the 77% of within-person (i.e., situational) variance in boredom can be attributed to adolescents' reason for participating.

Influence of Context on Reason and Boredom. Does the reason adolescents participate in leisure activities vary by activity type? Descriptive statistics presented in Table 5 suggest that it does. For outdoor, social, school-based, and miscellaneous activities, adolescents were most likely to "want to" do the activity, followed by "nothing else to do" and "had to" (except for school-based activity, where "had to" was the next most common reason). Although adolescents were more likely to "want to" participate in school-based programs, almost one third of the adolescents reported they "had to" participate in these activities. The only other activity type with a relatively high proportion of "had to" responses was maintenance/work. Reasons given for participating in home-based activities were split almost 50-50 between "wanted to" and "nothing to do." Not surprisingly, having nothing else to do was associated with some type of home-based activity about 41% of the time. Social and miscellaneous activities (e.g., driving from the airport) were also associated with having nothing else to do about 20-23% of the time.
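The dummy coding described above can be sketched as follows. This is a hypothetical illustration using the common convention of coding the reference category ("had to") as 0 on both indicators, so each estimated coefficient is a contrast against "had to"; the exact coding scheme the authors used may differ.

```python
# Map each reason for participation to two 0/1 indicator variables.
# "had to" is the reference category (0, 0), so the two dummy
# coefficients are read as contrasts against "had to."
REASON_DUMMIES = {
    "had to": (0, 0),              # reference category
    "wanted to": (1, 0),           # "had to" v. "wanted to" contrast
    "nothing else to do": (0, 1),  # "had to" v. "nothing else" contrast
}

def dummy_code(reason):
    """Return the (wanted_to, nothing_else) indicator pair."""
    return REASON_DUMMIES[reason]

for reason in REASON_DUMMIES:
    print(f"{reason!r} -> {dummy_code(reason)}")
```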
The Effect of Reason on Boredom by Activity. To gain further insight into the nature of the relationship between reason for participation, activity type, and boredom, mean levels of boredom were calculated separately by reason for participation within each activity type (Table 6). Again, due to the nonindependent nature of the data, tests of significance were not performed. Although evidence for variability in boredom across activities existed, within each activity type the "wanted to" situation was associated with the lowest level of boredom. These results are consistent both with the HLM analyses and with the interpretation that the relationship between reason for participation and boredom is not due to motivational differences in activity type.

[TABULAR DATA FOR TABLE 6 OMITTED]

Discussion

The purpose of this study was to help us better understand why adolescents are bored by contrasting two perspectives of boredom: psychologically based theories and theories related to social control and resistance. Through a series of analyses, we have painted a picture of adolescent boredom as experienced in free time activities using information about individual differences and situations. At the situational level, the results were as predicted. Having no choice (i.e., feeling pressured by external factors) or perceiving nothing to do (i.e., no optimally arousing options) was predictive of boredom, whereas being self-determined in activity choice (wanted to) was strongly associated with being involved (and not bored) in the activity. Furthermore, most of the variance in boredom came from situational factors (77% of the total possible to be explained), suggesting that adolescents are more prone to be influenced by "the moment" than by the presumably stable individual difference characteristics they possess. At the individual difference level, we found mixed support for our predictions.
In the next sections we discuss these findings in more detail and, where appropriate, offer developmental explanations or speculations. In particular, we discuss the developmental issues of autonomy, identity, and attention focusing. In some cases methodological issues are discussed as well.

Social Control

Social control theory suggests that adult control may be associated with boredom in adolescent leisure experiences. We found mixed support for this contention, although the evidence to the contrary explains only a small proportion of variance and may be an artifact of measurement. In this study, the lack of autonomy ("I had to") clearly was positively associated with feelings of boredom. Perceived parental monitoring, however, was negatively associated with boredom. In the leisure literature, the role of parents in the leisure of adolescents has been minimally addressed. Although parental influences on youth leisure experiences have been explored from a purchase decision perspective (Howard & Madrigal, 1990) and from situations where parents were spectators in adolescent competitive sports (Leff & Hoyle, 1995), neither of these studies examined consequences of parental involvement on the outcomes (e.g., enjoyment, identity, and boredom) of adolescent leisure experiences. In trying to understand why lower levels of parental monitoring were associated with boredom, it is important to consider the developmental stage of these 13-year-olds. At 13, it is probably still considered reasonable and safe for parents to know where, what, and with whom the adolescent is engaged; 13-year-olds are in the early stages of making the transition to increased freedom in decision making. Thus, parental monitoring may not have been construed as a lack of autonomy, but rather was seen as supportive.
Parental monitoring is associated with the authoritative style of parenting, which in turn is associated with enhanced engagement and performance in school (e.g., Steinberg, Lamborn, Dornbusch, & Darling, 1992). Our findings suggest it is possible this effect carries over into a leisure context. Other factors may explain why higher levels of perceived monitoring by parents were associated with lower levels of boredom. First and most obvious, it could be that a parent had facilitated the experience in the first place, relieving the adolescent of having to think of something to do. If this were the case, however, there would be a fine line between a parent facilitating an experience and a parent usurping autonomy. In other words, rather than perceiving parental involvement in one's free time as 'legitimate,' an adolescent might construe this involvement as over-controlling. In this case, according to both developmental theory and the social control/resistance theory of boredom, this situation would produce boredom. Thus, a future research question would be to determine whether the level of perceived parental control was 'legitimate' or not, and how these perceptions relate to boredom. Measurement issues are also important to consider. Our measure of parental monitoring was a general one, and not specific to the situation. As we saw from the HLM analysis, situation-specific reasons for participating in the activity were stronger predictors than individual difference variables. Finally, it is possible that parental monitoring was not a good measure of autonomy from parents. In sum, parental monitoring and parental influences on adolescents have not been well addressed by leisure researchers, and this study suggests that further inquiry into this line of thinking might be productive. Social control comes not only from parents, but also from other adults such as coaches and leaders of extracurricular activities.
About one third of the adolescents in this study reported they "had to" do some of the school-based extracurricular activities. A frequent activity reported on during the period of the activity diary was a play in which many adolescents participated. In many cases, these adolescents really wanted to participate. In other cases (almost one third of the time), they felt obligated, and as we reported, feeling obligated to participate was linked with a higher level of boredom. These results are consistent with Larson and Richards' (1991) finding that adolescents reported being bored 30% of the time during extracurricular activities. Note, however, that adolescents may want to participate in an extracurricular activity in general, although on a particular day they may have preferred doing something else. If further study supports these relationships, then the structure of after school activities should be consistent with the developmental process of autonomy formation and allow for more adolescent ownership of these activities so that choice and self-determined behavior are facilitated. We have addressed the social control perspective considering adult structures and obligations as the controlling factor. Another consideration for future research is to examine whether the social control that peers have over each other produces the same results. The interactive decision making among adolescents in terms of deciding what to do may leave some adolescents feeling pressured, or at least feeling like they lack control, in some situations (Csikszentmihalyi & Larson, 1984; Hultsman, 1993). Evidence from an interpretive study on boredom of at-risk youth suggested that social control stemming from peer expectations is associated with boredom (Brake, 1997), but this relationship is a complex one.

Nothing to Do

Participating in an activity because there was nothing else to do was associated with higher levels of boredom than participating out of self-determination.
Doing an activity because "I wanted to" implies that through self-determination and autonomy, adolescents are making an active choice of something to do. From a cognitive psychology perspective, this implies that (a) these adolescents know of interesting things to do and (b) they have the skills and ability to carry out their desires. Perceiving nothing else to do indicates the opposite. In one sense, perceiving nothing to do might be the result of an inability to decide what to do. Kleiber and Rickards (1984) have suggested that although having free choice is critical to adolescents as they learn to increase their autonomy, actually deciding what to do might be extremely difficult. In part, Kleiber and Rickards suggested, this is because choosing an activity can be perceived as a reflection of who one is, and adolescents are faced with balancing personal, peer, and parental demands on who they perceive themselves to be. Thus, doing something because the adolescent perceived nothing to do might actually be the result of an inability to choose something that satisfactorily balanced these perceived demands. Or, it could be that there was actually nothing to do. Whether having nothing to do was real or imagined, or whether it was based on an inability to choose something satisfactory, we do not know. The fact is that some adolescents in this study participated in an activity by default. In this case, the default choice did not carry the benefits of actively and deliberately choosing an activity. This conclusion is supported by much of the work of Silbereisen and his colleagues (e.g., Silbereisen, Eyferth & Rudinger, 1986), who discussed adolescent development from an "action in context" perspective. This perspective suggests that adolescents who are active producers of their own development (that is, who make self-determined and deliberate choices) are healthier and more productive.
In our research, we found that a lack of deliberate choice undermined the leisure experience. Larson and Kleiber (1993) suggested that because early adolescence is a critical period in which one learns to focus one's attention, leisure activities that allow for, or even demand, self-controlled actions rather than other-directed activities (e.g., directed by parents or coaches) are important to adolescent development. If adolescents could learn skills to help them direct their attention and focus on pleasurable leisure activities, they might learn to reduce the perception that there is nothing to do. It appears that some adolescents naturally develop the ability to self-direct, control, and focus their attention, but that others need assistance in learning how to do this. The ability to focus and direct one's attention increases with age; early on, adult structures actually help facilitate directed attention (Larson & Kleiber, 1993). But, as just seen, having to do a leisure activity due to adult structure is related to increased boredom. Thus, again, a balance needs to be achieved between the levels and types of control offered by adults. This suggests that gauging the developmental level of the adolescent in terms of the ability to focus and control one's attention is important in judging the degree of structure or guidance needed. Future research could address this issue. Other perspectives that provide insight into the default choice of nothing else to do suggest that boredom is an experience of inner conflict (e.g., Bernstein, 1975; Frankl, 1969; Keen, 1977) or ennui (e.g., Healy, 1984; Kuhn, 1976). These types of boredom are more deep-seated, chronic, not dependent upon external factors, and possibly more pathological. Harlow (1997) suggested that boredom is fashionable among today's youth, citing recent song titles as evidence (e.g., "Boring Summer" by CIV, "Bored" by the Deftones, "Being Bored" by Merril Bainbridge, and "Boring Life" by Far).
He stated that music critics refer to this music as "angst and roll," and defined angst as melancholy and disdain for life, a type of existential crisis. We do not know whether those adolescents who reported participating in an activity because of nothing else to do experienced this type of ennui as a way of being, or whether there truly was, objectively or perceptually, nothing else to do. Future research on adolescents and boredom should not ignore this potentially productive perspective. This research might stem from an identity formation perspective. As adolescents mature and discover and create who they are, an identity based on boredom or ennui is not developmentally productive. Our study findings are consistent with the long-held understanding that intrinsic motivation and self-determination, as hallmarks of leisure, are antithetical to the experience of boredom and are associated with high levels of involvement in an activity. To the extent that we can facilitate adolescent choice of activities, mitigate adult control and structure, or reduce feelings of obligatory participation, we can reduce feelings of boredom. From a developmental perspective, this autonomy-enhancing potential of leisure is important. This study has suggested not only that boredom is a complex phenomenon, especially when viewed within a leisure context, but also that boredom might be linked to developmental processes such as autonomy development, cognitive agility, and possibly identity development. Future research that employs innovative and/or mixed-method approaches is needed to continue to unravel the puzzle of adolescent boredom in leisure. Author Identification Notes: Linda L. Caldwell, School of Hotel, Restaurant and Recreation Management, 201 Mateer Building, The Pennsylvania State University, University Park, PA 16802, 814 863-8983, LindaC at psu.edu. The authors would like to thank the anonymous reviewers for their helpful feedback. 
1 Type of activity was not included in the HLM analysis due to its polychotomous nature. Variables with more than three levels are not easily dealt with in HLM analysis; thus we used type of activity in planned post hoc descriptive comparisons. 2 No statistical tests are reported for differences in mean levels across groups due to the nonindependence of observations. That is, this analysis has repeated measures of categorical independent variables, for which no appropriate statistical test is available. References Bernstein, H. (1975). Boredom and the ready-made life. Social Research, 42, 512-537. Brake, S. B. (1997). Perspectives on boredom for at risk adolescent girls. Unpublished master's thesis, The Pennsylvania State University. Bryk, A. S., & Raudenbush, S. W. (1992). Hierarchical linear models: Applications and data analysis methods. Newbury Park, CA: Sage. Caldwell, L. L., & Darling, N. (in press). Leisure context, parental control, and resistance to peer pressure as predictors of adolescent partying and substance use: An ecological perspective. Journal of Leisure Research. Caldwell, L. L., & Smith, E. A. (1995). Health behaviors of leisure alienated youth. Loisir & Societe/Leisure and Society, 18, 143-156. Caldwell, L. L., Smith, E. A., & Weissinger, E. (1992a). Development of a leisure experience battery for adolescents: Parsimony, stability, and validity. Journal of Leisure Research, 24, 361-376. Caldwell, L. L., Smith, E. A., & Weissinger, E. (1992b). The relationship of leisure activities and perceived health of college students. Loisir & Societe/Leisure and Society, 15, 545-556. Csikszentmihalyi, M. (1990). Flow: The psychology of optimal experience. New York: Harper and Row. Csikszentmihalyi, M., & Larson, R. (1984). Being adolescent. New York: Basic Books. Eccles, J., Midgely, C., Wigfield, A., Buchanan, C., Reuman, D., Flanagan, C., & MacIver, D. (1993). 
Development during adolescence: The impact of stage-environment fit on young adolescents' experiences in schools and in families. American Psychologist, 48, 90-101. Elliot, G. R., & Feldman, S. S. (1990). Capturing the adolescent experience. In S. S. Feldman & G. R. Elliott (Eds.), At the threshold (pp. 1-13). Cambridge, MA: Harvard University Press. Farrell, E., Peguero, G., Lindsey, R., & White, R. (1988). Giving voice to high school students: Pressure and boredom, ya know what I mean? American Educational Research Journal, 4, 489-502. Frankl, V. E. (1969). The will to meaning: Foundations and applications of logotherapy. New York: World. Gunter, B. G. (1987). The leisure experience: Selected properties. Journal of Leisure Research, 19(2), 115-130. Harlow, J. (1997). The grand ennui: Running to escape or racing to embrace? Unpublished paper, The Pennsylvania State University. Harter, S. (1981). A new self-report scale of intrinsic versus extrinsic orientation in the classroom: Motivational and informational components. Developmental Psychology, 17, 300-312. Healy, S. D. (1984). Boredom, self, and culture. Cranbury, NJ: Associated University Presses. Howard, D. R., & Madrigal, R. (1990). Who makes the decision: The parent or the child? The perceived influence of parents and children on the purchase of recreation services. Journal of Leisure Research, 22, 244-258. Hultsman, W. (1993). The influence of others as a barrier to recreation participation among early adolescents. Journal of Leisure Research, 25, 150-164. Iso-Ahola, S. E. (1979). Basic dimensions of definitions of leisure. Journal of Leisure Research, 11, 28-39. Iso-Ahola, S. E., & Crowley, E. D. (1991). Adolescent substance abuse and leisure boredom. Journal of Leisure Research, 23(3), 260-271. Iso-Ahola, S. E., & Weissinger, E. (1987). Leisure and boredom. Journal of Social and Clinical Psychology, 5(3), 356-364. Keating, D. P. (1990). Adolescent thinking. In S. S. Feldman & G. R. 
Elliott (Eds.), At the threshold (pp. 54-89). Cambridge, MA: Harvard University Press. Keen, S. (1977, May). Chasing the blahs away: Boredom and how to beat it. Psychology Today, 78-84. Kleiber, D. A., & Rickards, W. H. (1984). Leisure and recreation in adolescence: Limitation and potential. In M. G. Wade (Ed.), Constraints on leisure (pp. 289-318). Springfield, IL: Charles C. Thomas. Kuhn, R. C. (1976). The demon of noontide: Ennui in western literature. Princeton, NJ: Princeton University Press. Larson, R. W., & Kleiber, D. A. (1993). Structured leisure as a context for the development of attention during adolescence. Loisir et Societe/Society and Leisure, 16(1), 77-98. Larson, R. W., & Richards, M. H. (1991). Boredom in the middle school years: Blaming schools versus blaming students. American Journal of Education, 99(4), 418-443. Leff, S. S., & Hoyle, R. H. (1995). Young athletes' perceptions of parental support and pressure. Journal of Youth and Adolescence, 24, 187-203. Neulinger, J. (1981). The psychology of leisure (2nd ed.). Springfield, IL: Charles C. Thomas. O'Hanlon, J. (1981). Boredom: Practical consequences and a theory. Acta Psychologica, 53, 53-82. Orcutt, J. D. (1985). Contrasting effects of two kinds of boredom on alcohol use. Journal of Drug Issues, 14, 161-173. Patterson, G. R., & Stouthamer-Loeber, M. (1984). The correlation of family management practices and delinquency. Child Development, 55, 1299-1307. Shaw, S. M., Caldwell, L. L., & Kleiber, D. A. (1995). Boredom, stress, and social control in the daily activities of adolescents. Journal of Leisure Research, 28, 274-292. Silbereisen, R. K., Eyferth, K., & Rudinger, G. (Eds.) (1986). Development as action in context: Problem behavior and normal youth development. New York: Springer. Steinberg, L. (1990). Autonomy, conflict and harmony in the family. In S. S. Feldman & G. R. Elliott (Eds.), At the threshold (pp. 255-276). Cambridge, MA: Harvard University Press. 
Steinberg, L., Fletcher, A., & Darling, N. (1994). Parental monitoring and peer influences on adolescent substance use. Pediatrics, 93, 1-5. Steinberg, L., Lamborn, S. K., Dornbusch, S. M., & Darling, N. (1992). Impact of parenting practices on adolescent achievement: Authoritative parenting, school involvement, and encouragement to succeed. Child Development, 63, 1266-1281. Weissinger, E., Caldwell, L. L., & Bandalos, D. L. (1992). Relation between intrinsic motivation and boredom in leisure time. Leisure Sciences, 14, 317-325. From jvkohl at bellsouth.net Tue May 17 15:08:01 2005 From: jvkohl at bellsouth.net (JV Kohl) Date: Tue, 17 May 2005 11:08:01 -0400 Subject: [Paleopsych] Olfaction and behavior: was What's the survival value of PTSD? In-Reply-To: <009701c55ad4$e554be70$6501a8c0@callastudios> References: <01C54AF6.7D585B80.shovland@mindspring.com> <016001c54b3a$6b9b66a0$6501a8c0@callastudios> <42897207.5050104@bellsouth.net> <009701c55ad4$e554be70$6501a8c0@callastudios> Message-ID: <428A08D1.3030106@bellsouth.net> http://www.nytimes.com/2005/05/17/opinion/17pinker.html?th&emc=th Steven Pinker writes: "...Swedish neuroscientists scanned people's brains as they smelled a testosterone derivative found in men's sweat and an estrogen-like compound found in women's urine. In heterosexual men, a part of the hypothalamus (the seat of physical drives) responded to the female compound but not the male one; in heterosexual women and homosexual men, it was the other way around." .........and, in the next paragraph................ "The role of pheromones in our sexuality must be small at best. When people want to be titillated or to check out a prospective partner, most seek words or pictures, not dirty laundry." 
--------------------------------------------------------------------------------------------------------------------------- I take issue with Pinker's simplistic link between the understated unconscious effect of putative human pheromones on neuroanatomy (Savic et al.'s findings) and his preemptive conclusion that pheromones play a minimal role in our sexuality. His mistake is common. As indicated by his words/pictures (not dirty laundry) association with sexual titillation, he addresses only conscious choice, and ignores the unconscious effect of pheromones on hormone levels (and behavior) throughout a lifetime of experience (as reviewed in Kohl et al 2001). http://www.nel.edu/22_5/NEL220501R01_Review.htm Using similar faulty logic, Pinker could say that people are more interested in words describing food or pictures of food, and that the role of food's olfactory appeal is "small at best." A logical person would not deny the primary role of olfaction (i.e., chemical appeal) when it comes to food choice. Nevertheless, when it comes to sexuality, Pinker, like most people, does not think logically about the olfactory/chemical appeal of a prospective partner. Pinker's faulty logic would give us the impression that words or pictures are satisfactory substitutes when it comes to our sexual appetite. Alice Andrews wrote: > Is the huge attraction to the scent something essential, i.e. about > 'matching' immune systems and personalities, about desiring something > rare/special, about desiring something disordered, about desiring > something that shows fitness, etc etc...? Or is it just that I > happened to have fallen in love with a man who happened to have had > these particular characteristics and smell, and now I'm locked into it > by association? Or a little of both? The initial attraction is largely due to androgen-associated reproductive fitness, as manifested both in his testosterone level and in his masculine pheromone production. 
Immune system correlates are more important depending on menstrual cycle phase. Once your sexual response cycle has been conditioned to respond to the scent of a high testosterone male, someone who is less chemically/reproductively fit is also less likely to provide a sufficiently stimulating androgenic stimulus. > > Three years ago we corresponded about love and pheromones and I got > your permission to post/share your responses on EP-yahoo. I'm pasting > here because it's pretty interesting. And exactly a year ago I wrote > you an email re the above question re personality and pheromones. I no > longer have that email, but I do have your response. Here's some of > it... I figure it's okay to share: It is. But since I began this post with Pinker's comments relative to homosexual orientation, and you mentioned the link to the immune system, I will add that the sexual orientation - immune system correlates were first detailed in Diamond, M., T. Binstock, J. V. Kohl (1996). "From fertilization to adult sexual behavior: Nonhormonal influences on sexual behavior." Hormones and Behavior 30(December): 333-353. The immune system and the olfactory system have functional similarities in recognition of self/non-self. Accordingly, we will be learning more about the immune system link to pheromone production/response and its link to sexual orientation. For example: Homosexuals produce natural body odor (e.g. pheromones) that is distinguishable from heterosexual body odor, and homosexuals prefer the natural body odor of other homosexuals. This extends the mammalian model for olfactory conditioning of visual appeal (which Pinker ignores) via genotypic and phenotypic expression to homosexual preferences, which lie along the same continuum as the preferences you now appear to be "locked into" by association. 
Thanks for your interest, Jim Kohl www.pheromones.com > > > Alice Andrews wrote: > >>Is there any evidence to suggest that particular odors are signals >>of particular personalities? Certainly high testosterone and these >>pheromones and personality must be linked, no? >> > Yes. Also, since stress increases cortisol, which decreases > testosterone, a confident man's > pheromone production would be indicative of reproductive fitness. You > know the type; acts > like he owns the joint, presents as an alpha male, attracts most of > the women. > >> The three men who share this >>particular scent (musky, musty, almost like mildew) all have similar >>personalities...Somewhat 'disordered' (a little borderline, narcissistic, >>schizoid, etc.) >>I'd be curious to know if there is anything out there on any correlation. (I >>have not found yet.) >> > Watch out for the schizoid. DHEA production varies and so does the > natural body odor of > schizophrenics. In homosexual males it's the ratio of androsterone to > etiocholanolone, which > are the primary metabolites of DHEA. Homosexuals prefer the odor of > other homosexuals (this > will be published later this year by others). > >> ------------------------------------------------------- >> >>AA: >> > I was wondering if there's any literature on (or talk of) female > pheromones at ovulation > having the capability to alter or inhibit or increase a particular > type of sperm-one that > is more likely to impregnate? > > > JVK: > The egg has been described as an active sperm-catcher; pretty sure we > cited this in my > book, but > no info I've seen indicates pheromonal effects on type of sperm. This > is an interesting > thought, > nonetheless. I hope you follow up with your inquiry to other experts. > Pheromone receptors > also > are present on sperm cells (presumably to guide them to the egg). 
> > AA: > If such a sperm is more 'costly' in some way to manufacture, it would > make sense that a > man would 'conserve' most 'fertile,' 'viable,' 'healthy' > sperm for when female was at her most fertile. Or perhaps it is just > as simple as: when a > man detects pheromones most (or likes them most), he is > most turned on and produces MORE semen, thus more chance for > fertilization to occur. And > perhaps more normal sperm cells are present? Any > thoughts? > > JVK: > The literature I've seen indicates a continuum of sperm production > based on ratios of > luteinizing hormone > (LH) and follicle stimulating hormone (FSH), with FSH being largely > responsible for > development. However, it > is an LH surge that accompanies both ovulation in women, and a > testosterone increase in > men exposed to > ovulatory women's pheromones (copulins). There is also some literature > (Sperm Wars) that > mentions increased > anticipatory volume of semen, but no indications of sperm quality as > I recall. > > Sorry I can't be of more help, (read that your book got Jim Brody's > approval, congrats!) > > Jim > --------------------------------------------------- > > AA: > I sometimes wonder if the feelings of Love during conception could > possibly alter the > quality of sperm, too... > neurotransmitters/hormones/peptides etc in woman feeling love during > sex-------->affect > (copulins) pheromones (type or amount)----> > affect sperm quality??? > And/or 'love chemicals' in men simply affecting sperm quality > etc....??? Hmmm.... > > > JVK: > A possibility, since many if not all neuronal systems feedback on > the gonadotropin releasing hormone neuronal system, which drives > everything about reproduction (and, of course, is directly affected > by pheromones.) An example: increasing estrogen levels are linked > to increased oxytocin release with orgasm in women. If oxytocin also > increased with testosterone, bonding would be facilitated. 
Perhaps > the bonding mechanism influences fertility. Or maybe something so > simple as the immune system functions of paired mates adjusting to > the ongoing presence of a mate, facilitating conception via immune > system interaction with sperm production. Much to think about; more > to study. > > Jim > > ----- Original Message ----- > From: JV Kohl > To: Alice Andrews ; The new > improved paleopsych list > Sent: Tuesday, May 17, 2005 12:24 AM > Subject: Re: [Paleopsych] What's the survival value > ofposttraumaticstressdisorder? > > Alice, > I've long thought that the link between PTSD and rape is > olfactory. War vets response triggered by smoke; > women's response triggered by the natural scent of a man--or event > associated odors: alcohol, etc. The natural > scent of a man can evoke chemical changes in reproductive hormone > levels, which would also affect personality. > The association with natural masculine scent is most likely to > alter intimacy with a rape victim's loving spouse/lover. > She will respond to him, unfortunately, as her traumatized body > responded to the rape. > > I wonder how much you've heard, read about the olfactory > connection--and how much validity you think > there is to it. > > Jim Kohl > www.pheromones.com > > Alice Andrews wrote: > >> Steve wrote: >> >> Her chemistry will change, and depending on where she is >> developmentally (her life-history), her personality may >> actually change! (Pre, say, 25 years of age). >> >> >> -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From anonymous_animus at yahoo.com Tue May 17 15:41:09 2005 From: anonymous_animus at yahoo.com (Michael Christopher) Date: Tue, 17 May 2005 08:41:09 -0700 (PDT) Subject: [Paleopsych] the redundancy of time In-Reply-To: <200505171450.j4HEooR23927@tick.javien.com> Message-ID: <20050517154110.75670.qmail@web30812.mail.mud.yahoo.com> >>In general, a man shooting heroin into his vein does so largely for the same reason you rent a video: to dodge the redundancy of time. << --Very profound! I think I agree. Michael From ursus at earthlink.net Tue May 17 16:26:52 2005 From: ursus at earthlink.net (Greg Bear) Date: Tue, 17 May 2005 09:26:52 -0700 Subject: [Paleopsych] Olfaction and behavior: was What's the survival value of PTSD? In-Reply-To: <428A08D1.3030106@bellsouth.net> Message-ID: There's an enormous amount of anecdotal and research evidence that people do in fact enjoy dirty laundry, among other sexual scents. Why raid panties in dorms? Why the attraction to oral sex? Why put musk and ambergris in perfume? (Ever seen a Kodiak bear roll around in the beached and rotting corpse of a dead whale?*) They just don't like to talk about it in public, and apparently, neither does Mr. Pinker. Women may be more crucially sensitive to scent than men; after all, they have to make more important genetic decisions, since their reproductive capacity is much more limited. I can't tell you how many times women have told me about choosing their husbands because they smell good. When Somerset Maugham asked one of H.G. Wells's young female lovers what she found so attractive in the short, reedy-voiced, spiky-haired fellow, she answered, "He smells like honey." But of course, Wells also smelled like money at that point in his career! Greg *Decaying blubber probably produces every sterol known to the universe. 
From checker at panix.com Tue May 17 16:50:50 2005 From: checker at panix.com (Premise Checker) Date: Tue, 17 May 2005 12:50:50 -0400 (EDT) Subject: [Paleopsych] NYT: More Diseases Pinned on Old Culprit: Germs Message-ID: More Diseases Pinned on Old Culprit: Germs http://www.nytimes.com/2005/05/17/health/17infe.html By NICHOLAS BAKALAR Infectious disease used to be a simple matter: this germ causes that illness. Doctors just had to find the germ, kill it, and cure the disease. But the old rules no longer apply. A report issued last month by the American Academy of Microbiology paints a much more complex picture of infectious disease. Germs, scientists are learning, are probably the cause of many illnesses that were never thought to be infectious, and determining exactly how a germ contributes to disease is no longer simple. The old rules date to 1883, when the German bacteriologist Robert Koch laid down three laws - now called Koch's postulates - that infectious disease specialists have used ever since to determine whether an organism causes a disease: The suspected germ must be consistently associated with the disease; it must be isolated from the sick person and cultured in the laboratory; and experimental inoculation with the organism must cause the symptoms of the disease to appear. In 1905, a fourth rule was added: The organism must be isolated again from the experimental infection. Using Koch's postulates as a starting point, scientists figured out the cause, prevention and treatment for one infectious disease after another. In the mid-20th century, some experts began to believe that infectious disease might be permanently conquered. But microbes have been found to metamorphose into new and more destructive forms, to jump from animals to humans, to hide where they are hard to find and to resist the most powerful antibiotics available. Moreover, said Dr. 
Ronald Luftig, an author of the academy's report and a professor of microbiology at the Louisiana State University Health Science Center, "There have been a lot of chronic human illnesses thought to be genetic or environmental, but when you look at them in more detail, it turns out there's involvement of bacteria, groups of bacteria or viruses." Microbes use a variety of mechanisms to attack cells and create havoc. Human papillomavirus, for example, inserts its nucleic acid into host cells, integrating into the cell's genes and altering the normal process of cell division to cause the uncontrolled growth of cervical cancer. Hepatitis B invades the liver, provoking an immune response that stimulates the scarring, cirrhosis and fibrosis that can lead to liver failure. At the same time, it causes genetic mutations that promote tumor growth and deadly liver cancer. Crohn's disease, a chronic inflammation of the intestines, may result from the presence of an infectious organism combined with a person's genetic susceptibility. By suppressing the immune system, inhibiting cell division and directly affecting the function of cells, germs demonstrate an astounding subtlety and resourcefulness in creating biological chaos. And it gets worse. Some microbes can contribute to more than one disease. The papillomavirus, for example, can lead not only to cervical cancer, but also to cancer of the penis and anus, venereal warts, common warts and cancers of the head and neck. Epstein-Barr virus, the cause of infectious mononucleosis, is almost as versatile, associated with Burkitt's lymphoma in Africa and with throat cancer and Hodgkin's disease, among other cancers. Helicobacter pylori, found in the mid-1980's to be a cause of peptic ulcer disease, was later implicated as a contributor to gastric lymphoma as well. Even saying that a microbe "causes" a cancerous lesion is problematic. Dr. David S. 
Pisetsky, a professor of medicine at Duke University Medical Center, points out that most infections do not lead to cancer, and he hesitates to alarm patients by overstating the connection. "These viruses are associated with cancer, but causality is complicated," he said. "In many instances, the viral infection is part of a chain of causality, and not the sole factor." The important questions to ask, Dr. Pisetsky adds, are "What's the risk, and how can I reduce the risk?" "If you have a virus associated with neck and head cancer," he said, "that's one more reason to quit smoking." In the case of a virus known to lead to cervical cancer, he went on, increased vigilance is in order, with Pap smears and regular examinations. All of this, combined with the fact that many germs (especially viruses) are impossible to culture in a laboratory, makes it all the harder to find the microbe that causes the illness. Often, the first step is a simple observation of patients by clinicians, really little more than a hunch: a doctor notices a chronic illness that always seems to be associated with something that looks infectious. This is exactly what happened when Dr. N. M. Gregg, an Australian ophthalmologist, discovered congenital rubella syndrome. He made the connection between the cataracts he was seeing in children and their mothers' German measles during pregnancy. Sometimes epidemiological patterns offer the initial hint, as was the case with Kaposi's sarcoma, once a rare lesion caused by a type of herpes virus that began to occur frequently in gay men whose immune systems were compromised. Once the association is made, the search for the organism can begin. The gut is inhabited by hundreds of species of microbes, and the guilty party may be hiding among them. Germs can lurk in the nervous system, like the varicella virus that causes chickenpox and then lies in wait to cause herpes zoster, or shingles, decades later. 
And some germs can cause infection in one place in the body, and a disease in an entirely different place. Even the most sensitive molecular techniques are sometimes not good enough to find the guilty microbe. There are almost certainly still unknown microbes creating chronic illness. "One of the suspects in multiple sclerosis is Epstein-Barr virus," Dr. Luftig said. "The DNA of the virus integrates into your cells; it's there permanently. Is it a cause? Maybe." Dr. Luftig suggests several other diseases that may have microbial triggers. "There's an enterovirus that's involved in destroying pancreatic islet cells," he said. "Maybe diabetes is caused by an immune reaction to infection. Intrauterine exposure to infection may play a role in schizophrenia." No one yet knows for sure. But researchers - doctors, microbiologists, epidemiologists, geneticists - have their suspicions, and are searching carefully. "We're not saying that everything is due to microbes," Dr. Luftig said. "But the more investigative tools we develop and the more we have interacting groups of researchers with varying specialties, the more we can start to pick out potential agents that were never before suspected." From checker at panix.com Tue May 17 16:51:30 2005 From: checker at panix.com (Premise Checker) Date: Tue, 17 May 2005 12:51:30 -0400 (EDT) Subject: [Paleopsych] NYT: A Critic Takes On the Logic of Female Orgasm Message-ID: A Critic Takes On the Logic of Female Orgasm http://www.nytimes.com/2005/05/17/science/17orga.html [I thought Victorian women were supposed to be completely sexless. So which is it?] By DINITIA SMITH Evolutionary scientists have never had difficulty explaining the male orgasm, closely tied as it is to reproduction. But the Darwinian logic behind the female orgasm has remained elusive. Women can have sexual intercourse and even become pregnant - doing their part for the perpetuation of the species - without experiencing orgasm. So what is its evolutionary purpose? 
Over the last four decades, scientists have come up with a variety of theories, arguing, for example, that orgasm encourages women to have sex and, therefore, reproduce or that it leads women to favor stronger and healthier men, maximizing their offspring's chances of survival. But in a new book, Dr. Elisabeth A. Lloyd, a philosopher of science and professor of biology at Indiana University, takes on 20 leading theories and finds them wanting. The female orgasm, she argues in the book, "The Case of the Female Orgasm: Bias in the Science of Evolution," has no evolutionary function at all. Rather, Dr. Lloyd says the most convincing theory is one put forward in 1979 by Dr. Donald Symons, an anthropologist. That theory holds that female orgasms are simply artifacts - a byproduct of the parallel development of male and female embryos in the first eight or nine weeks of life. In that early period, the nerve and tissue pathways are laid down for various reflexes, including the orgasm, Dr. Lloyd said. As development progresses, male hormones saturate the embryo, and sexuality is defined. In boys, the penis develops, along with the potential to have orgasms and ejaculate, while "females get the nerve pathways for orgasm by initially having the same body plan." Nipples in men are similarly vestigial, Dr. Lloyd pointed out. While nipples in women serve a purpose, male nipples appear to be simply left over from the initial stage of embryonic development. The female orgasm, she said, "is for fun." Dr. Lloyd said scientists had insisted on finding an evolutionary function for female orgasm in humans either because they were invested in believing that women's sexuality must exactly parallel that of men or because they were convinced that all traits had to be "adaptations," that is, serve an evolutionary function. 
Theories of female orgasm are significant, she added, because "men's expectations about women's normal sexuality, about how women should perform, are built around these notions." "And men are the ones who reflect back immediately to the woman whether or not she is adequate sexually," Dr. Lloyd continued. Central to her thesis is the fact that women do not routinely have orgasms during sexual intercourse. She analyzed 32 studies, conducted over 74 years, of the frequency of female orgasm during intercourse. When intercourse was "unassisted," that is, not accompanied by stimulation of the clitoris, just a quarter of the women studied experienced orgasms often or very often during intercourse, she found. Five to 10 percent never had orgasms. Yet many of the women became pregnant. Dr. Lloyd's figures are lower than those of Dr. Alfred C. Kinsey, who in his 1953 book "Sexual Behavior in the Human Female" found that 39 to 47 percent of women reported that they always, or almost always, had orgasm during intercourse. But Kinsey, Dr. Lloyd said, included orgasms assisted by clitoral stimulation. Dr. Lloyd said there was no doubt in her mind that the clitoris was an evolutionary adaptation, selected to create excitement, leading to sexual intercourse and then reproduction. But, "without a link to fertility or reproduction," Dr. Lloyd said, "orgasm cannot be an adaptation." Not everyone agrees. For example, Dr. John Alcock, a professor of biology at Arizona State University, criticized an earlier version of Dr. Lloyd's thesis, discussed in a 1987 article by Stephen Jay Gould in the magazine Natural History. In a phone interview, Dr. Alcock said that he had not read her new book, but that he still maintained the hypothesis that the fact that "orgasm doesn't occur every time a woman has intercourse is not evidence that it's not adaptive." "I'm flabbergasted by the notion that orgasm has to happen every time to be adaptive," he added. Dr. 
Alcock theorized that a woman might use orgasm "as an unconscious way to evaluate the quality of the male," his genetic fitness and, thus, how suitable he would be as a father for her offspring. "Under those circumstances, you wouldn't expect her to have it every time," Dr. Alcock said. Among the theories that Dr. Lloyd addresses in her book is one proposed in 1993, by Dr. R. Robin Baker and Dr. Mark A. Bellis, at Manchester University in England. In two papers published in the journal Animal Behaviour, they argued that female orgasm was a way of manipulating the retention of sperm by creating suction in the uterus. When a woman has an orgasm from one minute before the man ejaculates to 45 minutes after, she retains more sperm, they said. Furthermore, they asserted, when a woman has intercourse with a man other than her regular sexual partner, she is more likely to have an orgasm in that prime time span and thus retain more sperm, presumably making conception more likely. They postulated that women seek other partners in an effort to obtain better genes for their offspring. Dr. Lloyd said the Baker-Bellis argument was "fatally flawed because their sample size is too small." "In one table," she said, "73 percent of the data is based on the experience of one person." In an e-mail message recently, Dr. Baker wrote that his and Dr. Bellis's manuscript had "received intense peer review appraisal" before publication. Statisticians were among the reviewers, he said, and they noted that some sample sizes were small, "but considered that none of these were fatal to our paper." Dr. Lloyd said that studies called into question the logic of such theories. Research by Dr. Ludwig Wildt and his colleagues at the University of Erlangen-Nuremberg in Germany in 1998, for example, found that in a healthy woman the uterus undergoes peristaltic contractions throughout the day in the absence of sexual intercourse or orgasm. This casts doubt, Dr. 
Lloyd argues, on the idea that the contractions of orgasm somehow affect sperm retention. Another hypothesis, proposed in 1995 by Dr. Randy Thornhill, a professor of biology at the University of New Mexico, and two colleagues, held that women were more likely to have orgasms during intercourse with men with symmetrical physical features. On the basis of earlier studies of physical attraction, Dr. Thornhill argued that symmetry might be an indicator of genetic fitness. Dr. Lloyd, however, said those conclusions were not viable because "they only cover a minority of women, 45 percent, who say they sometimes do, and sometimes don't, have orgasm during intercourse." "It excludes women on either end of the spectrum," she said. "The 25 percent who say they almost always have orgasm in intercourse and the 30 percent who say they rarely or never do. And that last 30 percent includes the 10 percent who say they never have orgasm under any circumstances." In a phone interview, Dr. Thornhill said that he had not read Dr. Lloyd's book, but that the fact that not all women have orgasms during intercourse supports his theory. "There will be patterns in orgasm with preferred and not preferred men," he said. Dr. Lloyd also criticized work by Sarah Blaffer Hrdy, an emeritus professor of anthropology at the University of California, Davis, who studies primate behavior and female reproductive strategies. Scientists have documented that orgasm occurs in some female primates; for other mammals, whether orgasm occurs remains an open question. In the 1981 book "The Woman That Never Evolved" and in her other work, Dr. Hrdy argues that orgasm evolved in nonhuman primates as a way for the female to protect her offspring from the depredation of males. She points out that langur monkeys have a high infant mortality rate, with 30 percent of deaths a result of babies' being killed by males who are not the fathers. Male langurs, she says, will not kill the babies of females they have mated with. 
In macaques and chimpanzees, she said, females are conditioned by the pleasurable sensations of clitoral stimulation to keep copulating with multiple partners until they have an orgasm. Thus, males do not know which infants are theirs and which are not and do not attack them. Dr. Hrdy also argues against the idea that female orgasm is an artifact of the early parallel development of male and female embryos. "I'm convinced," she said, "that the selection of the clitoris is quite separate from that of the penis in males." In critiquing Dr. Hrdy's view, Dr. Lloyd disputes the idea that longer periods of sexual intercourse lead to a higher incidence of orgasm, something that, if it is true, may provide an evolutionary rationale for female orgasm. But Dr. Hrdy said her work did not speak one way or another to the issue of female orgasm in humans. "My hypothesis is silent," she said. One possibility, Dr. Hrdy said, is that orgasm in women may have been an adaptive trait in our prehuman ancestors. "But we separated from our common primate ancestors about seven million years ago," she said. "Perhaps the reason orgasm is so erratic is that it's phasing out," Dr. Hrdy said. "Our descendants on the starships may well wonder what all the fuss was about." Western culture is suffused with images of women's sexuality, of women in the throes of orgasm during intercourse and seeming to reach heights of pleasure that are rare, if not impossible, for most women in everyday life. "Accounts of our evolutionary past tell us how the various parts of our body should function," Dr. Lloyd said. If women, she said, are told that it is "natural" to have orgasms every time they have intercourse and that orgasms will help make them pregnant, then they feel inadequate or inferior or abnormal when they do not achieve it. "Getting the evolutionary story straight has potentially very large social and personal consequences for all women," Dr. Lloyd said. "And indirectly for men, as well." 
From checker at panix.com Wed May 18 22:49:16 2005 From: checker at panix.com (Premise Checker) Date: Wed, 18 May 2005 18:49:16 -0400 (EDT) Subject: [Paleopsych] LRB: (Jared Diamond) Partha Dasgupta: Bottlenecks Message-ID: Partha Dasgupta: Bottlenecks http://www.lrb.co.uk/v27/n10/print/dasg01_.html London Review of Books Vol. 27 No. 10 dated 19 May 2005 Collapse: How Societies Choose to Fail or Survive by Jared Diamond Allen Lane, 575 pp, £20.00 Are our dealings with nature sustainable? Can we expect world economic growth to continue for the foreseeable future? Should we be confident that our knowledge and skills will increase in ways that will lessen our reliance on nature despite our growing numbers and rising economic activity? These questions have been debated for decades. If the debate has become increasingly shrill, it is because two opposing ways of looking at the world continue to shape it. If, on the one hand, we look at specific examples of natural assets (fresh water, ocean fisheries, the atmosphere as a carbon `sink': ecosystems generally), there is convincing evidence that at the rate at which we currently exploit them, they are very likely to change character for the worse, with very little warning. On the other hand, if we study historical trends in the price of marketed resources, or improvements in life expectancy, or recorded growth in incomes in regions that are currently rich or on the way to becoming so, the scarcity of resources would appear not yet to have bitten us. If you were to point out that there are acute scarcities in the troubled nations of sub-Saharan Africa, those whose perspective is ecological will tell you that people living in the world's poorest regions are poor because they face acute scarcities relative to their numbers; while those whose perspective is economic will argue that people experience scarcities precisely because they are poor. You may think that a study of past societies could throw some light on the question. 
Most people have heard about the marked decline in the population of Easter Island some three centuries ago; they may know, too, of the pueblos abandoned by the Anasazi in the south-west United States eight hundred years ago, and of the disappearance of the Maya of the southern Yucatán a thousand years ago. They are dimly aware that these events had something to do with mismanagement of the local environment, reinforced in some cases by prolonged droughts. But the same people also know that the civilisations of China and India have evolved for about 5000 years and 3500 years respectively, and that neither is about to say goodbye. So what were the differences between the civilisations that disappeared and those that adapted more or less successfully to changing circumstances over the millennia? To say that the societies that have survived have done so because they managed their habitats well, maintained profitable relationships with their neighbours and prevented their members from killing one another, isn't really to say anything: the survival of those societies itself proves all that. We have to know what the successful ones did in order to avoid the mistakes of those that went under. We would then want to know what those societies that have enjoyed material progress in recent centuries did that was not only not wrong, but may even have been right. Jared Diamond, a professor of geography in California, looks for answers to these big questions by examining the societies that collapsed. The body of his book contains accounts of four such societies: the Easter Islanders; the Anasazi; the Mayans; and the Norse in Greenland, who died out six hundred years ago. 
Diamond demonstrates that the proximate cause of each collapse was ecological devastation brought about, broadly speaking, by one or more of the following: deforestation and habitat destruction; soil degradation (erosion, salinisation and fertility decline); water management problems; over-hunting; over-fishing; population growth; increased per capita impact on the environment; and the impact of exotic species on native species of plants and animals. As might be expected, the relative importance of these factors differs from case to case. Diamond's case studies have a thoroughness appropriate to the moral seriousness of his inquiry. He traces the way, step by tragic step, each of those long-dead societies, as their circumstances changed, made choices that led to their demise. In a moving account of the Norse in Greenland, for example, he shows that there was seemingly nothing wrong with their society. They had brought with them a devout and orderly set of practices relating them to God, to one another and to nature as they had known it in southern Norway. As Diamond discovers, the immigrants possessed an enormous amount of what today is known as `social capital'. And in that lay the source of their eventual collapse; for in their determination to maintain cultural coherence in a foreign environment, they made no attempt to learn from the native inhabitants - the Inuit - about the local ecology and the reasons the latter relied on fish and seals for sustenance. The immigrants' culture had been built instead on cattle, crops and wooden houses. And in choosing to pursue their previous way of life in a fragile, alien landscape, they effectively signed their own death warrant. The Norse in fact died of starvation. This is history as geography, and it is deep and heady stuff. In his search for the root cause of each collapse, Diamond deems no item of information innocuous, no evidence too awkward for consideration. 
He treats his raw materials like pieces in a jigsaw puzzle - written records, archaeological remains, oral history, palaeontological imprints - and places them against the broad canvas of those societies' geography. He sifts through them to reach the most plausible reconstruction of what happened in the distant past. It is detective work of great skill and integrity. Some might wonder whether we need two hundred pages of analysis to reach an understanding of these collapses, but I find Diamond's obsession with detail wholly understandable. When you embark on a project of such scope as this, the urge to dig and then poke around some more to check what else there might be is irresistible. Diamond's reading of the collapses is original, for nature doesn't figure prominently in contemporary intellectual sensibilities. Economists, for example, have moved steadily away from seeing location as a determinant of human experience. Indeed, economic progress is seen as a release from location's grip on our lives. Economists stress that investment and growth in knowledge have reduced transport costs over the centuries. They observe, too, the role of industrialisation in ironing out the effects on societies of geographical difference, such as differences in climate, soil quality, distance from navigable water and, concomitantly, local ecosystems. Modern theories of economic development dismiss geography as a negligible factor in progress. The term `globalisation' is itself a sign that location per se doesn't matter, which may be why contemporary societies are obsessed with cultural survival and are on the whole dismissive of our need to discover how to survive ecologically. Diamond believes that the risk of collapses such as those experienced by earlier societies should be a matter for increasing concern today. 
To demonstrate why, he describes how Rwanda's collapse as a society was brought about by unprecedented population growth in a subsistence economy operating in a fragile ecosystem. He also offers a natural history of China and Australia and the impact their populations are now having on their ecosystems. By his reckoning, the situation today is worse than it has ever been: as well as the old dangers of deforestation and so forth, we now have to deal with anthropogenic climate change, the accumulation of toxic chemicals, energy shortages and a near-full use of the Earth's photosynthetic capacity. The concluding chapters of the book are devoted to speculations on the contemporary human condition, responses to dismissals of the concerns of environmentalists by sceptics, and a meditation on our hopes and the perils we face. Which is when the book skids and becomes a mess. Diamond thinks that in order to demonstrate that mankind is currently engaged in unsustainable economic activity, it's enough to offer a sample of the insults modern economies have been inflicting on nature. Thus he reports a case of deforestation here, increased pesticide run-off there, loss of biota somewhere else and carbon emissions everywhere. But we have been travelling that route for nearly five decades now: environmentalists have routinely pointed to the damage modern economic activity inflicts. Moreover, in recent years such environmental scientists as Paul Ehrlich, Edward Wilson and, most recently, Gretchen Daily, Harold Mooney and Walter Reid, have spoken out while taking far greater care with details and qualifications than Diamond appears to believe is necessary. The more important reason why Diamond's rhetoric doesn't play well any longer is that it presents only one side of the balance-sheet: it ignores the human benefits that accompany environmental damage. You build a road, but that destroys part of the local ecosystem; there is both a cost and a benefit and you have to weigh them up. 
Diamond shows no sign of wanting to look at both sides of the ledger, and his responses to environmental sceptics take the form of `Yes, but . . .' If someone were to point out that chemical fertilisers have increased food production dozens of times over, he would reply: `Yes, but they are a drain on fresh water, and what about all that phosphorus run-off?' Diamond is like a swimmer who competes in a race using only one arm. `In caring for the health of our surroundings, just as of our bodies,' he writes at one point, `it is cheaper and preferable to avoid getting sick than to try to cure illnesses after they have developed' - which sounds wise, but is simply misleading bombast. Technology brings out the worst in him. At one point he claims that `all of our current problems are unintended negative consequences of our existing technology,' to which I felt like shouting in exasperation that perhaps at some times, in some places, a few of the unintended consequences of our existing technology have been beneficial. Reading Diamond you would think our ancestors should all have remained hunter-gatherers in Africa, co-evolving with the native flora and fauna, and roaming the wilds in search of wild berries and the occasional piece of meat. Here I should put my cards on the table. I am an economist who shares Diamond's worries, but I think he has failed to grasp both the way in which information about particular states of affairs gets transmitted (however imperfectly) in modern decentralised economies - via economic signals such as prices, demand, product quality and migration - and the way increases in the scarcity of resources can itself act to spur innovations that ease those scarcities. Without a sympathetic understanding of economic mechanisms, it isn't possible to offer advice on the interactions between nature and the human species. Here is an example of what I mean. Forests loom large in Diamond's case studies. 
As deforestation was the proximate cause of the Easter Islanders' demise, he offers an extended, contrasting account of the way a deforested Japan succeeded, in the early 18th century, in averting total disaster by regenerating its forests. Now consider another island: England. Deforestation here began under the Romans, and by Elizabethan times the price of timber had begun to rise ominously. In the mid-18th century what people saw across the landscape in England wasn't trees, but stone rows separating agricultural fields. The noted economic historian Brinley Thomas argued that it was because timber had become so scarce that a lengthy search began among inventors and tinkerers for an effective coal-based energy source. By Thomas's reckoning, the defining moment of the Industrial Revolution should be located in 1784, when Henry Cort's process for manufacturing iron was first successfully deployed. His analysis would suggest that England became the centre of the Industrial Revolution not because it had abundant energy but because it was running out of energy. France, in contrast, didn't need to find a substitute energy source: it was covered in forests and therefore lost out. I'm not able to judge the plausibility of Thomas's thesis - there would appear to be almost as many views about the origins, timing and location of the Industrial Revolution (granting there was one) as there are economic historians - but the point remains that scarcities lead individuals and societies to search for ways out, which often means discovering alternatives. Diamond is dismissive of the possibility of our finding such alternatives in the future because, as he would have it, we are about to come up against natural bottlenecks. 
We should be persuaded by the evidence that has been gathered over the years by environmental scientists that he is right, but simply telling us that we are about to hit bottlenecks won't do, because environmental sceptics would reply that discovering alternatives is the way to avoid them. If the future is translucent at best, what about studying the recent past to see how the human species has been doing? The question then arises: how should we recognise the trade-offs between a society's present and future needs for goods and services? To put it another way, how should we conceptualise sustainable development? The Brundtland Commission Report of 1987 defined it as `development that meets the needs of the present without compromising the ability of future generations to meet their own needs'. In other words, sustainable development requires that each generation bequeath to its successor at least as large a productive base as it inherited. But how is a generation to judge whether it is leaving behind an adequate productive base for its successor? An economy's productive base consists of its capital assets and its institutions. Ecological economists have recently shown that the correct measure of that base is wealth. They have shown, too, that in estimating wealth, not only is the value of manufactured assets to be included (buildings, machinery, roads), but also `human' capital (knowledge, skills, health), natural capital (ecosystems, minerals, fossil fuels), and institutions (government, civil society, the rule of law). So development is sustainable as long as an economy's wealth relative to its population is maintained over time. Adjusting for changes in population size, economic development should be viewed as growth in wealth, not growth in GNP. There is a big difference between the two. It is possible to enumerate many circumstances in which a nation's GNP (per capita) increases over a period of time even as its wealth (per capita) declines. 
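The wealth criterion lends itself to a small worked example. The sketch below is a toy calculation: the function name, asset values and units are all hypothetical, chosen only to illustrate how GNP per capita can rise while comprehensive wealth per capita falls.

```python
# Toy illustration: GNP per capita can rise while wealth per capita
# (the productive base divided by population) falls, if the drawdown
# of natural capital outpaces investment in other assets relative to
# population growth. All numbers are hypothetical.

def wealth_per_capita(manufactured, human, natural, institutions, population):
    """Comprehensive wealth -- manufactured, human and natural capital
    plus the value of institutions -- divided by population."""
    return (manufactured + human + natural + institutions) / population

# Year 0: asset values in arbitrary units; population in millions.
w0 = wealth_per_capita(manufactured=100, human=150, natural=200,
                       institutions=50, population=10.0)

# Year 1: manufactured and human capital grow, but natural capital is
# heavily drawn down and the population has grown.
w1 = wealth_per_capita(manufactured=115, human=160, natural=160,
                       institutions=52, population=10.5)

gnp0 = 40 / 10.0   # GNP per capita, year 0
gnp1 = 44 / 10.5   # GNP per capita, year 1: higher than year 0

print(f"GNP per capita:    {gnp0:.2f} -> {gnp1:.2f}")   # rises
print(f"wealth per capita: {w0:.2f} -> {w1:.2f}")       # falls
```

By the GNP yardstick this hypothetical economy is growing, yet on the wealth criterion its development is unsustainable, because each person's share of the productive base is shrinking.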
In broad terms, those circumstances involve growing markets in certain classes of goods and services (natural-resource intensive products), concomitantly with an absence of markets and collective policies for natural capital (ecosystem services). As global environmental problems frequently percolate down to create additional stresses on the local resource bases of the world's poorest people, GNP growth in rich countries can inflict a downward pressure on the wealth of the poor. A state of affairs in which GNP increases while wealth declines can't last for ever. An economy that eats into its productive base in order to raise current production cannot do so indefinitely. Eventually, GNP, too, would have to decline, unless policies were to change so that wealth began to accumulate. That's why it can be hopelessly misleading to use GNP per head as an index of human well-being. Recently the World Bank published estimates of the depreciation of a number of natural resources at the national level. If you were to use those data (and deploy some low cunning) to estimate changes in wealth per capita, you would discover that even though GNP per capita has increased in the Indian subcontinent over the past three decades, wealth per capita has declined somewhat. The decline has occurred because, relative to population growth, investment in manufactured capital, knowledge and skills, and improvements in institutions, have not compensated for the decline of natural capital. You would find that in sub-Saharan Africa both GNP per capita and wealth per capita have declined. You would also confirm that in the world's poorest regions (Africa and the Indian subcontinent), those that have experienced higher population growth have also decumulated wealth per capita at a faster rate. And, finally, you would learn that the economies of China and the OECD countries, in contrast, have grown both in terms of GNP per capita and wealth per capita. 
These regions have more than substituted for the decline in natural capital by accumulating other types of capital assets and improving institutions. It seems that during the past three decades the rich world has enjoyed sustainable development, while development in the poor world (barring China) has been unsustainable.

These are early days in the quantitative study of sustainable development. Even so, one can argue that estimates of wealth movements in recent history are biased. As regards natural capital, the World Bank has so far limited itself to taking into account the atmosphere as a `sink' for carbon dioxide; minerals, oil and natural gas; and forests as a source of timber. Among the many types of natural capital whose depreciation has not been included are fresh water; soil; forests, wetlands, mangroves and coral reefs as providers of ecosystem services; and the atmosphere as a sink for such forms of pollution as particulates and nitrogen and sulphur oxides. If these missing items were to be included, the poor world's economic performance over the past three decades, including China's, would undoubtedly look a lot worse. The same would be true for the rich world.

There are further reasons for thinking that the estimates of wealth changes that I have been referring to are biased. They have to do with the way prices are estimated for valuing natural capital. Empirical studies by earth scientists have revealed that the capacity of natural systems to absorb disturbances is not unlimited. When their absorptive capacities reach their limit, natural systems are liable to collapse into unproductive states. Their recovery is costly, both in time and material resources. If the Gulf Stream were to shift direction or slow down on account of global warming, the change would to all intents and purposes be irreversible. 
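The wealth criterion that runs through this argument is at bottom an accounting identity, and it can be made concrete. The sketch below is illustrative only: the asset values and population figures are entirely hypothetical (they are not the World Bank's estimates), and in practice the shadow prices needed to value natural capital are far harder to come by.

```python
# Illustrative sketch of the ecological economists' sustainability test:
# development is sustainable only if comprehensive wealth per capita
# does not decline over time.  All numbers below are hypothetical.

def wealth_per_capita(manufactured, human, natural, population):
    """Comprehensive wealth per head: total value of capital assets
    (manufactured + human + natural) divided by population."""
    return (manufactured + human + natural) / population

# Two snapshots of a stylised economy three decades apart.
w_then = wealth_per_capita(manufactured=100, human=200, natural=300, population=10)
w_now = wealth_per_capita(manufactured=180, human=260, natural=180, population=13)

print(w_then)            # 60.0
print(round(w_now, 2))   # 47.69
print(w_now < w_then)    # True: development here was unsustainable
```

The point of the exercise is the final comparison: manufactured and human capital both grew, yet because natural capital was run down faster than population grew, wealth per head fell. That is exactly the pattern the review attributes to the Indian subcontinent, where GNP per capita rose while wealth per capita declined.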
We know that up to some unknown set of limits, knowledge, skills, institutions and manufactured capital can substitute for nature's resources, meaning that even if an economy decumulated some of its natural capital, in quantity or quality, its wealth would increase if it invested sufficiently in other assets. The remarkable increase in agricultural productivity over the past two centuries is a case in point. But there are limits to substitutability: the costs of substitution have been known to increase in previously unknown ways as key resources are degraded. Global warming is a case in point. When the downside risks associated with such limits and thresholds are brought into estimates of sustainable development, the growth in wealth among the world's wealthy nations will in all probability turn out to have been less than present estimates would suggest. It may even have been negative.

What I have sketched here is the correct way to determine whether contemporary economic development has been sustainable. It is also the correct way to evaluate public policy, for it tells me that a policy should be accepted if and only if it is expected to lead to an increase in wealth per capita. But you won't find any of this in Diamond's book. There is no evidence that he even realises he doesn't have the equipment to hand with which to study our interactions with nature. Nor, as far as I can judge, has he tried to engage with his economist colleagues to learn whether ecological economists have anything to say on the matter.

The many people who will be reading Diamond's book will be fascinated by the historical case studies, but they will also be left with the impression that there is still no intellectual toolkit with which to deliberate over the most significant issue facing humanity today. Worse, they may not even notice they haven't got the tools. So readers will continue as either environmentalists or environmental sceptics, each locked into their own perspective. 
It is a great pity.

[15]Partha Dasgupta's most recent book was Human Well-Being and the Natural Environment. He is the Frank Ramsey Professor of Economics at Cambridge and a fellow of St John's College.

References

15. http://www.lrb.co.uk/contribhome.php?get=dasg01

From checker at panix.com Wed May 18 22:49:29 2005
From: checker at panix.com (Premise Checker)
Date: Wed, 18 May 2005 18:49:29 -0400 (EDT)
Subject: [Paleopsych] Hal Finney on Frank Tipler
Message-ID: 

From: "Hal Finney" 
Date: Mon, 16 May 2005 17:16:18 -0700 (PDT)
To: everything-list at eskimo.com, Fabric-of-Reality at yahoogroups.com
Subject: Re: Tipler Weighs In

Lee Corbin points to Tipler's March 2005 paper "The Structure of the World From Pure Numbers": http://www.iop.org/EJ/abstract/0034-4885/68/4/R04

I tried to read this paper, but it was 60 pages long and extremely technical, mostly over my head. The gist of it was an updating of Tipler's Omega Point theory, advanced in his book, The Physics of Immortality. Basically the OP theory predicts, based on the assumption that the laws of physics we know today are roughly correct, that the universe must re-collapse in a special way that can't really happen naturally, hence Tipler deduces that intelligent life will survive through and guide the ultimate collapse, during which time the information content of the universe will go to infinity.

The new paper proposes an updated cosmological model that includes a number of new ideas. One is that the fundamental laws of physics for the universe are infinitely complex. This is where his title comes from; he assumes that the universe is based on the mathematics of the continuum, i.e. the real numbers. In fact Tipler argues that the universe must have infinitely complex laws, basing this surprising conclusion on the Löwenheim-Skolem paradox, which says that any countable set of first-order axioms that has a model also has a model that is only countable in size. 
Hence technically we can't really describe the real numbers without an infinite number of axioms, and therefore if the universe is truly based on the reals, it must have laws of infinite complexity. (Otherwise the laws would equally well describe a universe based only on the integers.)

Another idea Tipler proposes is that under the MWI, different universes in the multiverse will expand to different maximum sizes R before re-collapsing. The probability measure however works out to be higher with larger R, hence for any finite R the probability is 1 (i.e. certain) that our universe will be bigger than that. This is his solution to why the universe appears to be flat - it's finite in size but very very big.

Although Tipler wants the laws to be infinitely complex, the physical information content of the universe should be zero, he argues, at the time of the Big Bang (this is due to the Bekenstein Bound). That means among other things there are no particles back then, and so he proposes a special field called an SU(2) gauge field which creates particles as the universe expands. He is able to sort of show that it would preferentially create matter instead of antimatter, and also that this field would be responsible for the cosmological constant which is being observed, aka dark energy. In order for the universe to re-collapse as Tipler insists it must, due to his Omega Point theory, the CC must reverse sign eventually. Tipler suggests that this will happen because life will choose to do so, and that somehow people will find a way to reverse the particle-creation effect, catalyzing the destruction of particles in such a way as to reverse the CC and cause the universe to begin to re-collapse. Yes, he's definitely full of wild ideas here.

Another idea is that particle masses should not have specific, arbitrary values as most physicists believe, but rather they should take on a full range of values, from 0 to positive infinity, over the history of the universe. 
There is some slight observational evidence for a time-based change in the fine structure constant alpha, and Tipler points to that to buttress his theory - however the actual measured value is inconsistent with other aspects, so he has to assume that the measurements are mistaken!

Another testable idea is that the cosmic microwave background radiation is not the cooled-down EM radiation from the big bang, but instead is the remnants of that SU(2) field which was responsible for particle creation. He shows that such a field would look superficially like cooled-down photons, but it really is not. In particular, the photons in this special field would only interact with left-handed electrons, not right-handed ones. This would cause the photons to have less interaction with matter in a way which should be measurable. He uses this to solve the current puzzle of high-energy cosmic rays: such rays should not exist due to interaction with microwave background photons. Tipler's alternative interacts less strongly and so would at least help to explain the problem.

Overall it is quite a mixed bag of exotic ideas that I don't think physicists are going to find very convincing. The idea of infinitely complex natural laws is going to be particularly off-putting, I would imagine. However the idea that the cosmic microwave background interacts differently with matter than ordinary photons is an interesting one and might be worth investigating. It doesn't have that much connection to the rest of his theory, though. 
Hal Finney

From checker at panix.com Wed May 18 22:50:23 2005
From: checker at panix.com (Premise Checker)
Date: Wed, 18 May 2005 18:50:23 -0400 (EDT)
Subject: [Paleopsych] Joel Garreau: Inventing Our Evolution
Message-ID: 

Inventing Our Evolution
http://www.washingtonpost.com/wp-dyn/content/article/2005/05/15/AR2005051501092_pf.html

[Adapted from the book "Radical Evolution: The Promise and Peril of Enhancing Our Minds, Our Bodies -- and What It Means to Be Human" by Joel Garreau, to be published May 17 by Doubleday, a division of Random House Inc.]

We're almost able to build better human beings. But are we ready?

Monday, May 16, 2005; A01

The surge of innovation that has given the world everything from iPods to talking cars is now turning inward, to our own minds and bodies. In an adaptation from his new book, Washington Post staff writer Joel Garreau looks at the impact of the new technology. Some changes in what it means to be human:

* Matthew Nagle, 25, can move objects with his thoughts. The paralyzed former high school football star, whose spinal cord was severed in a stabbing incident, has a jack coming out of the right side of his skull. Sensors in his brain can read his neurons as they fire. These are connected via computer to a robotic hand. When he thinks about moving his hand, the artificial thumb and forefinger open and close. Researchers hope this technology will, within our lifetimes, allow the wheelchair-bound to walk. The military hopes it will allow pilots to fly jets using their minds.

* Around the country, companies such as Memory Pharmaceuticals, Sention, Helicon Therapeutics, Saegis Pharmaceuticals and Cortex Pharmaceuticals are racing to bring memory-enhancing drugs to market before the end of this decade. If clinical trials continue successfully, these pills could be a bigger pharmaceutical bonanza than Viagra. 
Not only do they hold the promise of banishing the senior moments of aging baby boomers; they might improve the SAT scores of kids by 200 points or more.

* At the Defense Sciences Office of the Defense Advanced Research Projects Agency (DARPA) in Arlington, programs seek to modify the metabolisms of soldiers so as to allow them to function efficiently without sleep or even food for as much as a week. For shorter periods, they might even be able to survive without oxygen. Another program seeks to allow soldiers to stop bleeding by focusing their thoughts on the wound. Yet another program is investigating ways to allow veterans to regrow blown-off arms and legs, like salamanders.

Traditionally, human technologies have been aimed outward, to control our environment, resulting in, for example, clothing, agriculture, cities and airplanes. Now, however, we have started aiming our technologies inward. We are transforming our minds, our memories, our metabolisms, our personalities and our progeny. Serious people, including some at the National Science Foundation in Arlington, consider such modification of what it means to be human to be a radical evolution -- one that we direct ourselves. They expect it to be in full flower in the next 10 to 20 years. "The next frontier," says Gregory Stock, director of the Program on Medicine, Technology and Society at the UCLA School of Medicine, "is our own selves."

The process has already begun. Prozac and its ilk modify personality. Viagra alters metabolism. You can see deep change in the basics of biology most clearly, however, wherever you find the keenest competition. Sport is a good example. "The current doping agony," says John Hoberman, a University of Texas authority on performance drugs, "is a kind of very confused referendum on the future of human enhancement." Some athletes today look grotesque. In 2002, Curt Schilling, the All-Star pitcher, talked to Sports Illustrated about the major leagues: "Guys out there look like Mr. 
Potato Head, with a head and arms and six or seven body parts that just don't look right." Steroids are merely a primitive form of human enhancement, however. H. Lee Sweeney of the University of Pennsylvania suggests that the recent Athens Olympics may have been the last without genetically enhanced athletes. His researchers have created super-muscled "Schwarzenegger rats." They're built like steers, with necks wider than their heads. They live longer and recover more quickly from injuries than do their unenhanced comrades. Sweeney sees it as only a matter of time before such technology seeps into the sports world.

Human enhancement is hardly limited to sport. In 2003, President Bush signed a $3.7 billion bill to fund research at the molecular level that could lead to medical robots traveling the human bloodstream to fight cancer or fat cells. At the University of Pennsylvania, ordinary male mouse embryo cells are being transformed into egg cells. If this science works in humans, it could open the way for two gay males to make a baby -- blurring the standard model of parenthood. In 2004, a new technology for the first time allowed women to beat the biological clock. Portions of their ovaries, frozen when they are young and fertile, can be reimplanted in their sixties, seventies or eighties, potentially allowing them to bear children then.

The genetic, robotic and nano-technologies creating such dramatic change are accelerating as quickly as has information technology for the past four decades. The rapid development of all these fields is intertwined. It was in 1965 that Gordon E. Moore, director of Fairchild's Research and Development Laboratories, noted, in an article for the 35th-anniversary issue of Electronics magazine, that the complexity of "minimum cost semiconductor components" had been doubling every year since the first prototype microchip was produced six years before. And he predicted this doubling would continue every year for the next 10 years. 
Carver Mead, a professor at the California Institute of Technology, would come to christen this claim "Moore's Law." Over time it has been modified. As the core faith of the entire global computer industry, it is now stated this way: The power of information technology will double every 18 months, for as far as the eye can see. Sure enough, in 2002, the 27th doubling occurred right on schedule with a billion-transistor chip. A doubling is an amazing thing. It means the next step is as great as all the previous steps put together. Twenty-seven consecutive doublings of anything man-made, an increase of well over 100 million times -- especially in so short a period -- is unprecedented in human history. This is exponential change. It's a curve that goes straight up.

Optimists say that culture and values can control the impact of these advances. "You have to make a distinction between the science and the technological applications," says Francis Fukuyama, a member of the President's Council on Bioethics and director of the Human Biotechnology Governance Project. "It's probably true that in terms of the basic science, it's pretty hard to stop that. It's not one guy in a laboratory somewhere. But not everything that is scientifically possible will actually be technologically implemented and used on a large scale. In the case of human cloning, there's an abstract possibility that people will want to do that, but the number of people who are going to want to take the risk is going to be awfully small."

Taboos will play an important role, Fukuyama says. "We could really speed up the whole process of drug improvement if we did not have all the rules on human experimentation. If companies were allowed to use clinical trials in Third World countries, paying a lot of poor people to take risks that you wouldn't take in a developed country, we could speed up technology quickly. 
But because of the Holocaust -- " Fukuyama thinks the school of hard knocks will slow down a lot of attempts. "People may in the abstract say that they're willing to take that risk. But the moment you have a deformed baby born as a result of someone trying to do some genetic modification, I think there will be a really big backlash against it."

Today, nonetheless, we are surrounded by the practical effects of this curve of exponential technological change. IBM this year fired up a new machine called Blue Gene/L. It is ultimately expected to be 1,000 times as powerful as Deep Blue, the machine that beat world chess champion Garry Kasparov in 1997. "If this computer unlocks the mystery of how proteins fold, it will be an important milestone in the future of medicine and health care," said Paul M. Horn, senior vice president of IBM Research, when the project was announced. Proteins control all cellular processes in the body. They fold into highly complex, three-dimensional shapes that determine their function. Even the slightest change in the folding process can turn a desirable protein into an agent of disease. Blue Gene/L is intended to investigate how.

Thus, breakthroughs in computers today are creating breakthroughs in biology. "One day, you're going to be able to walk into a doctor's office and have a computer analyze a tissue sample, identify the pathogen that ails you, and then instantly prescribe a treatment best suited to your specific illness and individual genetic makeup," Horn said. What's remarkable, then, is not this computer's speed but our ability to use it to open new vistas in entirely different fields -- in this case, the ability to change how our bodies work at the most basic level. This is possible because at a thousand trillion operations per second, this computer might have something approaching the raw processing power of the human brain. 
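The doubling arithmetic behind Moore's Law, as described above, is easy to check directly. The sketch below simply restates the article's own figures (27 doublings; "well over 100 million times"); nothing here is new data.

```python
# Verify the Moore's Law arithmetic cited in the article:
# 27 consecutive doublings since the first prototype microchip.
doublings = 27
growth = 2 ** doublings
print(growth)  # 134217728 -- indeed well over 100 million times

# "The next step is as great as all the previous steps put together":
# 2**n exceeds the sum 2**0 + 2**1 + ... + 2**(n-1) by exactly 1.
previous_steps = sum(2 ** k for k in range(doublings))
print(growth - previous_steps)  # 1
```

The second computation is why exponential growth feels so abrupt: each new doubling contributes slightly more than everything that came before it combined.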
Nathan Myhrvold, the former technology chief of Microsoft, points out that it cost $12 billion to sequence the first human genome. You will soon be able to get your own done for $10, he expects. If an implant in a paralyzed man's head can read his thoughts, if genes can be manipulated into better versions of themselves, the line between the engineered and the born begins to blur.

For example, in Silicon Valley, there is a biotech company called Rinat Neuroscience. DARPA provided critical early funding for its "pain vaccine," a substance designed to block intense pain in less than 10 seconds. Its effects last for 30 days. Tests show it doesn't stifle reactions. If you touch a hot stove, your hand will still automatically jerk away. But after that, the torment is greatly reduced. The product works on the inflammatory response that is responsible for the majority of subacute pain. If you get shot, you feel the bullet, but after that, the inflammation and swelling that trigger agony are substantially reduced. The company is deep into animal testing, is preparing reports for scientific conferences, and has now attracted venture capital funding.

Another DARPA program, originally christened Regenesis, started with the observation that if you cut off the tail of a tadpole, the tail will regrow. If you cut off an appendage of an adult frog, however, it won't, because certain genetic signals have been switched off. This process is carried out by a mass of undifferentiated cells called a blastema, also called a regeneration bud. The bud has the capability to develop into an organ or an appendage, if it gets the right signals. Early results in mice indicate that such blastemas might be generated in humans. The program, now called Restorative Injury Repair, is aimed at allowing regrowth of a blown-off hand or a breast removed in a mastectomy. (Instances of amputated fingertips regenerating in children under 12 have long been noted in scientific journals.) 
"We had it; we lost it; we need to find it again" was Regenesis's original slogan.

Snooze and Lose?

There are three groups of people usually attracted to any new enhancement. In order, they are the sick, the otherwise healthy with a critical need, and the enterprising. This became immediately obvious when a drug called modafinil entered the market earlier this decade. It is intended to shut off the urge to sleep, without the jitter, buzz, euphoria, crash, or potential for paranoid delusion of stimulants such as amphetamines, cocaine or even caffeine. The FDA originally approved modafinil for narcoleptics who fall asleep frequently and uncontrollably. But this widely available prescription drug, with the trade name Provigil, immediately was tested on healthy young U.S. Army helicopter pilots. It allowed them to stay up safely for almost two days while remaining practically as focused, alert and capable of dealing with complex problems as the well rested. Then, after a good eight hours' sleep, it turned out they could get up and do it again for another 40 hours, before finally catching up on their sleep.

But it's the future of the third group -- the millions who, in the immortal words of Kiss, "wanna rock-and-roll all night and party every day" -- that holds the potential for changing society. Will people feel that they need to routinely control their sleep in order to be competitive? Will unenhanced people get fewer promotions and raises than their modified colleagues? Will this start an arms race over human consciousness?

Consider the case of a little boy born in Germany at the turn of this century. As reported in the New England Journal of Medicine last year, his doctors immediately noticed he had unusually large muscles bulging from his tiny arms and legs. By the time he was 4 1/2, it was clear that he was extraordinarily strong. Most children his age can lift about one pound with each arm. He could hold a seven-pound dumbbell aloft with each outstretched hand. 
He is the first human confirmed to have a genetic variation that builds extraordinary muscles. If the effect can be duplicated, it could treat or cure muscle-wasting diseases. Wyeth Pharmaceuticals is testing a drug designed to do just that as a treatment for the most common form of muscular dystrophy. Will athletes try to exploit the discovery to enhance their abilities? "Athletes find a way of using just about anything," says Elizabeth M. McNally of the University of Chicago, who wrote an article accompanying the findings in the New England Journal of Medicine. "This, unfortunately, is no exception."

Views of the Future

Ray Kurzweil, an artificial-intelligence pioneer and winner of the National Medal of Technology, shrugs at the controversy over the use of stem cells from human embryos: "All the political energy that has gone into this issue -- it is not even slowing down the most narrow approach." It is simply being pursued outside the United States -- in China, Korea, Taiwan, Singapore, Scandinavia and Great Britain, where scientists will probably achieve success first, he notes.

In the next couple of decades, Kurzweil predicts, life expectancy will rise to at least 120 years. Most diseases will be prevented or reversed. Drugs will be individually tailored to a person's DNA. Robots smaller than blood cells -- nanobots, as they are called -- will be routinely injected by the millions into people's bloodstreams. They will be used primarily as diagnostic scouts and patrols, so if anything goes wrong in a person's body, it can be caught extremely early.

As James Watson, co-winner of the Nobel Prize for discovering the structure of DNA, famously put it: "No one really has the guts to say it, but if we could make better human beings by knowing how to add genes, why shouldn't we?" Gregory Stock of UCLA sees this as the inevitable outcome of the decoding of the human genome. 
"We have spent billions to unravel our biology, not out of idle curiosity, but in the hope of bettering our lives," he said at a 2003 Yale bioethics conference. "We are not about to turn away from this."

Stock sees humanity embracing artificial chromosomes -- rudimentary versions of which already exist. Right now, the human body has 23 chromosome pairs -- 46 chromosomes in all. Messing with them is tricky -- you never know when you're going to inadvertently step on unanticipated interactions. By adding a new chromosome pair (Nos. 47 and 48) to the embryo, however, the possibilities appear endless. Stock, in his book "Redesigning Humans: Our Inevitable Genetic Future," describes it as the safest way to substantially modify humans because, he says, it would minimize unintended consequences. On top of that, the chromosome insertion sites could have an off switch activated by an injection if we wanted to stop whatever we'd started. This would give future generations a chance to undo whatever we did.

Stock offers this analysis to counter the argument offered by some bioethicists that inheritable germline engineering should be unconditionally banned because future generations harmed by wrongful or unsuccessful modifications would have no control over the matter.

But the very idea of aspiring to such godlike powers is blasphemous to some. "Genetic engineering," writes Michael J. Sandel, a professor of political philosophy at Harvard, is "the ultimate expression of our resolve to see ourselves astride the world, the masters of our nature. But the promise of mastery is flawed. It threatens to banish our appreciation of life as a gift, and to leave us with nothing to affirm or behold outside our own will."

Stock rejects this view. "We should not just accept but embrace the new technologies, because they're filled with promise," he says. 
Within a few years, he writes, "traditional reproduction may begin to seem antiquated, if not downright irresponsible." His projections, he asserts, are not at all out of touch with reality.

From checker at panix.com Wed May 18 22:50:40 2005
From: checker at panix.com (Premise Checker)
Date: Wed, 18 May 2005 18:50:40 -0400 (EDT)
Subject: [Paleopsych] Lon L. Fuller and the Enterprise of Law
Message-ID: 

Lon L. Fuller and the Enterprise of Law
http://www.capital.demon.co.uk/LA/legal/fuller.htm
by Barry Macleod-Cullinane
Legal Notes No. 22, 1995

[Lon Fuller is up at the very top of the list of American philosophers of law. His work argues for the intrinsically moral nature of law; he is a secular exponent of natural law. Here's a good summary and critique of his thought.

[Why Fuller's article, "The Case of the Speluncean Explorers," is in the bibliography but not cited in the text is a puzzle. According to Peter Suber, who added more opinions to the article, "the case tells the story of a group of spelunkers (cave-explorers) in the Commonwealth of Newgarth, trapped in a cave by a landslide. As they approach the point of starvation, they make radio contact with the rescue team. Engineers on the team estimate that the rescue will take another 10 days. The men describe their physical condition to physicians at the rescue camp and ask whether they can survive another 10 days without food. The physicians think this very unlikely. Then the spelunkers ask whether they could survive another 10 days if they killed and ate a member of their party. The physicians reluctantly answer that they would. Finally, the men ask whether they ought to hold a lottery to determine whom to kill and eat. No one at the rescue camp is willing to answer this question. The men turn off their radio, and some time later hold a lottery, kill the loser, and eat him. When they are rescued, they are prosecuted for murder, which in Newgarth carries a mandatory death penalty. Are they guilty? 
Should they be executed?

["Fuller wrote five Supreme Court opinions on the case which explore the facts from the perspectives of profoundly different legal principles. The result is a focused and concrete illustration of the range of Anglo-American legal philosophy at mid-century. My nine new opinions attempt to bring this picture up to date with our own more diverse and turbulent jurisprudence half a century later."

[I'm putting the original Fuller article beneath the discussion of Barry Macleod-Cullinane, and Suber's introduction below that. (Like too many of my best books, I loaned it out, and it did not come back.) I hope this all adds to your legal education.]

An occasional publication of:
The Libertarian Alliance
25 Chapter Chambers, Esterbrooke Street
London SW1P 4NN, England

Barry Macleod-Cullinane holds a BA in economics and an MA in Political Philosophy from York University. He is currently a Graduate Teaching Assistant in the Department of Politics at Hull University.

_________________________________________________________________

"The only formula that might be called a definition of law offered in these writings is by now thoroughly familiar: law is the enterprise of subjecting human conduct to the governance of rules. Unlike most modern theories of law, this view treats law as an activity and regards a legal system as the product of a sustained purposive effort." Lon L. Fuller - The Morality of Law[1]

I INTRODUCTION

At a time when legal positivism - the doctrine that law and morality must be separated - was riding high, there emerged an eloquent champion of natural law theory, albeit in a secularized form, whose distinctive and thoughtful arguments won applause even amidst the controversy he sparked. The legal philosophy of Lon L. Fuller (1902-1978) has largely gone unnoticed by those interested in the processes and institutional order of a market society - a fact I am seeking to remedy in the present Paper. 
However, this should not be taken as my final word on the subject; rather, it represents my first tentative examination of the richness and vitality of Fuller's thought. I hope that such inadequacies as may be found will serve to promote discussion and exploration of the issues raised by Fuller.

Fuller's The Morality of Law, first published in 1964, is his most famous and, perhaps, his most controversial work. At a time when legal positivism still dominated jurisprudence, the suggestion that law and morality were not only connected but connected intimately was such an affront to scientistic thinking that it brought repeated charges of "axe grinding" from one reviewer. "[A]s a theory of law, many readers will find what the author says unsatisfactory. He is obviously grinding an axe, and such emphasis inevitably distorts."[2] Interestingly enough, that reviewer, Robert S. Summers, has subsequently come to revise both his evaluation of Lon Fuller's writings and also substantially to shift his methodological outlook towards Fuller's position.[3]

But The Morality of Law did not begin the controversy: it was, by Fuller's own reckoning,[4] merely round four in a long-running dispute between himself and the English legal theorist, H. L. A. Hart. Moreover, a reading of the particular rounds, the papers and books published by Hart and Fuller, indicates that disagreement was not founded solely upon Hart's legal positivism and his insistence upon the separation of law and morals. Several other themes contributed to define what Fuller, in his later "Reply To Critics", characterised, after Hart, as "fundamental differences in our starting points"[5] and which seemed to preclude a coherent dialogue from emerging between him and the legal positivists. These "starting points" include the conception of law as a dynamic process of creation and discovery. In his continuing emphasis upon customary law, i.e. 
spontaneously evolved rules emerging through dispute arbitration and adjudication combined with the spread of superior ways of doing things through competition and imitation, as exemplified by the law Merchant,[6] Fuller entirely endorses the description by the Italian legal thinker, Bruno Leoni, that: Individuals make the law, insofar as they make successful claims. They not only make previsions and predictions, but try to have these predictions succeed by their own intervention in the process. Judges, jurisconsults, and, above all, legislators are just individuals who find themselves in a particular position to influence the whole process through their own intervention.[7] Indeed, there is an emphasis throughout The Morality of Law upon law not only as an enterprise, but as one which is most at home and compatible with the market order, which it itself mirrors. Building upon his examination of "the conditions that make a duty most understandable and most palatable to the man who owes it",[8] Fuller asks what form of society will most likely meet these conditions. When will observance of, and compliance with, moral and legal duties be most complete? Fuller relates that "the answer is a surprising one: in a society of economic traders".[9] His subsequent approval of F. A.
Hayek's close identification of the rule of law with the market order is further confirmed by an excellent presentation of the ideas of Soviet theorist Eugene Pashukanis, particularly his Commodity Theory of law, which Fuller suggests should be renamed "the Commodity Exchange Theory of Legal and Moral Duty"[10] and which serves to underscore the "somewhat startling conclusion that it is only under capitalism that the notion of the moral and legal duty can reach its full development".[11] In addition, the law serves a very important coordinative function by providing a chart against which an individual might orientate his plans and actions and rationally evaluate their (likely) interplay with those of (often anonymous) others.[12] Thus, "[i]n one aspect our whole legal system represents a complex of rules designed to rescue man from the blind play of chance and to put him safely on the road to purposeful and creative activity."[13] It is perhaps illuminating to note that Fuller's examples of this coordinative role are entirely drawn from the commercial sphere (the law of quasi contract, the law of contract, and tort law) and whose "acceptance today represents the fruit of a centuries-old struggle to reduce the role of the irrational in human affairs".[14] And, like Leoni,[15] Fuller is wary indeed of the legislative 'creation' of law as opposed to emergent, customary law. Partly this reflects his lengthy experience as a commercial arbitrator.[16] But it also stems from his distrust of what Hayek describes as the "constructivist rationalism"[17] that grounds legal positivist thinking. For "[t]he characteristic error of the constructivist rationalists ...
is that they tend to base their argument on what has been called the synoptic delusion, that is, on the fiction that all the relevant facts are known to some one mind, and that it is possible to construct from this knowledge of the particulars a desirable social order."[18] This concern is especially apparent in Fuller's caution to the legislative draftsman that the latter should not force upon the interpreting agent "senseless tasks".[19] For not only does the problem of interpretation reveal "the cooperative nature of the task of maintaining legality",[20] but it clearly demonstrates how the draftsman has to "be able to anticipate rational and relatively stable modes of interpretation"[21] in order to create meaningful laws. Echoing Hayek, Fuller explains that "[n]o single concentration of intelligence, insight, and good will, however strategically located, can insure the success of the enterprise of subjecting human conduct to the governance of rules."[22] If law-making is essentially a process of entrepreneurial discovery, then only a free market in law can be entirely compatible with the enterprise of law, of "subjecting human conduct to the governance of rules".[23] It is in this light that the present Paper is offered. I divide it into two main sections. First, I deal with the ideas particularly connected with The Morality of Law such as: the distinction between the moralities of duty and aspiration; the nature of law's internal morality; and Hart's criticisms of the foregoing as violating the legal positivist distinction between what is and what ought to be, i.e. the confusion of law and morals. The second section is an elaboration of Fuller's procedural natural law theory. Here I examine his contention that protection of rights can emerge from customary law, taking particular notice of the spontaneous evolution of the medieval law Merchant. 
I conclude by drawing out the implications of Fuller's ideas for contemporary thinking on the nature of law and society. _________________________________________________________________ II THE MORALITY OF LAW (A) The Two Moralities In opposition to B. F. Skinner's deterministic behaviourism, Fuller sees in man the capacity to alter his actions by conscious decision; the role of law being to enable his social orientation. Indeed, though "legal morality can be said to be neutral over a wide range of ethical issues"[1] Fuller argues that it cannot remain neutral to the nature of man himself since "the enterprise of subjecting human conduct to the governance of rules involves of necessity a commitment to the view that man is, or can become, a responsible agent, capable of understanding and following rules, and answerable for his defaults." [2] Guiding man's life are the moralities of duty and aspiration, whose nature and interplay define the role, aims, and limits of law. The morality of aspiration, the pursuit of virtue in Greek philosophy, represents "the morality of the Good Life, of excellence, of the fullest realisation of human powers." [3] In contrast, whereas ...the morality of aspiration starts at the top of human achievement, the morality of duty starts at the bottom. It lays down the basic rules without which an ordered society is impossible. ... It does not condemn men for failing to embrace opportunities for the fullest realisation of their powers. Instead, it condemns them for failing to respect the basic requirements of social living. [4] After Adam Smith, [5] Fuller describes how the rules of language mirror the morality of duty by specifying certain basic requirements for social communication. Rules to achieve excellence are notoriously vague for "the closer a man comes to the higher reaches of human achievement, the less competent are others to appraise his performance."
[6] Far from entailing society's dissolution, excellence can only be achieved within a social context; outside of which "none of us could aspire to anything much above a purely animal existence. One of the highest responsibilities [therefore] of the morality of aspiration is to preserve and enrich this social inheritance." [7] In evaluating the passion for "deep play", and whether such 'excessive' forms of gambling should be outlawed, Fuller observes that "there is no way by which the law can compel a man to live up to the excellences of which he is capable." [8] However, when certain activities threaten the social bond the law can look for guidance "to its blood cousin, the morality of duty".[9] A moral scale can be envisaged rising from the most basic conditions required for civil association to the very peaks of human possibilities. Because it is the line of division between duty and the pursuit of excellence that delimits the scope of law's 'compulsive domain',[10] this "invisible pointer" is shrouded by "the whole field of moral argument" as thinkers engage in "a great undeclared war over the location of this pointer". [11] Indeed, this is an excellent characterization of toleration. Those urging the pointer higher betray the intolerance of moral censors who, "[i]nstead of inviting us to join them in realising a pattern of life they consider worthy of human nature, try to bludgeon us into a belief we are duty bound to embrace this pattern." [12] Platonism has "needlessly complicated"[13] the locating of this pointer. By arguing that in order to know bad one has to grasp perfection, Platonism claims that "moral duties cannot be rationally discerned without first embracing a comprehensive morality of aspiration".[14] Yet, the legal impotence that Platonism would have expected from millennia of conflicting substantive moral theories has not materialised: "[t]he moral injunction 'thou shalt not kill' implies no picture of the perfect life.
It rests on the prosaic truth that if men kill one another off no conceivable morality of aspiration can be realised." [15] Similarly, Fuller continues, examples can be found throughout "the whole field of human purpose"[16] refuting the proposition that it is impossible to reject "what is plainly unjust without committing ourselves to declare with finality what perfect justice would be like".[17] In this, Fuller is profoundly located within Classical Liberalism's traditional emphasis upon the establishment of "a framework for utopia" (to steal Robert Nozick's beautiful phrasing) that adheres to a conception of 'negative liberty' as opposed to 'positive liberty'. [18] Composed from injunctions upon that range of actions hostile to the continued existence of the social commonwealth, the idea of negative liberty is succinctly conveyed in Herbert Spencer's famous 'Law of Equal Freedom'. Hence, that which we have to express in a precise way, is the liberty of each limited only by the like liberties of all. This we do by saying: Every man is free to do that which he wills, provided he infringes not the equal freedom of any other man. [19] This runs in sharp contrast to 'positive liberty' or 'entitlement' claims to welfare fulfilled at another's expense. Indeed, it is the very institutionalisation of class interests by the coercive, redistributive power of the State that endangers social peace. [20] Thus, the very anti-perfectionism of Classical Liberalism enables us "to create the conditions that will permit a man to lift himself upward. It is certainly better to do this than to try to pin him to the wall with a final articulation of his highest good." [21] But, attacking Fuller, Nolan insists that "this issue is far from prosaic". [22] For, "if we should not kill on the grounds that we destroy higher aspiration moralities some idea of what those moralities are would be needed other than the implied benefits of leaving people free (to exchange goods and services)."
[23] Not only is this infused with Platonism but it betrays a profound naivete concerning the alternatives of social organisation that is dangerously irresponsible. As Ludwig von Mises explains, only two choices (capitalism or socialism) ultimately exist. And, "[i]f one further realises that socialism [too] is unworkable, then one cannot avoid acknowledging that capitalism is the only feasible system of social organisation based on the division of labour. ... A return to the Middle Ages is out of the question if one is not prepared to reduce the population to a tenth or a twentieth part of its present number and, even further, to oblige every individual to be satisfied with a modicum so small as to be beyond the imagination of modern man." [24] Surely the reality implied by the severing of the social bonds overcomes Nolan's objection: for, in the extended order, there is a morality of aspiration achieved by "leaving people free (to exchange goods and services)". [25] _________________________________________________________________ (B) The Inner Morality of Law To illustrate "the problems of institutional design",[26] and to derive the requirement for the "Inner Morality of law", Fuller marshals his "mythopoetic powers"[27] in "The Problem of the Grudge Informer".[28] The problem is not that a legal system (of sorts) did not previously exist, but that what did exist not only sanctioned but actually encouraged individuals to settle old scores through the coercive power of the State. With the dictatorship gone the demand that such "grudge informers" be punished emerges. [29] It falls to the reader, newly elected as Justice Minister, to resolve the problem. Whilst such acts were not illegal under the old regime, current public sentiment is such that individuals so accused are likely to be lynched. [30] Fuller dampens the constructivist temptation with the sobering tale of King Rex and the eight ways in which he failed to make law.
Of the routes to failure: The first and most obvious lies in a failure to achieve rules at all, so that every issue must be decided on an ad hoc basis. The other routes are: (2) a failure to publicise, or at least to make available to the affected party, the rules he is expected to observe; (3) the abuse of retroactive legislation, which cannot itself guide action, but undercuts the integrity of rules prospective in effect, since it puts them under the threat of retrospective change; (4) a failure to make rules understandable; (5) the enactment of contradictory rules or (6) rules that require conduct beyond the powers of the affected party; (7) introducing such frequent changes in the rules that the subject cannot orient his action by them; and, finally, (8) a failure to achieve congruence between the rules as announced and their actual administration. [31] Rather than adopt a substantive natural law approach, postulating a higher law that sanctions the legislative function of the State (as did some German legal positivists like Gustav Radbruch[32]), Fuller counsels in favour of a procedural natural law approach. Taking up his eight ways to fail to make law Fuller explains that they are mirrored by eight "desiderata" or "eight kinds of legal excellence toward which a system of rules may strive"[33] and embodied in the Inner Morality of law. The IML is essentially a morality of aspiration: whilst the law should be promulgated, most of the demands for legal excellence cannot easily be expressed as duties, thereby condemning the Inner Morality of law "to remain largely a morality of aspiration and not of duty. Its primary appeal must be to a sense of trusteeship and to the pride of the craftsman." [34] The other major feature of the Inner Morality of law is that, apart from the pursuit of such legal excellence, it is not concerned with any substantive ends.
Behind the rule of law stands an approval originating in what Albert Jay Nock aptly called 'social power' in contrast to 'state power', [35] one which characterizes the social contractarian aspect of Leveller, Lockean, and American Revolutionary thought[36] cited approvingly by Fuller. It is for the external morality of law, anchored in this basic social morality characterising 'social power', to define those substantive ends (i.e., concerns for justice) to be sought through the law. But as has already been intimated, this urge is constrained by the enterprise of law itself: too many regulations would undermine the law. [37] For, as will be shown, there is a disproportionate amount of coercion involved in implementing State legislation, as compared to judgements arising from customary law which relies upon voluntary support. Moreover, the very desiderata comprising the Inner Morality of law can, and must, vary with external circumstances. Fuller's account of how retroactive laws may be justified (to resolve a failure in another desideratum, for instance) is a case in point. But, he concludes, referring back to the division of labour in the extended order: "to know how, under what circumstances, and in what balance these things should be achieved is no less an undertaking than being a lawgiver." [38] In the case of the "grudge informer" retroactive legislation can be justified in the attempt to restore respect for the law. Not only is this a swifter and surer approach than attempting to interpret Nazi statutes as to their legality at the time, or to argue that their very vagueness compels the rejection of such legislation, it also counters the unstated system of rules, imposed by the Party's terror in the streets and pressure on the judiciary, that existed without formal legislative enactment. It is useful at this point, however, to recall the arguments of the Marxist legal theorist, Pashukanis.
Pashukanis suggested that there is just as much economic calculation, or evaluation of the consequences accruing from an action, in the moral sphere as there is in the economic sphere. [39] Thus, in bourgeois criminal law we find a table of crimes with a schedule of appropriate punishments or expiations - a kind of price list for misbehaviour. ... The legal subject is thus the legal counterpart of the economic trader. [40] Clearly, with no "price list for misbehaviour", it might naturally be assumed that no wrong was committed by the "grudge informers." But the heart of law is not dependent upon subjective calculations, i.e. the economics of marginal utility or its moral equivalent, the morality of aspiration. The conception of acting man re-ordering his activities in accordance with his constant creation and revision of plans is incompatible with the requirements of the morality of duty sustaining society. For, with the continual change of opportunity costs, i.e. the imagined value of the most desirable alternative foregone at the moment of choice, 'laws' facilitating the settling of grudges will radically influence individual choice and, therefore, action. Marginal utility theory is positively hostile to the social bond in this respect. Thus it remains the province of the morality of duty, or the economics of exchange, to defend the 'basic ground rules' required for social cooperation based upon voluntarily assumed, reciprocal relationships. In this manner it is apparent that even with no legislative proscriptions certain acts will still violate the most basic social duties and, hence, be subject to legal punishment. It is because the IML is anchored in the maintenance of civil association that such "grudge informers" can be viewed as having violated the minimum duties required for social living, regardless of the condition of positive law, and be rightfully made subject to legal and social sanctions.
There is, in this, a remarkable similarity to the natural law thesis of a higher law. But Fuller's thesis is far more modest. Arising from his conception of acting man and the necessity of social life are the very institutions of individual rights, property, and, above all, the rule of law that substantive natural law theorists set down in theocratic tablets of stone. (C) The Positivist Quest and Hart's Review I turn now to deal with the issues raised by legal positivism. [41] In his The Law in Quest of Itself[42] Fuller relates how legal positivists have attempted to divorce law and morality. Following the strides made by the natural sciences, these thinkers took up what they perceived - mistakenly - to be the mantle of scientific method. [43] Legal positivism's basic tenets will first be presented as a foundation upon which the subsequent critical appraisal of Hart's Review of The Morality of Law can be set. I have mostly neglected the 1958 debate in the Harvard Law Review in this context since the key issues are drawn out in Hart's "Review"[44] and Fuller's "Reply to Critics". Two ideas mark the positivist project: (1) a belief in "a general criterion by which the law could be identified and differentiated"; and (2) an attempt to draw a "sharp line between law and non-law (especially morals)". [45] Ever since the Power-grounded theories of Hobbes, legal positivism has attempted to ground the identification of law in some master test. To do otherwise would be to slide towards accepting the proposition that 'general acceptance' is the hallmark of law, unpalatable for legal positivists since this would also include moral concepts. Hart's "Review" roughly consists of a restatement of these two basic themes.
First, there is his "rule of recognition", an attempt to establish a general criterion to separate law from non-law and particularly aimed at Fuller's wide conception of law that, "admittedly and unashamedly, includes the rules of clubs, churches, schools 'and a hundred and one other forms of human association'." [46] Secondly, Hart argues that it is impossible to speak of an "Internal Morality of law": to do so would be to speak of the "internal morality of poisoning" that stresses the basic requirements and challenges to the excellence of (efficient) poisoning. Hart begins his attack upon the IML with a remark that the eight desiderata "are compared by the author to principles (he says, 'natural laws') of carpentry. They are independent of the law's substantive aims just as the principles of carpentry are independent of whether the carpenter is making hospital beds or torturers' racks." [47] But Fuller's "insistence on classifying these principles of legality as a 'morality' is a source of confusion both for him and his readers"[48] serving to perpetuate "a confusion between two notions that it is vital to hold apart: the notions of purposive activity and morality." [49] Hart then turns to consider how similar principles might not also inform the poisoner's craft, concluding that this would blur efficiency related to a purpose with moral judgements about the activities and purposes themselves. The "grotesque results" of such a confusion stem from Fuller's belief that the purpose of subjecting human conduct to the governance of rules represents "some ideal development of human capacities which is taken to be the ultimate value in the conduct of life",[50] i.e. that it represents a morality of aspiration. I have already defended, against Nolan's objections, the Inner Morality of law in this light. But, to extend Hart's analogy of the poisoner, might not the legislator also use laws, enacted in clear observance of all the desiderata, to terrorize the populace?
In reply, Fuller looks to Soviet Russia and the making of "economic crimes" punishable by death. Asking whether it is inefficient to pass retroactive legislation in this manner, Fuller explains that even Soviet lawyers viewed such actions as undermining confidence in law. Thus, a theory of social disharmony, generated in the trade-off between short-run efficiency and the longer term existence of law, pushes Hart and the legal positivists "across the boundary they have so painstakingly set up to distinguish morality from efficiency ...[and] they might have saved themselves a good deal of trouble by simply talking about morality in the first place." [51] The other feature of Hart's "Review" is his intention to establish a "rule of recognition": "In my book [The Concept of Law] I insisted that behind every legislative authority (even the supreme legislature of a legal system) there must be rules specifying the identity and qualification of the legislators and what they must do in order to make laws. ... I used the expression 'the rule of recognition' in expounding my version of the common theory that a municipal legal system is a structure of 'open-textured' rules which has at its foundations a rule which is legally ultimate in the sense that it provides a set of criteria by which in the last resort the validity of subordinate rules of the system is assured. This rule is not to be characterized as either legally valid or invalid - though it may be the subject of moral criticism, historical or sociological explanation, and other forms of inquiry." [52] In reading Hart's discussion of the "rule of recognition" (though sometimes "rules of recognition") there is an apparent gap that singularly torpedoes his entire position.
He traces day-to-day legal activities back to, and finds their validity in, a general rule which itself conforms to the criteria for legality of the subordinate laws[53] but, as he admits in The Concept of Law, [54] "is also unlike them in that there is no rule providing criteria for the assessment of its own legal validity." [55] Moreover, this "legally ultimate rule differs both from the ordinary subordinate rules of the legal system and from ordinary social conventions or customs." Now, whilst the reason for adopting a 'rule of recognition' is to avoid[56] what Hart calls the "gunman situation writ large"[57] his account descends precisely into this abyss. At no point does he overcome Fuller's characterization of legal authority in his scheme as "Who's Boss around here anyway?" The entire tradition that government is sanctioned by the governed is lost on Hart, with merely an identification of legal authority with Power. Yet, for a 'rule of recognition' to be a useful tool, some way of deriving the legitimacy of a 'legal authority' is essential, and this account is absent in Hart. In contrast, Fuller's notion of law anchored in custom and morality is far more plausible, especially at times when State law collapses. [58] In fact, as Benson suggests, there is in Fuller's view an "implicit constitution" which "emanates from reciprocity, as does the recognition of duty (and, therefore, law) in general." [59] _________________________________________________________________ III CUSTOMARY LAW AND THE LAW MERCHANT: ELABORATING FULLER'S PROCEDURAL NATURAL LAW THEORY The previous section argued that Hart's conception of law is inadequate either to explain or to define the enterprise of law. Indeed, his emphasis upon the ability to punish as the defining feature of a legal system is humbled by the lowly tennis club's capacity to fine, suspend or expel members non-compliant with its rules. The identification of law and Power is thus misguided and obstructive.
Moreover, because law is an enterprise it must necessarily be concerned about what ought to be as well as about what is. Typically, when individuals in dispute ask their lawyers 'what is the law?' they look not to the present but to the future; they want to know how the legal confusion will be resolved. In this endeavour it is clear that law-making must be animated by moral concerns, especially the challenge to excellence. But it should be stressed that: Every step and every movement of the multitude, even in what are termed enlightened ages, are made with equal blindness to the future; and nations stumble upon establishments, which are indeed the result of human action, but not the execution of any human design. [1] If, as Menger observed, even law can emerge without conscious design,[2] then a procedural natural law theory, relying merely upon self-interest, might be shown (both historically and theoretically) to recognise and protect rights. (A) The Medieval Law Merchant[3] Emerging during the eleventh and twelfth centuries from the self-interested concerns of merchants, the law Merchant represents what Fuller calls a "language of interaction"[4] facilitating trade. The Roman Empire's collapse had been followed by a near extinction of commercial activities, only being revived by an expansion of agricultural productivity, which freed labour to engage in industrial manufacturing in the newly emergent towns. This synergistic relationship, between the revival of commerce and the improvement in agriculture, marks a greater level of specialisation and thus increasing economic activity. Trade, specialisation, and the emergence of money, the most marketable commodity, by substituting indirect exchange for barter, increase productivity and individual welfare. Similarly in law, with the fracturing of the Roman Empire there existed no uniform body of law; towns, regions, and countries all had different legal systems.
Indeed, within any town there was likely to be a profusion of legal jurisdictions: ecclesiastical, manorial, or civil. With growing returns to be made from commerce crossing different legal systems, a way of resolving disputes became increasingly valuable. (B) Customary Law Merchants are, of necessity, engaged in purposeful activity orientated towards others. As Hayek observes, business practices generate, over time, expectations in the minds of others; expectations "reasonably formed because they corresponded to the practices on which the everyday conduct of the members of the group was based." [5] Furthermore, The significance of customs here is that they give rise to expectations that guide people's actions, and what will be regarded as binding will therefore be those practices that everybody counts on being observed and which thereby condition the success of most activities. [6] Because customary law is rooted in "the existence of 'social mores' defining rules of conduct"[7] it is necessarily associated with the basic rules establishing civil association, i.e. it is dependent upon the morality of duty. Three conditions, Fuller suggests, underpin duty. "First, the relationship of reciprocity out of which the duty arises must result from a voluntary agreement between the parties affected; they themselves 'create' the duty. Second, the reciprocal performances must in some sense be equal in value."[8] Because it is nonsensical to consider equality as exact identity, some common unit of measurement, into which differences can be subsumed, is necessary. And, "[t]hird, the relationships within the society must be sufficiently fluid so that the same duty you owe me today, I may owe you tomorrow; in other words, the relationship of duty must in theory and in practice be reversible."
[9] Based upon voluntary trade, a common unit of comparison (money), and the changing roles of individuals (as buyers and sellers), the market order is that form of social organisation that best accords with the morality of duty. For only in "a society of traders" can Fuller's conception of duty, based upon reciprocity, be realised. (C) The Evolution of Customary Law The "dirty feet" characterisation of the law Merchant conveys the pressing urgency for resolving disputes and differences in business practices faced by merchants anxious to move on to their next market. Also present is their reluctance to use local laws: royal courts typically considered void contracts specifying interest, whilst few local legal authorities were technically competent in the area of commercial practice. [10] A new procedure or ruling, satisfying both parties, was likely to be copied and adopted by other merchants, both in their initial contracts and during arbitration. As the circumstances of trade altered, so new rules displaced older and less efficient practices. Gradually, there evolved a universal body of rules, the law Merchant, which "in turn was a prerequisite for the rapid development of trade",[11] and which, importantly, was not based upon compulsion by the State (or private individuals). Fuller notes that, as opposed to the hierarchical or "vertical" order imposed by State law, customary law and the law Merchant are examples of "horizontal forms of order".[12] Customary law is superior not only to statute but also to judge-made law. Whereas a bad statute or, less commonly, bad judicial decision affects the entire legal system, through enforcement and precedent, customary legal systems quickly abandon bad, unjust, or inefficient rules and limit damage to only a portion of the extended order.
[13] Benson finds that, with respect to the law Merchant, a number of legal innovations were adopted "because they promoted speed and informality in commerce and reduced transactions costs"[14] whereas "royal law, such as the Common law in England, was developing during this same period, and while supporters of the common law take pride in its rationality and progressiveness, the fact is that this state-produced law as enforced by the king's courts simply did not adapt and change as fast as the rapidly changing system required." [15] Precisely this point is emphasised by Fuller: From the standpoint of the internal morality of law, for example, it is desirable that laws remain stable through time. But it is obvious that changes in circumstances, or changes in men's consciences, may demand changes in the substantive aims of law, and sometimes disturbingly frequent changes. [16] (D) Consent - The Foundation of Customary Law At the heart of any system of customary law lies the voluntary support of all participants. Unlike State law, marked by the coercive subjugation of citizens beneath leviathan, customary law is a cooperative enterprise. Profitable relationships are jeopardized by the adversarial nature of State courts[17] and their delays. Thus, speedy resolution, under a mutually agreed upon arbitrator, is attractive even to merchants who had not originally stipulated resort to arbitration in their contracts. The conceptual choice of conflict or conversation still holds in this instance. By avoiding arbitration (conversation) a recalcitrant merchant thereby embraces either the hostility engendered by State courts or the violation of contractual agreements, i.e. property rights, which, in the case of agricultural products, with high perishability, would be tantamount to theft or wanton destruction. [18] Since both parties agreed to arbitration both voluntarily accept the arbitrator's resolution.
Damages for rights violations, in the form of torts, would take the form of economic restitution. Benson explains that: these courts' decisions were accepted by winners and losers alike because they were backed by the threat of ostracism by the merchant community at large - a very effective boycott sanction. A merchant who broke an agreement or refused to accept a court ruling would not be a merchant for long because his fellow merchants ultimately controlled his goods. The threat of a boycott of all future trade "proved, if anything, more effective than physical coercion".[19] (E) Customary Law as Procedural Natural Law Emerging out of the exigencies of trade, the Law Merchant reflected the concerns of the merchants themselves. Private property and the sanctity of contracts were both recognised by participatory merchant courts. Moreover, an element of fairness coloured the Law Merchant both in holding that "[f]raud, duress or other abuses of the will or knowledge of either party in an exchange"[20] invalidated contracts whilst, as Berman notes, "even an exchange which is entered into willingly and knowingly must not impose on either side costs that are excessively disproportionate to the benefits to be obtained; nor may such exchange be unduly disadvantageous to third parties or society generally."[21] Indeed, Fuller notes, this principle of fairness in exchange is bound up in the very idea of reciprocity. [22] Few, if any, would knowingly enter a trade or "would voluntarily recognize a legal system that was not expected to treat him fairly".[23] At the start of this section it was suggested that Fuller's procedural natural law theory would give rise to exactly those institutions characterizing substantive natural law. As I have attempted to demonstrate, the Law Merchant, a spontaneously evolved legal system, bears out Fuller's thesis. 
Confirming this is Benson's contention that customary legal systems (such as the Law Merchant) generally exhibit the following features: (1) a predominant concern for individual rights and private property; (2) laws enforced by the victims, backed by reciprocal agreements; (3) standard adjudicative procedures established to avoid violence; (4) offences treated as torts punishable by economic restitution; (5) strong incentives for the guilty to yield to prescribed punishment due to the threat of social ostracism; and (6) legal change via an evolutionary process of developing customs and norms. [24] IV Conclusion Fuller closes The Morality of Law by reaffirming the need to open communication between men; without such spreading of knowledge the extended order cannot function and man's existence will, ultimately, be threatened. This laissez-faire vision of international free trade arises because the morality of aspiration offers more than good counsel and the challenge of excellence. It here speaks with the imperious voice we are accustomed to hear from the morality of duty. And if men will listen, that voice, unlike that of the morality of duty, can be heard across the boundaries and through the barriers that now separate men from one another. [1] The current public law crisis,[2] characterised by escalating crime rates and soaring State spending on 'law and order', has its very roots in "[t]he criminal justice system's neglect of crime victims. ... Crime victims must be involved in the pursuit and prosecution of criminals if a system of law and order is to be effective."[3] With no recourse to restitution, victims are additionally burdened by the costs of detection, apprehension, trial, and detention of criminals. Piled upon this weakened structure are ever more burdensome statutes, regulations, and informal rules expanding the sphere of State action. 
As Gustave de Molinari predicted in 1849, State provision of law and order threatens the cooperative venture of society: [Y]ou forthwith see open up a large profession dedicated to arbitrariness and bad management. Justice becomes slow and costly, the police vexatious, individual liberty is no longer respected, the price of security is abusively inflated and inequitably apportioned, according to the power and influence of this or that class of consumers. ... In a word, all the abuses inherent in monopoly or in communism crop up. [4] Fuller's teachings offer an alternative account of the origin and function of law: his close identification of law with the market, a theme Benson explores in depth, suggests that the enterprise of law sits uncomfortably with legislation. But rather than condemn man to a Hobbesian war of each against all, Fuller's emphasis is upon often informal, customary legal systems to coordinate human interaction. The creation of law through a procedure of entrepreneurial discovery and competition, as I have shown, will recognise individual rights and preserve the voluntary social order. One idea, which Fuller insufficiently emphasised, now forms the basis of the burgeoning law and economics movement. Scarcity and competing claims generate economic incentives to establish property rights rather than resort to violence. [5] With technological solutions like barbed wire fences on the treeless prairies, or the creation of new legal rules, individuals beyond the reach of established governments still endeavoured to subject their conduct to the governance of rules. As the breakdown in State law becomes more apparent, so the customary legal systems typically characterising the open frontier[6] will more and more become the default of settled societies. [7] I will close this account with a statement from Harold J. 
Berman, a colleague of Fuller's, reflecting much of the richness of Lon Fuller's thought that I have sought to portray here: The conventional concept of law as a body of rules derived from the statutes and court decisions - reflecting a theory of the ultimate source of law in the will of the lawmaker ('the state') - is wholly inadequate to support a study of a transnational legal culture. To speak of the Western legal tradition is to postulate a concept of law, not as a body of rules, but as a process, an enterprise, in which rules have meaning only in the context of institutions and procedures, values, and ways of thought. [8] _________________________________________________________________ NOTES I would like to thank the following individuals for stimulating my original interest not only in law but in the issue of competitive legal structures: Dale Nance, Mario Rizzo, Randy Barnett, Ejan Mackaay, Kurt Leube, and Leonard Liggio, and other faculty and participants on various summer seminar programs run by the Institute for Humane Studies and the Cato Institute. Special thanks must go to Bruce L. Benson, whom I had the good fortune to meet at Aix-en-Provence in 1990 and who subsequently provided a number of papers and manuscripts relating to the subject matter herein. More importantly, Professor Benson's emphasis upon the work of Lon L. Fuller led me to the latter's scholarship. Lastly, I would like to thank Mr. Ian Gregory of the University of York for encouraging me to write upon Lon Fuller and the enterprise of law. Responsibility for any errors remains mine alone. I INTRODUCTION 1. Lon L. Fuller, The Morality of Law, (New Haven and London: Yale University Press, 1969; [1964]): p. 106. 2. Robert S. Summers, "Professor Fuller on Morality and Law", 18 Journal of Legal Education 1 (1966); reprinted in More Essays in Legal Philosophy: General Assessments of Legal Philosophies, selected and edited by Robert S. Summers, (Oxford: Basil Blackwell, 1971): pp.101-130, p. 
117. 3. See Robert S. Summers, Lon L. Fuller, (London: Edward Arnold (Publishers) Ltd., 1984): p. vii, where Summers notes that his earlier published recantation had pleased Fuller, appearing as it did shortly before the latter's death. 4. See "A Reply To Critics" (in The Morality of Law, pp.187-242, p.188) where Fuller outlines what he sees as the crucial rounds of that battle. Yet, even there, Fuller intimates that Hart's Holmes lecture (delivered at Harvard Law School in April 1957, and published as "Positivism and the Separation of Law and Morals", 71 Harvard Law Review 593-629 (1958); reprinted in Joel Feinberg and Hyman Gross, ed., Philosophy of Law, Fourth Edition, (Belmont, Calif.: Wadsworth Publishing Company, 1991; [1975]): pp.63-81), which he regards as 'round one', was actually grounded in an attempt "to defend legal positivism against criticisms made by myself and others". 5. From H. L. A. Hart's "Review of The Morality of Law", Harvard Law Review, Vol.78, (1965), pp.1281-1296: pp.1295-1296. 6. The best general introduction to the literature on the Law Merchant is Bruce L. Benson's "The Spontaneous Evolution of Commercial Law", Southern Economic Journal 55 (Jan 1989): pp. 644-661. Importantly, Benson adopts an approach that builds upon Fuller's thought. 7. Bruno Leoni, "The Law as Individual Claim", developed from lectures given at the Freedom School Phrontistery in Colorado Springs, Colorado, December 2-6, 1963; reprinted in Freedom and the Law, expanded 3rd edition, (Indianapolis: Liberty Fund, 1991; [1961]), pp.189-203: p.202. See also Tom G. Palmer and Leonard P. Liggio, "Freedom and the Law: A Comment on Professor Aranson's Article", Harvard Journal of Law and Public Policy, Vol.11, No.3, pp.713-725. 8. Fuller, The Morality of Law, pp.23-24. 9. Ibid, p.24. 10. Ibid, pp.25-26. See ibid, note 19, Chapter 1, pp.24-25 for references to Pashukanis's ideas. 
Nolan's argument appears to flounder when he considers Fuller's discussion of Pashukanis: "He is thus forced into the bizarre position where he can quote with apparent approval the Soviet theorist Eugene Pashukanis." (Nolan, Is Law As It Ought To Be?: Or Can We Make Any Sense of Lon L. Fuller's Concept of The Internal Morality of Law?, (unpublished manuscript, circulated to the Political Theory Workshop, University of York, 16/03/93, 7.30pm): p.20.) I argue, below, that it is because Nolan is not wholly at home in market process economic theory that he misrepresents Fuller and collapses his own paper into confusion, ibid, pp.20-21. 11. Fuller, The Morality of Law, p.24. 12. In this respect, of law and institutions like private property, contract and money serving as 'orientation maps' for human activity (in the market process), see Richard M. Ebeling's discussion of the ideas of Alfred Schutz, "Cooperation In Anonymity", Critical Review, Fall 1987, pp. 50-61. 13. Fuller, The Morality of Law, p.9. It should be noted that Fuller's analysis is in sharp contrast to Karl Marx's view of alienation expressed in the latter's Paris Manuscripts, Economic and Philosophical Manuscripts of 1844. In contradistinction to Marx, Fuller quotes (pp.26-27), at length, the economist Philip Wicksteed upon how the market order, of the division of labour, draws individuals together. 14. Ibid, p.9. 15. Leoni, Freedom and the Law, esp. chap. 5, "Freedom and Legislation". 16. See Robert S. Summers, Lon L. Fuller, chap. 1, pp.1-15, esp. pp.7-9, for a brief overview of Fuller's life. 17. F. A. Hayek, Law, Legislation and Liberty, Vol.1, "Rules and Order", (London and Henley: Routledge and Kegan Paul, 1973): esp. chap. 1, "Reason and Evolution", pp.8-34. 18. Ibid, p.14. 19. Fuller, The Morality of Law, p.91. 20. Ibid, p.91. 21. Ibid, p.91. 22. Ibid, p.91. 23. Ibid, p.106. See also Tom W. Bell, "Polycentric Law", Humane Studies Review, Vol.7, No.1, (Winter, 1991/92): pp. 1-10, and Bruce L. 
Benson, The Enterprise of Law: Justice Without the State, (San Francisco: Pacific Research Institute, 1990). II THE MORALITY OF LAW (A) The Two Moralities 1. Fuller, The Morality of Law, p.162. 2. Ibid, p.162. 3. Ibid, p.5. 4. Ibid, pp.5-6. 5. See Adam Smith, The Theory of Moral Sentiments, 1, 442; cited by Fuller, ibid, p.6. 6. Fuller, The Morality of Law, p.30. 7. Ibid, p.13. 8. Ibid, p.9. 9. Ibid, p.9. 10. I'm not entirely happy with this characterization but feel that this best expresses the social requirement to observe law and to what extent dereliction or fault may attract legal censure. Perhaps Ronald Dworkin's phrase "Law's Empire" would be more appropriate to the idea that I'm trying to present here. 11. Fuller, The Morality of Law, pp.9-10, p.10. 12. Ibid, p.10. 13. Ibid, p.10. 14. Ibid, p.11. 15. Ibid, p.11. 16. Ibid, p.11. 17. Ibid, p.12. This point is also forcefully made by Professor Kurt Leube in his lecture Justice, Rule of Law and Legal Positivism, given at the Université d'Été, Aix-en-Provence, France, August 1991. 18. An excellent discussion of this distinction is to be found in Loren Lomasky's Persons, Rights and the Moral Community, (New York and Oxford: Oxford University Press, 1987): esp. chap. 5, pp.84-110. 19. Herbert Spencer, The Principles of Ethics, (1892-93; reprinted, with an Introduction by Tibor R. Machan, Indianapolis: Liberty Press, 1978): Vol.II, pp. 61-62; see also the discussion of Spencer in Ralph Raico, Classical Liberalism in the Twentieth Century, (Fairfax, Virginia: Institute for Humane Studies, 1986/7). 20. "But it is well-nigh impossible to preserve lasting peace in a society in which the rights and duties of the respective classes are different. Whoever denies rights to a part of the population must always be prepared for a united attack by the disenfranchised on the privileged. Class privileges must disappear so that the conflict over them may cease." Ludwig von Mises, The Free and Prosperous Commonwealth, (Princeton: D. 
van Nostrand, 1962), p.28; cited by William Baumgarth, "Ludwig von Mises and the Justification of the Liberal Order" in Laurence S. Moss ed., The Economics of Ludwig von Mises: Towards a Critical Reappraisal, (Kansas City: Sheed and Ward Inc., Institute for Humane Studies, 1976): pp.79-99, pp. 90-91. See also Mises's "The Clash of Group Interests", reprinted in Richard M. Ebeling ed., Money, Method, and the Market Process: Essays by Ludwig von Mises, (Norwell, Massachusetts: Kluwer Academic Publishers, Praxeology Press of the Ludwig von Mises Institute, 1990): pp.202-214. 21. Fuller, The Morality of Law, p.12. 22. Nolan, Is Law As It Ought To Be?, p.21. 23. Ibid, p.21. 24. Mises, The Free and Prosperous Commonwealth, pp. 85-86; cited by Baumgarth, "Ludwig von Mises and the Justification of the Liberal Order", pp.96-97. See also the economic arguments supporting this proposition in F. A. Hayek, "The Use of Knowledge in Society", (Menlo Park, California: Institute for Humane Studies, 1977; [revised and reprinted from The American Economic Review, Vol.35, No.4, Sept. 1945]); and his The Fatal Conceit: The Errors of Socialism, Vol.1 of W. W. Bartley III ed., The Collected Works of F. A. Hayek, (Chicago: University of Chicago Press, 1988). See also Richard Ebeling's "Introduction" to Money, Method, and the Market Process: Essays by Ludwig von Mises, for references to Mises's arguments in this connection. These economic arguments are also taken up by Ayn Rand in her essay "The Anti-Industrial Revolution", printed in her The New Left: The Anti-Industrial Revolution, (New York: New American Library, Inc., 1975; [1971]): pp.127-151. 25. Nolan, Is Law As It Ought To Be?, p.21. (B) The Internal Morality of Law 26. Emphasised by Hart in concluding his "Review", p.1295. 27. Ibid, p.1295. 28. Fuller, The Morality of Law, pp.245-253. 29. 
Driving the moral urgency of Fuller's fictitious example was the very real problem faced in the wake of the overthrow of the Nazi regime at the end of World War II. Today, similar demands are being voiced across Eastern Europe for the "stooges" of the secret police forces of the, now deposed, Communist ruling elites to be brought to trial. 30. The idea of lynching, or at least popular justice, seems overlooked as a solution to some extent by Fuller. Yet, as he explains, [i]f we accept the view that the central purpose of law is to furnish baselines for human interaction, it then becomes apparent why the existence of enacted law as an effectively functioning system depends upon the establishment of stable interactional expectancies between lawgiver and subject. On the one hand, the lawgiver must be able to anticipate that the citizenry as a whole will accept as law and generally observe the body of rules he has promulgated. On the other hand, the legal subject must be able to anticipate that government will itself abide by its own declared rules. ... A gross failure in the realisation of either of these anticipations - of government toward citizens and of citizens toward government - can have the result that the most carefully drafted code will fail to become a functioning system of law. (Fuller, The Principles of Social Order: Selected Essays of Lon L. Fuller, edited with an introduction by Kenneth I. Winston, (Durham, N. C.: Duke University Press, 1981): pp.235-236; cited by Benson, The Enterprise of Law, pp. 320-321.) (emphasis added by Benson.) Now, with the collapse of the regimes in Eastern Europe, apart from their massively dislocated economies, their main problem is their perverted legal systems. A sufficient breakdown "must - if we are to judge the matter with any rationality at all - release men from those duties that had as their only reason for being, maintaining a pattern of social interaction that has now been destroyed." 
(Fuller, The Morality of Law, p.22.) It is in this context that a review of the role of vigilance committees in similar historical situations is most useful. With the collapse of public law, Benson, in "Violence and Vigilante Justice in the American West" (Appendix to Chap. 12, "The Legal Monopoly on Coercion", The Enterprise of Law, pp.312-321.), documents how much was achieved by widely supported vigilante committees that were remarkable in their restraint and respect for due process and other procedural concerns. Animating them was the desire to preserve the social bond by enforcing minimum conceptions of social duty (in particular they expelled from California individuals convicted of crimes elsewhere and convicts from Australia) and by striking at political corruption. 31. Fuller, The Morality of Law, p.39. 32. See his excellent "Five Minutes of Legal Philosophy", translated by Stanley L. Paulson; reprinted in Feinberg and Gross, Philosophy of Law, pp. 103-104; (published originally in Rhein-Neckar-Zeitung, 12/8/45; republished in the 8th edition of Radbruch's Rechtsphilosophie, ed. by Erik Wolf and Hans-Peter Schneider (Stuttgart: K. F. Koehler Verlag, 1973) pp. 327-29). Other references to Radbruch's ideas can be found in Fuller's "Positivism and Fidelity to Law - A Reply to Professor Hart", 71 Harvard Law Review 630-72 (1958); reprinted in Feinberg and Gross, Philosophy of Law, pp.81-102, in notes 19 and 25, p.102. In his article Fuller also notes that all three (Radbruch, Hart, and himself) favoured retrospective legislation to solve the "Problem of the Grudge Informer", but for necessarily different reasons. 33. Fuller, The Morality of Law, p.41. 34. Ibid, p.43. 35. Albert Jay Nock, "The State", the Freeman, June 13 and June 20, 1923, (reprinted in Charles H. Hamilton ed., The State of the Union: Essays in Social Criticism by Albert Jay Nock, (Indianapolis: Liberty Press, 1991): pp.222-9). 36. 
See my paper, The Right to Revolution: Toleration, Liberty and the State in the Thought of John Locke and the Early Liberals, Libertarian Heritage No.11, Libertarian Alliance, London, 1994, for an extensive elaboration of this theme. 37. For, "[i]f we view the law as providing guideposts for human interaction, we shall be able to see that any infringement of the demands of legality tends to undermine men's confidence in, and their respect for, law generally." (Fuller, The Morality of Law, p.222.) 38. Ibid, p.94. 39. Summarised by Fuller, ibid. See note 10, "Introduction", above for references. 40. Ibid, p.25. (C) The Positivist Quest and Hart's Review 41. This section draws upon Summers' excellent discussion in "The Differentiation of Law from Non-law", chap. 4, Lon L. Fuller, pp. 42-61. 42. Lon L. Fuller, The Law in Quest of Itself, (Evanston, Ill.: Northwestern University Press, 1940; [Boston, Massachusetts: Beacon Press, 1966]); cited by Summers, ibid, p.42. 43. F. A. Hayek, The Counter-Revolution of Science: Studies in the Abuse of Reason, (Indianapolis: Liberty Press, 1979; [Glencoe, Ill.: The Free Press, 1952]). 44. Hart, "Review", pp.1281-1296. 45. Summers, Lon L. Fuller, p.42. 46. Hart, "Review", p.1281. 47. Ibid, p.1284. 48. Ibid, p.1285. 49. Ibid, p.1286. 50. Ibid, p.1287. 51. Fuller, The Morality of Law, p.203. 52. Hart, "Review", pp.1292-93. 53. Ibid, pp.1292-93. 54. H. L. A. Hart, The Concept of Law, (Oxford: Oxford University Press, 1961). 55. Ibid; excerpted and reprinted in Feinberg and Gross, Philosophy of Law, p.61. 56. Hart, "Review", p.1293. 57. Hart, "Separation of Law and Morals"; reprinted in Feinberg and Gross, Philosophy of Law, p.67. 58. See note 30, "The Morality of Law", above for references to vigilantism. 59. Benson, The Enterprise of Law, p.322; see also Fuller, The Principles of Social Order, p.195. III CUSTOMARY LAW AND THE LAW MERCHANT: ELABORATING FULLER'S PROCEDURAL NATURAL LAW THEORY 1. 
Adam Ferguson, An Essay on the History of Civil Society, (Edinburgh: Edinburgh University Press, 1966): p.122; cited by Norman P. Barry in his bibliographical essay, "The Tradition of Spontaneous Order", Literature of Liberty: A Review of Contemporary Liberal Thought, Vol. V, No.2, Summer 1982, pp.7-58: p. 24. 2. For, "[l]anguage, religion, law, even the state itself, and to mention a few economic social phenomena, the phenomena of markets, of competition, of money, and numerous other social structures are already met with in epochs of history where we cannot properly speak of purposeful activity of the community as such directed at establishing them. Nor can we speak of such activity on the part of the rulers." (Carl Menger, Investigations into the Method of the Social Sciences with Special Reference to Economics, with a new introduction by Lawrence H. White, ed. Louis Schneider, trans. Francis J. Nock, (New York and London: New York University Press, 1985; [1883]): p.146.) (A) The Medieval Law Merchant 3. See Bruce L. Benson's "The Spontaneous Evolution of Commercial Law" and his The Enterprise of Law for a historical and theoretical discussion of the Law Merchant. 4. Lon L. Fuller, The Principles of Social Order, p.213; cited by Benson, ibid, p.643. (B) Customary Law 5. Hayek, Law, Legislation and Liberty, Vol.1, pp.96-97. 6. Ibid, Vol.1, p.97. 7. Benson, "The Spontaneous Evolution of Commercial Law", p. 645. Fuller, The Morality of Law, p.23. 8. Fuller, The Morality of Law, p. 23. 9. Ibid, p.23. (C) The Evolution of Customary Law 10. See Benson, "The Spontaneous Evolution of Commercial Law", p.650. 11. Ibid, p.648. 12. Fuller, The Morality of Law, p.233. 13. For further discussion see Hayek, Law, Legislation and Liberty; Leoni, Freedom and the Law; Palmer and Liggio, "Freedom and the Law: A Comment on Professor Aranson's Article"; and especially Benson, The Enterprise of Law, in this respect. 
As Leoni, in Freedom and the Law, remarks: No free-trade system can actually work if it is not rooted in a legal and political system that helps citizens to counteract interference with their business on the part of other people, including the authorities. But a characteristic feature of free-trade systems seems also to be that they are compatible, and probably compatible only, with such legal and political systems as have little or no recourse to legislation, at least as far as private life and business are concerned. On the other hand, socialist systems cannot continue to exist without the help of legislation. (p.103.) 14. Benson, "The Spontaneous Evolution of Commercial Law", p. 650. 15. Ibid, p.650. 16. Fuller, The Morality of Law, pp.44-45. (D) Consent - The Foundation of Customary Law 17. Benson, "The Spontaneous Evolution of Commercial Law", p. 657. 18. Benson notes that a similar situation prompted the emergence of the "rent-a-judge" system in California: An 1872 California statute states that individuals in a dispute have the right to have a full court hearing before any referee they might choose. In 1980 there was a 70,000 case public court backlog in California with a median pretrial delay of 50 and one-half months. Thus, it is not too surprising that two lawyers who wanted a complex business case settled quickly 'rediscovered' the statute; they found a retired judge with expertise in the area of the dispute, paid him at attorney's fee rates and saved their clients a tremendous amount of time and expense. (Benson, ibid, p.657.) See also in this respect, Gary Pruitt, "California's Rent-a-Judge Justice", Journal of Contemporary Studies, Spring 1982, pp. 49-57; cited and discussed by Benson, ibid, pp. 657-658. 19. William C. Wooldridge, Uncle Sam, the Monopoly Man, (New Rochelle, New York: Arlington House, 1970); cited by Benson, "The Spontaneous Evolution of Commercial Law", p. 649. (E) Customary Law as Procedural Natural Law 20. Benson, ibid, p.649. 21. 
Harold J. Berman, Law and Revolution: The Formation of the Western Legal Tradition, (Cambridge, Massachusetts: Harvard University Press, 1983): p.343; cited by Benson, ibid, p.649. 22. Fuller, The Morality of Law, p.23. 23. Benson, "The Spontaneous Evolution of Commercial Law", p. 649. 24. Benson, The Enterprise of Law, p.21; from Benson, "Enforcement of Private Property Rights in Primitive Societies: Law Without Government", Journal of Libertarian Studies, 9 (Winter, 1989): pp. 1-26; wording from Tom W. Bell, "Polycentric Law", p.4. Two items I did not have to hand when writing this paper, which are especially relevant to Fuller's conception of customary law, were: Lon L. Fuller, Anatomy of the Law, New York, Frederick A. Praeger, 1968, see especially Part II, "The Sources of Law", pp. 43-119; and Lon L. Fuller, "Human Interaction and the Law", in The Principles of Social Order, pp.211-246. IV CONCLUSION 1. Fuller, The Morality of Law, p.186. 2. See Benson, "Comment: The Lost Victim and Other Failures of the Public Law Experiment" and The Enterprise of Law; and Leonard P. Liggio, "The Market for Rules, Privatization, and the Crisis of the Theory of Public Goods", George Mason University Law Review, Vol.11, No.2, (Winter, 1988): pp. 139-150; from a symposium: Constitutional Protections of Economic Activity: How They Promote Individual Freedom. 3. Benson, "Comment: The Lost Victim and Other Failures of the Public Law Experiment", p.399. 4. Gustave de Molinari, The Production of Security, p.277 (J. McCulloch trans. 1977); cited by Benson, ibid, p.419. 5. See Terry L. Anderson and P. J. Hill, "An American Experiment in Anarcho-Capitalism: The Not So Wild, Wild West", Journal of Libertarian Studies, Vol.3, No.1, (1979): pp.9-29. 6. Ibid; Benson, The Enterprise of Law, "Legal Evolution in Primitive Societies", Journal of Institutional and Theoretical Economics (JITE), Vol.144, No.5, (December, 1988): pp. 
772-788, "Enforcement of Private Property Rights in Primitive Societies: Law Without Government", Journal of Libertarian Studies, 9 (Winter, 1989): pp.1-26, "An Evolutionary Contractarian View of Primitive Law: The Institutions and Incentives Arising Under Customary Indian Law", The Review of Austrian Economics, Vol.5, No.1, (1991): pp.65-89; (author's proof copy); and Tom W. Bell, "Polycentric Law". 7. Benson, "The Spontaneous Evolution of Commercial Law"; and Leonard P. Liggio, "The Market for Rules, Privatization, and the Crisis of the Theory of Public Goods". 8. Harold J. Berman, Law and Revolution, p.11; cited by Liggio, ibid, p.147. BIBLIOGRAPHY Anderson, Terry L. and Hill, P. J., "An American Experiment in Anarcho-Capitalism: The Not So Wild, Wild West", Journal of Libertarian Studies, Vol.3, No.1, (1979): pp.9-29. Barry, Norman P., "The Tradition of Spontaneous Order", Literature of Liberty: A Review of Contemporary Liberal Thought, Vol. V, No.2, Summer 1982, pp.7-58. Baumgarth, William, "Ludwig von Mises and the Justification of the Liberal Order" in Laurence S. Moss ed., The Economics of Ludwig von Mises: Towards a Critical Reappraisal, (Kansas City: Sheed and Ward Inc., Institute for Humane Studies, 1976): pp.79-99. Bell, Tom W., "Polycentric Law", Humane Studies Review, Vol.7, No.1, (Winter, 1991/92): pp.1-10. Benson, Bruce L., "Comment: The Lost Victim and Other Failures of the Public Law Experiment", Harvard Journal of Law and Public Policy, Vol.9, No.2, (Spring, 1986): pp. 299-427. _____, "Legal Evolution in Primitive Societies", Journal of Institutional and Theoretical Economics (JITE), Vol.144, No. 5, (December, 1988): pp.772-788. _____, "The Spontaneous Evolution of Commercial Law", Southern Economic Journal 55 (Jan 1989): pp. 644-661. _____, "Enforcement of Private Property Rights in Primitive Societies: Law Without Government", Journal of Libertarian Studies, 9 (Winter, 1989): pp.1-26. 
_____, The Enterprise of Law: Justice Without the State, (San Francisco: Pacific Research Institute, 1990). _____, Customary Law and The Free Market: Law Without the State, (lecture given at the 13th Université d'Été de la Nouvelle Économie, Aix-en-Provence, France, August 27th - September 1st, 1990). _____, "An Evolutionary Contractarian View of Primitive Law: The Institutions and Incentives Arising Under Customary Indian Law", The Review of Austrian Economics, Vol.5, No. 1, (1991): pp. 65-89; (author's proof copy). Berman, Harold J., Law and Revolution: The Formation of the Western Legal Tradition, (Cambridge, Massachusetts: Harvard University Press, 1983). Ebeling, Richard M., "Cooperation In Anonymity", Critical Review, Fall 1987, pp. 50-61. Fuller, Lon L., "The Case of the Speluncean Explorers", Harvard Law Review, Vol.62, (1949): pp.616-645; reprinted in Feinberg and Gross, Philosophy of Law, pp.530-545. _____, "Positivism and Fidelity to Law - A Reply to Professor Hart", 71 Harvard Law Review 630-72 (1958); reprinted in Feinberg and Gross, Philosophy of Law, pp.81-102. _____, The Morality of Law, (New Haven and London: Yale University Press, 1969; [1964]). _____, Legal Fictions, (Stanford, California: Stanford University Press, 1967). _____, The Principles of Social Order: Selected Essays of Lon L. Fuller, edited with an introduction by Kenneth I. Winston, (Durham, North Carolina: Duke University Press, 2nd printing 1982; [1981]). Hart, H. L. A., "Positivism and the Separation of Law and Morals", 71 Harvard Law Review 593-629 (1958); reprinted in Joel Feinberg and Hyman Gross ed., Philosophy of Law, Fourth Edition, (Belmont, California: Wadsworth Publishing Company, 1991; [1975]): pp.63-81. _____, The Concept of Law, (Oxford: Oxford University Press, 1961); excerpted and reprinted in Feinberg and Gross, Philosophy of Law, pp. 48-62. _____, "Review of The Morality of Law", Harvard Law Review, Vol.78, (1965), pp.1281-1296. Hayek, F. 
A., "The Use of Knowledge in Society", (Menlo Park, California: Institute for Humane Studies, 1977; [revised and reprinted from The American Economic Review, Vol.35, No. 4, Sept. 1945]). _____, The Counter-Revolution of Science: Studies in the Abuse of Reason, (Indianapolis: Liberty Press, 1979; [Glencoe, Illinois: The Free Press, 1952]). _____, Law, Legislation and Liberty, Vol.1, "Rules and Order", (London and Henley: Routledge and Kegan Paul, 1973). _____, The Fatal Conceit: The Errors of Socialism, Vol.1 of W. W. Bartley III ed., The Collected Works of F. A. Hayek, (Chicago: University of Chicago Press, 1988). Leoni, Bruno, "The Law as Individual Claim", developed from lectures given at the Freedom School Phrontistery in Colorado Springs, Colorado, December 2-6, 1963; reprinted in Freedom and the Law, expanded 3rd edition, (Indianapolis: Liberty Fund, 1991; [1961]), pp.189-203. _____, Freedom and the Law, expanded 3rd edition, (Indianapolis: Liberty Fund, 1991; [1961]). Leube, Kurt, Justice, Rule of Law and Legal Positivism, (lecture given at the 14th Université d'Été de la Nouvelle Économie, Aix-en-Provence, France, August, 1991). Liggio, Leonard P., "The Market for Rules, Privatization, and the Crisis of the Theory of Public Goods", George Mason University Law Review, Vol.11, No.2, (Winter, 1988): pp.139-150. Lomasky, Loren E., Persons, Rights, and the Moral Community, (New York and Oxford: Oxford University Press, 1987). MacCormick, Neil, H. L. A. Hart, (London: Edward Arnold, 1981). MacLeod-Cullinane, Barry, The Right to Revolution: Toleration, Liberty and the State in the Thought of John Locke and the Early Liberals, (Libertarian Heritage No.11, London: Libertarian Alliance, 1994). Menger, Carl, Investigations into the Method of the Social Sciences with Special Reference to Economics, with a new introduction by Lawrence H. White, ed. Louis Schneider, trans. Francis J. Nock, (New York and London: New York University Press, 1985; [1883]). 
Mises, Ludwig von, The Free and Prosperous Commonwealth, (Princeton: D. van Nostrand, 1962).
_____, "The Clash of Group Interests", reprinted in Richard M. Ebeling ed., Money, Method, and the Market Process: Essays by Ludwig von Mises, (Norwell, Massachusetts: Kluwer Academic Publishers, Praxeology Press of the Ludwig von Mises Institute, 1990): pp. 202-214.
Nock, Albert Jay, "The State", The Freeman, June 13 and June 20, 1923; reprinted in Charles H. Hamilton ed., The State of the Union: Essays in Social Criticism by Albert Jay Nock, (Indianapolis: Liberty Press, 1991): pp. 222-9.
Nolan, Jeremy, Is Law As It Ought To Be? Or, Can We Make Any Sense of Lon L. Fuller's Concept of the Internal Morality of Law?, (unpublished manuscript, circulated to the Political Theory Workshop, University of York, 16/03/93, 7.30pm).
Osterfeld, David, "Anarchism and the Public Goods Issue: Law, Courts, and the Police", Journal of Libertarian Studies, Vol. 9, No. 1, (Winter, 1989): pp. 48-68.
Palmer, Tom G. and Liggio, Leonard P., "Freedom and the Law: A Comment on Professor Aranson's Article", Harvard Journal of Law and Public Policy, Vol. 11, No. 3, pp. 713-725.
Radbruch, Gustav, "Five Minutes of Legal Philosophy", translated by Stanley L. Paulson; reprinted in Feinberg and Gross, Philosophy of Law, pp. 103-104; (published originally in Rhein-Neckar-Zeitung, 12/8/45; republished in the 8th edition of Radbruch's Rechtsphilosophie, ed. Erik Wolf and Hans-Peter Schneider (Stuttgart: K. F. Koehler Verlag, 1973) pp. 327-29).
Raico, Ralph, Classical Liberalism in the Twentieth Century, (Fairfax, Virginia: Institute for Humane Studies, 1986/7).
Rand, Ayn, "The Anti-Industrial Revolution", The New Left: The Anti-Industrial Revolution, (New York: New American Library, Inc., 1975; [1971]): pp. 127-151.
Spencer, Herbert, The Principles of Ethics, (1892-93; reprinted, with an Introduction by Tibor R. Machan, Indianapolis: Liberty Press, 1978): Vol. II.
Summers, Robert S., "Professor Fuller on Morality and Law", 18 Journal of Legal Education 1 (1966); reprinted in More Essays in Legal Philosophy: General Assessments of Legal Philosophies, Selected and Edited by Robert S. Summers, (Oxford: Basil Blackwell, 1971): pp. 101-130.
_____, Lon L. Fuller, (London: Edward Arnold (Publishers) Ltd., 1984).

--------------------------

Harvard Law Review Vol. 62, No. 4, February 1949
© 1949 by The Harvard Law Review Association, Cambridge, Mass., U.S.A.
http://www.nullapoena.de/stud/explorers.html

THE CASE OF THE SPELUNCEAN EXPLORERS
by LON L. FULLER
IN THE SUPREME COURT OF NEWGARTH, 4300

The defendants, having been indicted for the crime of murder, were convicted and sentenced to be hanged by the Court of General Instances of the County of Stowfield. They bring a petition of error before this Court. The facts sufficiently appear in the opinion of the Chief Justice. TRUEPENNY, C. J. The four defendants are members of the Speluncean Society, an organization of amateurs interested in the exploration of caves. Early in May of 4299 they, in the company of Roger Whetmore, then also a member of the Society, penetrated into the interior of a limestone cavern of the type found in the Central Plateau of this Commonwealth. While they were in a position remote from the entrance to the cave, a landslide occurred. Heavy boulders fell in such a manner as to block completely the only known opening to the cave. When the men discovered their predicament they settled themselves near the obstructed entrance to wait until a rescue party should remove the detritus that prevented them from leaving their underground prison. On the failure of Whetmore and the defendants to return to their homes, the Secretary of the Society was notified by their families. It appears that the explorers had left indications at the headquarters of the Society concerning the location of the cave they proposed to visit. A rescue party was promptly dispatched to the spot. 
The task of rescue proved one of overwhelming difficulty. It was necessary to supplement the forces of the original party by repeated increments of men and machines, which had to be conveyed at great expense to the remote and isolated region in which the cave was located. A huge temporary camp of workmen, engineers, geologists, and other experts was established. The work of removing the obstruction was several times frustrated by fresh landslides. In one of these, ten of the workmen engaged in clearing the entrance were killed. The treasury of the Speluncean Society was soon exhausted in the rescue effort, and the sum of eight hundred thousand frelars, raised partly by popular subscription and partly by legislative grant, was expended before the imprisoned men were rescued. Success was finally achieved on the thirty-second day after the men entered the cave. Since it was known that the explorers had carried with them only scant provisions, and since it was also known that there was no animal or vegetable matter within the cave on which they might subsist, anxiety was early felt that they might meet death by starvation before access to them could be obtained. On the twentieth day of their imprisonment it was learned for the first time that they had taken with them into the cave a portable wireless machine capable of both sending and receiving messages. A similar machine was promptly installed in the rescue camp and oral communication established with the unfortunate men within the mountain. They asked to be informed how long a time would be required to release them. The engineers in charge of the project answered that at least ten days would be required even if no new landslides occurred. The explorers then asked if any physicians were present, and were placed in communication with a committee of medical experts. 
The imprisoned men described their condition and the rations they had taken with them, and asked for a medical opinion whether they would be likely to live without food for ten days longer. The chairman of the committee of physicians told them that there was little possibility of this. The wireless machine within the cave then remained silent for eight hours. When communication was re-established the men asked to speak again with the physicians. The chairman of the physicians' committee was placed before the apparatus, and Whetmore, speaking on behalf of himself and the defendants, asked whether they would be able to survive for ten days longer if they consumed the flesh of one of their number. The physicians' chairman reluctantly answered this question in the affirmative. Whetmore asked whether it would be advisable for them to cast lots to determine which of them should be eaten. None of the physicians present was willing to answer the question. Whetmore then asked if there were among the party a judge or other official of the government who would answer this question. None of those attached to the rescue camp was willing to assume the role of advisor in this matter. He then asked if any minister or priest would answer their question, and none was found who would do so. Thereafter no further messages were received from within the cave, and it was assumed (erroneously, it later appeared) that the electric batteries of the explorers' wireless machine had become exhausted. When the imprisoned men were finally released it was learned that on the twenty-third day after their entrance into the cave Whetmore had been killed and eaten by his companions. From the testimony of the defendants, which was accepted by the jury, it appears that it was Whetmore who first proposed that they might find the nutriment without which survival was impossible in the flesh of one of their own number. 
It was also Whetmore who first proposed the use of some method of casting lots, calling the attention of the defendants to a pair of dice he happened to have with him. The defendants were at first reluctant to adopt so desperate a procedure, but after the conversations by wireless related above, they finally agreed on the plan proposed by Whetmore. After much discussion of the mathematical problems involved, agreement was finally reached on a method of determining the issue by the use of the dice. Before the dice were cast, however, Whetmore declared that he withdrew from the arrangement, as he had decided on reflection to wait for another week before embracing an expedient so frightful and odious. The others charged him with a breach of faith and proceeded to cast the dice. When it came Whetmore's turn, the dice were cast for him by one of the defendants, and he was asked to declare any objections he might have to the fairness of the throw. He stated that he had no such objections. The throw went against him, and he was then put to death and eaten by his companions. After the rescue of the defendants, and after they had completed a stay in a hospital where they underwent a course of treatment for malnutrition and shock, they were indicted for the murder of Roger Whetmore. At the trial, after the testimony had been concluded, the foreman of the jury (a lawyer by profession) inquired of the court whether the jury might not find a special verdict, leaving it to the court to say whether on the facts as found the defendants were guilty. After some discussion, both the Prosecutor and counsel for the defendants indicated their acceptance of this procedure, and it was adopted by the court. In a lengthy special verdict the jury found the facts as I have related them above, and found further that if on these facts the defendants were guilty of the crime charged against them, then they found the defendants guilty. 
On the basis of this verdict, the trial judge ruled that the defendants were guilty of murdering Roger Whetmore. The judge then sentenced them to be hanged, the law of our Commonwealth permitting him no discretion with respect to the penalty to be imposed. After the release of the jury, its members joined in a communication to the Chief Executive asking that the sentence be commuted to an imprisonment of six months. The trial judge addressed a similar communication to the Chief Executive. As yet no action with respect to these pleas has been taken, as the Chief Executive is apparently awaiting our disposition of this petition of error. It seems to me that in dealing with this extraordinary case the jury and the trial judge followed a course that was not only fair and wise, but the only course that was open to them under the law. The language of our statute is well known: "Whoever shall willfully take the life of another shall be punished by death." N. C. S. A. (N. S.) § 12-A. This statute permits of no exception applicable to this case, however our sympathies may incline us to make allowance for the tragic situation in which these men found themselves. In a case like this the principle of executive clemency seems admirably suited to mitigate the rigors of the law, and I propose to my colleagues that we follow the example of the jury and the trial judge by joining in the communications they have addressed to the Chief Executive. There is every reason to believe that these requests for clemency will be heeded, coming as they do from those who have studied the case and had an opportunity to become thoroughly acquainted with all its circumstances. It is highly improbable that the Chief Executive would deny these requests unless he were himself to hold hearings at least as extensive as those involved in the trial below, which lasted for three months. 
The holding of such hearings (which would virtually amount to a retrial of the case) would scarcely be compatible with the function of the Executive as it is usually conceived. I think we may therefore assume that some form of clemency will be extended to these defendants. If this is done, then justice will be accomplished without impairing either the letter or spirit of our statutes and without offering any encouragement for the disregard of law. FOSTER, J. I am shocked that the Chief Justice, in an effort to escape the embarrassments of this tragic case, should have adopted, and should have proposed to his colleagues, an expedient at once so sordid and so obvious. I believe something more is on trial in this case than the fate of these unfortunate explorers; that is the law of our Commonwealth. If this Court declares that under our law these men have committed a crime, then our law is itself convicted in the tribunal of common sense, no matter what happens to the individuals involved in this petition of error. For us to assert that the law we uphold and expound compels us to a conclusion we are ashamed of, and from which we can only escape by appealing to a dispensation resting within the personal whim of the Executive, seems to me to amount to an admission that the law of this Commonwealth no longer pretends to incorporate justice. For myself, I do not believe that our law compels the monstrous conclusion that these men are murderers. I believe, on the contrary, that it declares them to be innocent of any crime. I rest this conclusion on two independent grounds, either of which is of itself sufficient to justify the acquittal of these defendants. The first of these grounds rests on a premise that may arouse opposition until it has been examined candidly. 
I take the view that the enacted or positive law of this Commonwealth, including all of its statutes and precedents, is inapplicable to this case, and that the case is governed instead by what ancient writers in Europe and America called "the law of nature." This conclusion rests on the proposition that our positive law is predicated on the possibility of men's coexistence in society. When a situation arises in which the coexistence of men becomes impossible, then a condition that underlies all of our precedents and statutes has ceased to exist. When that condition disappears, then it is my opinion that the force of our positive law disappears with it. We are not accustomed to applying the maxim cessante ratione legis, cessat et ipsa lex to the whole of our enacted law, but I believe that this is a case where the maxim should be so applied. The proposition that all positive law is based on the possibility of men's coexistence has a strange sound, not because the truth it contains is strange, but simply because it is a truth so obvious and pervasive that we seldom have occasion to give words to it. Like the air we breathe, it so pervades our environment that we forget that it exists until we are suddenly deprived of it. Whatever particular objects may be sought by the various branches of our law, it is apparent on reflection that all of them are directed toward facilitating and improving men's coexistence and regulating with fairness and equity the relations of their life in common. When the assumption that men may live together loses its truth, as it obviously did in this extraordinary situation where life only became possible by the taking of life, then the basic premises underlying our whole legal order have lost their meaning and force. Had the tragic events of this case taken place a mile beyond the territorial limits of our Commonwealth, no one would pretend that our law was applicable to them. We recognize that jurisdiction rests on a territorial basis. 
The grounds of this principle are by no means obvious and are seldom examined. I take it that this principle is supported by an assumption that it is feasible to impose a single legal order upon a group of men only if they live together within the confines of a given area of the earth's surface. The premise that men shall coexist in a group underlies, then, the territorial principle, as it does all of law. Now I contend that a case may be removed morally from the force of a legal order, as well as geographically. If we look to the purposes of law and government, and to the premises underlying our positive law, these men when they made their fateful decision were as remote from our legal order as if they had been a thousand miles beyond our boundaries. Even in a physical sense, their underground prison was separated from our courts and writ-servers by a solid curtain of rock that could be removed only after the most extraordinary expenditures of time and effort. I conclude, therefore, that at the time Roger Whetmore's life was ended by these defendants, they were, to use the quaint language of nineteenth-century writers, not in a "state of civil society" but in a "state of nature." This has the consequence that the law applicable to them is not the enacted and established law of this Commonwealth, but the law derived from those principles that were appropriate to their condition. I have no hesitancy in saying that under those principles they were guiltless of any crime. What these men did was done in pursuance of an agreement accepted by all of them and first proposed by Whetmore himself. Since it was apparent that their extraordinary predicament made inapplicable the usual principles that regulate men's relations with one another, it was necessary for them to draw, as it were, a new charter of government appropriate to the situation in which they found themselves. 
It has from antiquity been recognized that the most basic principle of law or government is to be found in the notion of contract or agreement. Ancient thinkers, especially during the period from 1600 to 1900, used to base government itself on a supposed original social compact. Skeptics pointed out that this theory contradicted the known facts of history, and that there was no scientific evidence to support the notion that any government was ever founded in the manner supposed by the theory. Moralists replied that, if the compact was a fiction from a historical point of view, the notion of compact or agreement furnished the only ethical justification on which the powers of government, which include that of taking life, could be rested. The powers of government can only be justified morally on the ground that these are powers that reasonable men would agree upon and accept if they were faced with the necessity of constructing anew some order to make their life in common possible. Fortunately, our Commonwealth is not bothered by the perplexities that beset the ancients. We know as a matter of historical truth that our government was founded upon a contract or free accord of men. The archeological proof is conclusive that in the first period following the Great Spiral the survivors of that holocaust voluntarily came together and drew up a charter of government. Sophistical writers have raised questions as to the power of those remote contractors to bind future generations, but the fact remains that our government traces itself back in an unbroken line to that original charter. If, therefore, our hangmen have the power to end men's lives, if our sheriffs have the power to put delinquent tenants in the street, if our police have the power to incarcerate the inebriated reveler, these powers find their moral justification in that original compact of our forefathers. 
If we can find no higher source for our legal order, what higher source should we expect these starving unfortunates to find for the order they adopted for themselves? I believe that the line of argument I have just expounded permits of no rational answer. I realize that it will probably be received with a certain discomfort by many who read this opinion, who will be inclined to suspect that some hidden sophistry must underlie a demonstration that leads to so many unfamiliar conclusions. The source of this discomfort is, however, easy to identify. The usual conditions of human existence incline us to think of human life as an absolute value, not to be sacrificed under any circumstances. There is much that is fictitious about this conception even when it is applied to the ordinary relations of society. We have an illustration of this truth in the very case before us. Ten workmen were killed in the process of removing the rocks from the opening to the cave. Did not the engineers and government officials who directed the rescue effort know that the operations they were undertaking were dangerous and involved a serious risk to the lives of the workmen executing them? If it was proper that these ten lives should be sacrificed to save the lives of five imprisoned explorers, why then are we told it was wrong for these explorers to carry out an arrangement which would save four lives at the cost of one? Every highway, every tunnel, every building we project involves a risk to human life. Taking these projects in the aggregate, we can calculate with some precision how many deaths the construction of them will require; statisticians can tell you the average cost in human lives of a thousand miles of a four-lane concrete highway. Yet we deliberately and knowingly incur and pay this cost on the assumption that the values obtained for those who survive outweigh the loss. 
If these things can be said of a society functioning above ground in a normal and ordinary manner, what shall we say of the supposed absolute value of a human life in the desperate situation in which these defendants and their companion Whetmore found themselves? This concludes the exposition of the first ground of my decision. My second ground proceeds by rejecting hypothetically all the premises on which I have so far proceeded. I concede for purposes of argument that I am wrong in saying that the situation of these men removed them from the effect of our positive law, and I assume that the Consolidated Statutes have the power to penetrate five hundred feet of rock and to impose themselves upon these starving men huddled in their underground prison. Now it is, of course, perfectly clear that these men did an act that violates the literal wording of the statute which declares that he who "shall willfully take the life of another" is a murderer. But one of the most ancient bits of legal wisdom is the saying that a man may break the letter of the law without breaking the law itself. Every proposition of positive law, whether contained in a statute or a judicial precedent, is to be interpreted reasonably, in the light of its evident purpose. This is a truth so elementary that it is hardly necessary to expatiate on it. Illustrations of its application are numberless and are to be found in every branch of the law. In Commonwealth v. Staymore the defendant was convicted under a statute making it a crime to leave one's car parked in certain areas for a period longer than two hours. The defendant had attempted to remove his car, but was prevented from doing so because the streets were obstructed by a political demonstration in which he took no part and which he had no reason to anticipate. His conviction was set aside by this Court, although his case fell squarely within the wording of the statute. Again, in Fehler v. 
Neegas there was before this Court for construction a statute in which the word "not" had plainly been transposed from its intended position in the final and most crucial section of the act. This transposition was contained in all the successive drafts of the act, where it was apparently overlooked by the draftsmen and sponsors of the legislation. No one was able to prove how the error came about, yet it was apparent that, taking account of the contents of the statute as a whole, an error had been made, since a literal reading of the final clause rendered it inconsistent with everything that had gone before and with the object of the enactment as stated in its preamble. This Court refused to accept a literal interpretation of the statute, and in effect rectified its language by reading the word "not" into the place where it was evidently intended to go. The statute before us for interpretation has never been applied literally. Centuries ago it was established that a killing in self-defense is excused. There is nothing in the wording of the statute that suggests this exception. Various attempts have been made to reconcile the legal treatment of self-defense with the words of the statute, but in my opinion these are all merely ingenious sophistries. The truth is that the exception in favor of self-defense cannot be reconciled with the words of the statute, but only with its purpose. The true reconciliation of the excuse of self-defense with the statute making it a crime to kill another is to be found in the following line of reasoning. One of the principal objects underlying any criminal legislation is that of deterring men from crime. Now it is apparent that if it were declared to be the law that a killing in self-defense is murder such a rule could not operate in a deterrent manner. A man whose life is threatened will repel his aggressor, whatever the law may say. 
Looking therefore to the broad purposes of criminal legislation, we may safely declare that this statute was not intended to apply to cases of self-defense. When the rationale of the excuse of self-defense is thus explained, it becomes apparent that precisely the same reasoning is applicable to the case at bar. If in the future any group of men ever find themselves in the tragic predicament of these defendants, we may be sure that their decision whether to live or die will not be controlled by the contents of our criminal code. Accordingly, if we read this statute intelligently it is apparent that it does not apply to this case. The withdrawal of this situation from the effect of the statute is justified by precisely the same considerations that were applied by our predecessors in office centuries ago to the case of self-defense. There are those who raise the cry of judicial usurpation whenever a court, after analyzing the purpose of a statute, gives to its words a meaning that is not at once apparent to the casual reader who has not studied the statute closely or examined the objectives it seeks to attain. Let me say emphatically that I accept without reservation the proposition that this Court is bound by the statutes of our Commonwealth and that it exercises its powers in subservience to the duly expressed will of the Chamber of Representatives. The line of reasoning I have applied above raises no question of fidelity to enacted law, though it may possibly raise a question of the distinction between intelligent and unintelligent fidelity. No superior wants a servant who lacks the capacity to read between the lines. The stupidest housemaid knows that when she is told "to peel the soup and skim the potatoes" her mistress does not mean what she says. She also knows that when her master tells her to "drop everything and come running" he has overlooked the possibility that she is at the moment in the act of rescuing the baby from the rain barrel. 
Surely we have a right to expect the same modicum of intelligence from the judiciary. The correction of obvious legislative errors or oversights is not to supplant the legislative will, but to make that will effective. I therefore conclude that on any aspect under which this case may be viewed these defendants are innocent of the crime of murdering Roger Whetmore, and that the conviction should be set aside. TATTING, J. In the discharge of my duties as a justice of this Court, I am usually able to dissociate the emotional and intellectual sides of my reactions, and to decide the case before me entirely on the basis of the latter. In passing on this tragic case I find that my usual resources fail me. On the emotional side I find myself torn between sympathy for these men and a feeling of abhorrence and disgust at the monstrous act they committed. I had hoped that I would be able to put these contradictory emotions to one side as irrelevant, and to decide the case on the basis of a convincing and logical demonstration of the result demanded by our law. Unfortunately, this deliverance has not been vouchsafed me. As I analyze the opinion just rendered by my brother Foster, I find that it is shot through with contradictions and fallacies. Let us begin with his first proposition: these men were not subject to our law because they were not in a "state of civil society" but in a "state of nature." I am not clear why this is so, whether it is because of the thickness of the rock that imprisoned them, or because they were hungry, or because they had set up a "new charter of government" by which the usual rules of law were to be supplanted by a throw of the dice. Other difficulties intrude themselves. If these men passed from the jurisdiction of our law to that of "the law of nature," at what moment did this occur? 
Was it when the entrance to the cave was blocked, or when the threat of starvation reached a certain undefined degree of intensity, or when the agreement for the throwing of the dice was made? These uncertainties in the doctrine proposed by my brother are capable of producing real difficulties. Suppose, for example, one of these men had had his twenty-first birthday while he was imprisoned within the mountain. On what date would we have to consider that he had attained his majority - when he reached the age of twenty-one, at which time he was, by hypothesis, removed from the effects of our law, or only when he was released from the cave and became again subject to what my brother calls our "positive law"? These difficulties may seem fanciful, yet they only serve to reveal the fanciful nature of the doctrine that is capable of giving rise to them. But it is not necessary to explore these niceties further to demonstrate the absurdity of my brother's position. Mr. Justice Foster and I are the appointed judges of a court of the Commonwealth of Newgarth, sworn and empowered to administer the laws of that Commonwealth. By what authority do we resolve ourselves into a Court of Nature? If these men were indeed under the law of nature, whence comes our authority to expound and apply that law? Certainly we are not in a state of nature. Let us look at the contents of this code of nature that my brother proposes we adopt as our own and apply to this case. What a topsy-turvy and odious code it is! It is a code in which the law of contracts is more fundamental than the law of murder. It is a code under which a man may make a valid agreement empowering his fellows to eat his own body. 
Under the provisions of this code, furthermore, such an agreement once made is irrevocable, and if one of the parties attempts to withdraw, the others may take the law into their own hands and enforce the contract by violence - for though my brother passes over in convenient silence the effect of Whetmore's withdrawal, this is the necessary implication of his argument. The principles my brother expounds contain other implications that cannot be tolerated. He argues that when the defendants set upon Whetmore and killed him (we know not how, perhaps by pounding him with stones) they were only exercising the rights conferred upon them by their bargain. Suppose, however, that Whetmore had had concealed upon his person a revolver, and that when he saw the defendants about to slaughter him he had shot them to death in order to save his own life. My brother's reasoning applied to these facts would make Whetmore out to be a murderer, since the excuse of self-defense would have to be denied to him. If his assailants were acting rightfully in seeking to bring about his death, then of course he could no more plead the excuse that he was defending his own life than could a condemned prisoner who struck down the executioner lawfully attempting to place the noose about his neck. All of these considerations make it impossible for me to accept the first part of my brother's argument. I can neither accept his notion that these men were under a code of nature which this Court was bound to apply to them, nor can I accept the odious and perverted rules that he would read into that code. I come now to the second part of my brother's opinion, in which he seeks to show that the defendants did not violate the provisions of N. C. S. A. (N. S.) § 12-A. Here the way, instead of being clear, becomes for me misty and ambiguous, though my brother seems unaware of the difficulties that inhere in his demonstrations. 
The gist of my brother's argument may be stated in the following terms: No statute, whatever its language, should be applied in a way that contradicts its purpose. One of the purposes of any criminal statute is to deter. The application of the statute making it a crime to kill another to the peculiar facts of this case would contradict this purpose, for it is impossible to believe that the contents of the criminal code could operate in a deterrent manner on men faced with the alternative of life or death. The reasoning by which this exception is read into the statute is, my brother observes, the same as that which is applied in order to provide the excuse of self-defense. On the face of things this demonstration seems very convincing indeed. My brother's interpretation of the rationale of the excuse of self-defense is in fact supported by a decision of this court, Commonwealth v. Parry, a precedent I happened to encounter in my research on this case. Though Commonwealth v. Parry seems generally to have been overlooked in the texts and subsequent decisions, it supports unambiguously the interpretation my brother has put upon the excuse of self-defense. Now let me outline briefly, however, the perplexities that assail me when I examine my brother's demonstration more closely. It is true that a statute should be applied in the light of its purpose, and that one of the purposes of criminal legislation is recognized to be deterrence. The difficulty is that other purposes are also ascribed to the law of crimes. It has been said that one of its objects is to provide an orderly outlet for the instinctive human demand for retribution. Commonwealth v. Scape. It has also been said that its object is the rehabilitation of the wrongdoer. Commonwealth v. Makeover. Other theories have been propounded. Assuming that we must interpret a statute in the light of its purpose, what are we to do when it has many purposes or when its purposes are disputed? 
A similar difficulty is presented by the fact that although there is authority for my brother's interpretation of the excuse of self-defense, there is other authority which assigns to that excuse a different rationale. Indeed, until I happened on Commonwealth v. Parry I had never heard of the explanation given by my brother. The taught doctrine of our law schools, memorized by generations of law students, runs in the following terms: The statute concerning murder requires a "willful" act. The man who acts to repel an aggressive threat to his own life does not act "willfully," but in response to an impulse deeply ingrained in human nature. I suspect that there is hardly a lawyer in this Commonwealth who is not familiar with this line of reasoning, especially since the point is a great favorite of the bar examiners. Now the familiar explanation for the excuse of self-defense just expounded obviously cannot be applied by analogy to the facts of this case. These men acted not only "willfully" but with great deliberation and after hours of discussing what they should do. Again we encounter a forked path, with one line of reasoning leading us in one direction and another in a direction that is exactly the opposite. This perplexity is in this case compounded, as it were, for we have to set off one explanation, incorporated in a virtually unknown precedent of this Court, against another explanation, which forms a part of the taught legal tradition of our law schools, but which, so far as I know, has never been adopted in any judicial decision. I recognize the relevance of the precedents cited by my brother concerning the displaced "not" and the defendant who parked overtime. But what are we to do with one of the landmarks of our jurisprudence, which again my brother passes over in silence? This is Commonwealth v. Valjean. 
Though the case is somewhat obscurely reported, it appears that the defendant was indicted for the larceny of a loaf of bread, and offered as a defense that he was in a condition approaching starvation. The court refused to accept this defense. If hunger cannot justify the theft of wholesome and natural food, how can it justify the killing and eating of a man? Again, if we look at the thing in terms of deterrence, is it likely that a man will starve to death to avoid a jail sentence for the theft of a loaf of bread? My brother's demonstrations would compel us to overrule Commonwealth v. Valjean, and many other precedents that have been built on that case. Again, I have difficulty in saying that no deterrent effect whatever could be attributed to a decision that these men were guilty of murder. The stigma of the word "murderer" is such that it is quite likely, I believe, that if these men had known that their act was deemed by the law to be murder they would have waited for a few days at least before carrying out their plan. During that time some unexpected relief might have come. I realize that this observation only reduces the distinction to a matter of degree, and does not destroy it altogether. It is certainly true that the element of deterrence would be less in this case than is normally involved in the application of the criminal law. There is still a further difficulty in my brother Foster's proposal to read an exception into the statute to favor this case, though again a difficulty not even intimated in his opinion. What shall be the scope of this exception? Here the men cast lots and the victim was himself originally a party to the agreement. What would we have to decide if Whetmore had refused from the beginning to participate in the plan? Would a majority be permitted to overrule him? 
Or, suppose that no plan were adopted at all and the others simply conspired to bring about Whetmore's death, justifying their act by saying that he was in the weakest condition. Or again, that a plan of selection was followed but one based on a different justification than the one adopted here, as if the others were atheists and insisted that Whetmore should die because he was the only one who believed in an afterlife. These illustrations could be multiplied, but enough have been suggested to reveal what a quagmire of hidden difficulties my brother's reasoning contains. Of course I realize on reflection that I may be concerning myself with a problem that will never arise, since it is unlikely that any group of men will ever again be brought to commit the dread act that was involved here. Yet, on still further reflection, even if we are certain that no similar case will arise again, do not the illustrations I have given show the lack of any coherent and rational principle in the rule my brother proposes? Should not the soundness of a principle be tested by the conclusions it entails, without reference to the accidents of later litigational history? Still, if this is so, why is it that we of this Court so often discuss the question whether we are likely to have later occasion to apply a principle urged for the solution of the case before us? Is this a situation where a line of reasoning not originally proper has become sanctioned by precedent, so that we are permitted to apply it and may even be under an obligation to do so? The more I examine this case and think about it, the more deeply I become involved. My mind becomes entangled in the meshes of the very nets I throw out for my own rescue. I find that almost every consideration that bears on the decision of the case is counterbalanced by an opposing consideration leading in the opposite direction. 
My brother Foster has not furnished to me, nor can I discover for myself, any formula capable of resolving the equivocations that beset me on all sides. I have given this case the best thought of which I am capable. I have scarcely slept since it was argued before us. When I feel myself inclined to accept the view of my brother Foster, I am repelled by a feeling that his arguments are intellectually unsound and approach mere rationalization. On the other hand, when I incline toward upholding the conviction, I am struck by the absurdity of directing that these men be put to death when their lives have been saved at the cost of the lives of ten heroic workmen. It is to me a matter of regret that the Prosecutor saw fit to ask for an indictment for murder. If we had a provision in our statutes making it a crime to eat human flesh, that would have been a more appropriate charge. If no other charge suited to the facts of this case could be brought against the defendants, it would have been wiser, I think, not to have indicted them at all. Unfortunately, however, the men have been indicted and tried, and we have therefore been drawn into this unfortunate affair. Since I have been wholly unable to resolve the doubts that beset me about the law of this case, I am with regret announcing a step that is, I believe, unprecedented in the history of this tribunal. I declare my withdrawal from the decision of this case. KEEN, J. I should like to begin by setting to one side two questions which are not before this Court. The first of these is whether executive clemency should be extended to these defendants if the conviction is affirmed. Under our system of government, that is a question for the Chief Executive, not for us. I therefore disapprove of that passage in the opinion of the Chief Justice in which he in effect gives instructions to the Chief Executive as to what he should do in this case and suggests that some impropriety will attach if these instructions are not heeded. 
This is a confusion of governmental functions - a confusion of which the judiciary should be the last to be guilty. I wish to state that if I were the Chief Executive I would go farther in the direction of clemency than the pleas addressed to him propose. I would pardon these men altogether, since I believe that they have already suffered enough to pay for any offense they may have committed. I want it to be understood that this remark is made in my capacity as a private citizen who by the accident of his office happens to have acquired an intimate acquaintance with the facts of this case. In the discharge of my duties as judge, it is neither my function to address directions to the Chief Executive, nor to take into account what he may or may not do, in reaching my own decision, which must be controlled entirely by the law of this Commonwealth. The second question that I wish to put to one side is that of deciding whether what these men did was "right" or "wrong," "wicked" or "good." That is also a question that is irrelevant to the discharge of my office as a judge sworn to apply, not my conceptions of morality, but the law of the land. In putting this question to one side I think I can also safely dismiss without comment the first and more poetic portion of my brother Foster's opinion. The element of fantasy contained in the arguments developed there has been sufficiently revealed in my brother Tatting's somewhat solemn attempt to take those arguments seriously. The sole question before us for decision is whether these defendants did, within the meaning of N. C. S. A. (N. S.) § 12-A, willfully take the life of Roger Whetmore. The exact language of the statute is as follows: "Whoever shall willfully take the life of another shall be punished by death." Now I should suppose that any candid observer, content to extract from these words their natural meaning, would concede at once that these defendants did "willfully take the life" of Roger Whetmore. 
Whence arise all the difficulties of the case, then, and the necessity for so many pages of discussion about what ought to be so obvious? The difficulties, in whatever tortured form they may present themselves, all trace back to a single source, and that is a failure to distinguish the legal from the moral aspects of this case. To put it bluntly, my brothers do not like the fact that the written law requires the conviction of these defendants. Neither do I, but unlike my brothers I respect the obligations of an office that requires me to put my personal predilections out of my mind when I come to interpret and apply the law of this Commonwealth. Now, of course, my brother Foster does not admit that he is actuated by a personal dislike of the written law. Instead he develops a familiar line of argument according to which the court may disregard the express language of a statute when something not contained in the statute itself, called its "purpose," can be employed to justify the result the court considers proper. Because this is an old issue between myself and my colleague, I should like, before discussing his particular application of the argument to the facts of this case, to say something about the historical background of this issue and its implications for law and government generally. There was a time in this Commonwealth when judges did in fact legislate very freely, and all of us know that during that period some of our statutes were rather thoroughly made over by the judiciary. That was a time when the accepted principles of political science did not designate with any certainty the rank and function of the various arms of the state. We all know the tragic issue of that uncertainty in the brief civil war that arose out of the conflict between the judiciary, on the one hand, and the executive and the legislature, on the other. 
There is no need to recount here the factors that contributed to that unseemly struggle for power, though they included the unrepresentative character of the Chamber, resulting from a division of the country into election districts that no longer accorded with the actual distribution of the population, and the forceful personality and wide popular following of the then Chief Justice. It is enough to observe that those days are behind us, and that in place of the uncertainty that then reigned we now have a clear-cut principle, which is the supremacy of the legislative branch of our government. From that principle flows the obligation of the judiciary to enforce faithfully the written law, and to interpret that law in accordance with its plain meaning without reference to our personal desires or our individual conceptions of justice. I am not concerned with the question whether the principle that forbids the judicial revision of statutes is right or wrong, desirable or undesirable; I observe merely that this principle has become a tacit premise underlying the whole of the legal and governmental order I am sworn to administer. Yet though the principle of the supremacy of the legislature has been accepted in theory for centuries, such is the tenacity of professional tradition and the force of fixed habits of thought that many of the judiciary have still not accommodated themselves to the restricted role which the new order imposes on them. My brother Foster is one of that group; his way of dealing with statutes is exactly that of a judge living in the 3900's. We are all familiar with the process by which the judicial reform of disfavored legislative enactments is accomplished. Anyone who has followed the written opinions of Mr. Justice Foster will have had an opportunity to see it at work in every branch of the law. 
I am personally so familiar with the process that in the event of my brother's incapacity I am sure I could write a satisfactory opinion for him without any prompting whatever, beyond being informed whether he liked the effect of the terms of the statute as applied to the case before him. The process of judicial reform requires three steps. The first of these is to divine some single "purpose" which the statute serves. This is done although not one statute in a hundred has any such single purpose, and although the objectives of nearly every statute are differently interpreted by the different classes of its sponsors. The second step is to discover that a mythical being called "the legislator," in the pursuit of this imagined "purpose," overlooked something or left some gap or imperfection in his work. Then comes the final and most refreshing part of the task, which is, of course, to fill in the blank thus created. Quod erat faciendum. My brother Foster's penchant for finding holes in statutes reminds one of the story told by an ancient author about the man who ate a pair of shoes. Asked how he liked them, he replied that the part he liked best was the holes. That is the way my brother feels about statutes; the more holes they have in them the better he likes them. In short, he doesn't like statutes. One could not wish for a better case to illustrate the specious nature of this gap-filling process than the one before us. My brother thinks he knows exactly what was sought when men made murder a crime, and that was something he calls "deterrence." My brother Tatting has already shown how much is passed over in that interpretation. But I think the trouble goes deeper. I doubt very much whether our statute making murder a crime really has a "purpose" in any ordinary sense of the term. Primarily, such a statute reflects a deeply-felt human conviction that murder is wrong and that something should be done to the man who commits it. 
If we were forced to be more articulate about the matter, we would probably take refuge in the more sophisticated theories of the criminologists, which, of course, were certainly not in the minds of those who drafted our statute. We might also observe that men will do their own work more effectively and live happier lives if they are protected against the threat of violent assault. Bearing in mind that the victims of murders are often unpleasant people, we might add some suggestion that the matter of disposing of undesirables is not a function suited to private enterprise, but should be a state monopoly. All of which reminds me of the attorney who once argued before us that a statute licensing physicians was a good thing because it would lead to lower life insurance rates by lifting the level of general health. There is such a thing as overexplaining the obvious. If we do not know the purpose of § 12-A, how can we possibly say there is a "gap" in it? How can we know what its draftsmen thought about the question of killing men in order to eat them? My brother Tatting has revealed an understandable, though perhaps slightly exaggerated revulsion to cannibalism. How do we know that his remote ancestors did not feel the same revulsion to an even higher degree? Anthropologists say that the dread felt for a forbidden act may be increased by the fact that the conditions of a tribe's life create special temptations toward it, as incest is most severely condemned among those whose village relations make it most likely to occur. Certainly the period following the Great Spiral was one that had implicit in it temptations to anthropophagy. Perhaps it was for that very reason that our ancestors expressed their prohibition in so broad and unqualified a form. All of this is conjecture, of course, but it remains abundantly clear that neither I nor my brother Foster knows what the "purpose" of § 12-A is. 
Considerations similar to those I have just outlined are also applicable to the exception in favor of self-defense, which plays so large a role in the reasoning of my brothers Foster and Tatting. It is of course true that in Commonwealth v. Parry an obiter dictum justified this exception on the assumption that the purpose of criminal legislation is to deter. It may well also be true that generations of law students have been taught that the true explanation of the exception lies in the fact that a man who acts in self-defense does not act "willfully," and that the same students have passed their bar examinations by repeating what their professors told them. These last observations I could dismiss, of course, as irrelevant for the simple reason that professors and bar examiners have not as yet any commission to make our laws for us. But again the real trouble lies deeper. As in dealing with the statute, so in dealing with the exception, the question is not the conjectural purpose of the rule, but its scope. Now the scope of the exception in favor of self-defense as it has been applied by this Court is plain: it applies to cases of resisting an aggressive threat to the party's own life. It is therefore too clear for argument that this case does not fall within the scope of the exception, since it is plain that Whetmore made no threat against the lives of these defendants. The essential shabbiness of my brother Foster's attempt to cloak his remaking of the written law with an air of legitimacy comes tragically to the surface in my brother Tatting's opinion. In that opinion Justice Tatting struggles manfully to combine his colleague's loose moralisms with his own sense of fidelity to the written law. The issue of this struggle could only be that which occurred, a complete default in the discharge of the judicial function. You simply cannot apply a statute as it is written and remake it to meet your own wishes at the same time. 
Now I know that the line of reasoning I have developed in this opinion will not be acceptable to those who look only to the immediate effects of a decision and ignore the long-run implications of an assumption by the judiciary of a power of dispensation. A hard decision is never a popular decision. Judges have been celebrated in literature for their sly prowess in devising some quibble by which a litigant could be deprived of his rights where the public thought it was wrong for him to assert those rights. But I believe that judicial dispensation does more harm in the long run than hard decisions. Hard cases may even have a certain moral value by bringing home to the people their own responsibilities toward the law that is ultimately their creation, and by reminding them that there is no principle of personal grace that can relieve the mistakes of their representatives. Indeed, I will go farther and say that not only are the principles I have been expounding those which are soundest for our present conditions, but that we would have inherited a better legal system from our forefathers if those principles had been observed from the beginning. For example, with respect to the excuse of self-defense, if our courts had stood steadfast on the language of the statute the result would undoubtedly have been a legislative revision of it. Such a revision would have drawn on the assistance of natural philosophers and psychologists, and the resulting regulation of the matter would have had an understandable and rational basis, instead of the hodgepodge of verbalisms and metaphysical distinctions that have emerged from the judicial and professorial treatment. These concluding remarks are, of course, beyond any duties that I have to discharge with relation to this case, but I include them here because I feel deeply that my colleagues are insufficiently aware of the dangers implicit in the conceptions of the judicial office advocated by my brother Foster. 
I conclude that the conviction should be affirmed. HANDY, J. I have listened with amazement to the tortured ratiocinations to which this simple case has given rise. I never cease to wonder at my colleagues' ability to throw an obscuring curtain of legalisms about every issue presented to them for decision. We have heard this afternoon learned disquisitions on the distinction between positive law and the law of nature, the language of the statute and the purpose of the statute, judicial functions and executive functions, judicial legislation and legislative legislation. My only disappointment was that someone did not raise the question of the legal nature of the bargain struck in the cave - whether it was unilateral or bilateral, and whether Whetmore could not be considered as having revoked an offer prior to action taken thereunder. What have all these things to do with the case? The problem before us is what we, as officers of the government, ought to do with these defendants. That is a question of practical wisdom, to be exercised in a context, not of abstract theory, but of human realities. When the case is approached in this light, it becomes, I think, one of the easiest to decide that has ever been argued before this Court. Before stating my own conclusions about the merits of the case, I should like to discuss briefly some of the more fundamental issues involved - issues on which my colleagues and I have been divided ever since I have been on the bench. I have never been able to make my brothers see that government is a human affair, and that men are ruled, not by words on paper or by abstract theories, but by other men. They are ruled well when their rulers understand the feelings and conceptions of the masses. They are ruled badly when that understanding is lacking. Of all branches of the government, the judiciary is the most likely to lose its contact with the common man. The reasons for this are, of course, fairly obvious. 
Where the masses react to a situation in terms of a few salient features, we pick into little pieces every situation presented to us. Lawyers are hired by both sides to analyze and dissect. Judges and attorneys vie with one another to see who can discover the greatest number of difficulties and distinctions in a single set of facts. Each side tries to find cases, real or imagined, that will embarrass the demonstrations of the other side. To escape this embarrassment, still further distinctions are invented and imported into the situation. When a set of facts has been subjected to this kind of treatment for a sufficient time, all the life and juice have gone out of it and we have left a handful of dust. Now I realize that wherever you have rules and abstract principles lawyers are going to be able to make distinctions. To some extent the sort of thing I have been describing is a necessary evil attaching to any formal regulation of human affairs. But I think that the area which really stands in need of such regulation is greatly overestimated. There are, of course, a few fundamental rules of the game that must be accepted if the game is to go on at all. I would include among these the rules relating to the conduct of elections, the appointment of public officials, and the term during which an office is held. Here some restraint on discretion and dispensation, some adherence to form, some scruple for what does and what does not fall within the rule, is, I concede, essential. Perhaps the area of basic principle should be expanded to include certain other rules, such as those designed to preserve the free civilmoign system. But outside of these fields I believe that all government officials, including judges, will do their jobs best if they treat forms and abstract concepts as instruments. 
We should take as our model, I think, the good administrator, who accommodates procedures and principles to the case at hand, selecting from among the available forms those most suited to reach the proper result. The most obvious advantage of this method of government is that it permits us to go about our daily tasks with efficiency and common sense. My adherence to this philosophy has, however, deeper roots. I believe that it is only with the insight this philosophy gives that we can preserve the flexibility essential if we are to keep our actions in reasonable accord with the sentiments of those subject to our rule. More governments have been wrecked, and more human misery caused, by the lack of this accord between ruler and ruled than by any other factor that can be discerned in history. Once drive a sufficient wedge between the mass of people and those who direct their legal, political, and economic life, and our society is ruined. Then neither Foster's law of nature nor Keen's fidelity to written law will avail us anything. Now when these conceptions are applied to the case before us, its decision becomes, as I have said, perfectly easy. In order to demonstrate this I shall have to introduce certain realities that my brothers in their coy decorum have seen fit to pass over in silence, although they are just as acutely aware of them as I am. The first of these is that this case has aroused an enormous public interest, both here and abroad. Almost every newspaper and magazine has carried articles about it; columnists have shared with their readers confidential information as to the next governmental move; hundreds of letters-to-the-editor have been printed. One of the great newspaper chains made a poll of public opinion on the question, "What do you think the Supreme Court should do with the Speluncean explorers?" About ninety per cent expressed a belief that the defendants should be pardoned or let off with a kind of token punishment. 
It is perfectly clear, then, how the public feels about the case. We could have known this without the poll, of course, on the basis of common sense, or even by observing that on this Court there are apparently four-and-a-half men, or ninety per cent, who share the common opinion. This makes it obvious, not only what we should do, but what we must do if we are to preserve between ourselves and public opinion a reasonable and decent accord. Declaring these men innocent need not involve us in any undignified quibble or trick. No principle of statutory construction is required that is not consistent with the past practices of this Court. Certainly no layman would think that in letting these men off we had stretched the statute any more than our ancestors did when they created the excuse of self-defense. If a more detailed demonstration of the method of reconciling our decision with the statute is required, I should be content to rest on the arguments developed in the second and less visionary part of my brother Foster's opinion. Now I know that my brothers will be horrified by my suggestion that this Court should take account of public opinion. They will tell you that public opinion is emotional and capricious, that it is based on half-truths and listens to witnesses who are not subject to cross-examination. They will tell you that the law surrounds the trial of a case like this with elaborate safeguards, designed to insure that the truth will be known and that every rational consideration bearing on the issues of the case has been taken into account. They will warn you that all of these safeguards go for naught if a mass opinion formed outside this framework is allowed to have any influence on our decision. But let us look candidly at some of the realities of the administration of our criminal law. When a man is accused of crime, there are, speaking generally, four ways in which he may escape punishment. 
One of these is a determination by a judge that under the applicable law he has committed no crime. This is, of course, a determination that takes place in a rather formal and abstract atmosphere. But look at the other three ways in which he may escape punishment. These are: (1) a decision by the Prosecutor not to ask for an indictment; (2) an acquittal by the jury; (3) a pardon or commutation of sentence by the executive. Can anyone pretend that these decisions are held within a rigid and formal framework of rules that prevents factual error, excludes emotional and personal factors, and guarantees that all the forms of the law will be observed? In the case of the jury we do, to be sure, attempt to cabin their deliberations within the area of the legally relevant, but there is no need to deceive ourselves into believing that this attempt is really successful. In the normal course of events the case now before us would have gone on all of its issues directly to the jury. Had this occurred we can be confident that there would have been an acquittal or at least a division that would have prevented a conviction. If the jury had been instructed that the men's hunger and their agreement were no defense to the charge of murder, their verdict would in all likelihood have ignored this instruction and would have involved a good deal more twisting of the letter of the law than any that is likely to tempt us. Of course the only reason that didn't occur in this case was the fortuitous circumstance that the foreman of the jury happened to be a lawyer. His learning enabled him to devise a form of words that would allow the jury to dodge its usual responsibilities. My brother Tatting expresses annoyance that the Prosecutor did not, in effect, decide the case for him by not asking for an indictment. 
Strict as he is himself in complying with the demands of legal theory, he is quite content to have the fate of these men decided out of court by the Prosecutor on the basis of common sense. The Chief Justice, on the other hand, wants the application of common sense postponed to the very end, though like Tatting, he wants no personal part in it. This brings me to the concluding portion of my remarks, which has to do with executive clemency. Before discussing that topic directly, I want to make a related observation about the poll of public opinion. As I have said, ninety per cent of the people wanted the Supreme Court to let the men off entirely or with a more or less nominal punishment. The ten per cent constituted a very oddly assorted group, with the most curious and divergent opinions. One of our university experts has made a study of this group and has found that its members fall into certain patterns. A substantial portion of them are subscribers to "crank" newspapers of limited circulation that gave their readers a distorted version of the facts of the case. Some thought that "Speluncean" means "cannibal" and that anthropophagy is a tenet of the Society. But the point I want to make, however, is this: although almost every conceivable variety and shade of opinion was represented in this group, there was, so far as I know, not one of them, nor a single member of the majority of ninety per cent, who said, "I think it would be a fine thing to have the courts sentence these men to be hanged, and then to have another branch of the government come along and pardon them." Yet this is a solution that has more or less dominated our discussions and which our Chief Justice proposes as a way by which we can avoid doing an injustice and at the same time preserve respect for law. He can be assured that if he is preserving anybody's morale, it is his own, and not the public's, which knows nothing of his distinctions. 
I mention this matter because I wish to emphasize once more the danger that we may get lost in the patterns of our own thought and forget that these patterns often cast not the slightest shadow on the outside world. I come now to the most crucial fact in this case, a fact known to all of us on this Court, though one that my brothers have seen fit to keep under the cover of their judicial robes. This is the frightening likelihood that if the issue is left to him, the Chief Executive will refuse to pardon these men or commute their sentence. As we all know, our Chief Executive is a man now well advanced in years, of very stiff notions. Public clamor usually operates on him with the reverse of the effect intended. As I have told my brothers, it happens that my wife's niece is an intimate friend of his secretary. I have learned in this indirect, but, I think, wholly reliable way, that he is firmly determined not to commute the sentence if these men are found to have violated the law. No one regrets more than I the necessity for relying in so important a matter on information that could be characterized as gossip. If I had my way this would not happen, for I would adopt the sensible course of sitting down with the Executive, going over the case with him, finding out what his views are, and perhaps working out with him a common program for handling the situation. But of course my brothers would never hear of such a thing. Their scruple about acquiring accurate information directly does not prevent them from being very perturbed about what they have learned indirectly. Their acquaintance with the facts I have just related explains why the Chief Justice, ordinarily a model of decorum, saw fit in his opinion to flap his judicial robes in the face of the Executive and threaten him with excommunication if he failed to commute the sentence. 
It explains, I suspect, my brother Foster's feat of levitation by which a whole library of law books was lifted from the shoulders of these defendants. It explains also why even my legalistic brother Keen emulated Pooh-Bah in the ancient comedy by stepping to the other side of the stage to address a few remarks to the Executive "in my capacity as a private citizen." (I may remark, incidentally, that the advice of Private Citizen Keen will appear in the reports of this court printed at taxpayers' expense.) I must confess that as I grow older I become more and more perplexed at men's refusal to apply their common sense to problems of law and government, and this truly tragic case has deepened my sense of discouragement and dismay. I only wish that I could convince my brothers of the wisdom of the principles I have applied to the judicial office since I first assumed it. As a matter of fact, by a kind of sad rounding of the circle, I encountered issues like those involved here in the very first case I tried as Judge of the Court of General Instances in Fanleigh County. A religious sect had unfrocked a minister who, they said, had gone over to the views and practices of a rival sect. The minister circulated a handbill making charges against the authorities who had expelled him. Certain lay members of the church announced a public meeting at which they proposed to explain the position of the church. The minister attended this meeting. Some said he slipped in unobserved in a disguise; his own testimony was that he had walked in openly as a member of the public. At any rate, when the speeches began he interrupted with certain questions about the affairs of the church and made some statements in defense of his own views. He was set upon by members of the audience and given a pretty thorough pommeling, receiving among other injuries a broken jaw. 
He brought a suit for damages against the association that sponsored the meeting and against ten named individuals who he alleged were his assailants. When we came to the trial, the case at first seemed very complicated to me. The attorneys raised a host of legal issues. There were nice questions on the admissibility of evidence, and, in connection with the suit against the association, some difficult problems turning on the question whether the minister was a trespasser or a licensee. As a novice on the bench I was eager to apply my law school learning and I began studying these questions closely, reading all the authorities and preparing well-documented rulings. As I studied the case I became more and more involved in its legal intricacies and I began to get into a state approaching that of my brother Tatting in this case. Suddenly, however, it dawned on me that all these perplexing issues really had nothing to do with the case, and I began examining it in the light of common sense. The case at once gained a new perspective, and I saw that the only thing for me to do was to direct a verdict for the defendants for lack of evidence. I was led to this conclusion by the following considerations. The melee in which the plaintiff was injured had been a very confused affair, with some people trying to get to the center of the disturbance, while others were trying to get away from it; some striking at the plaintiff, while others were apparently trying to protect him. It would have taken weeks to find out the truth of the matter. I decided that nobody's broken jaw was worth that much to the Commonwealth. (The minister's injuries, incidentally, had meanwhile healed without disfigurement and without any impairment of normal faculties.) Furthermore, I felt very strongly that the plaintiff had to a large extent brought the thing on himself. He knew how inflamed passions were about the affair, and could easily have found another forum for the expression of his views.
My decision was widely approved by the press and public opinion, neither of which could tolerate the views and practices that the expelled minister was attempting to defend. Now, thirty years later, thanks to an ambitious Prosecutor and a legalistic jury foreman, I am faced with a case that raises issues which are at bottom much like those involved in that case. The world does not seem to change much, except that this time it is not a question of a judgment for five or six hundred frelars, but of the life or death of four men who have already suffered more torment and humiliation than most of us would endure in a thousand years. I conclude that the defendants are innocent of the crime charged, and that the conviction and sentence should be set aside.

TATTING, J.

I have been asked by the Chief Justice whether, after listening to the two opinions just rendered, I desire to reexamine the position previously taken by me. I wish to state that after hearing these opinions I am greatly strengthened in my conviction that I ought not to participate in the decision of this case.

The Supreme Court being evenly divided, the conviction and sentence of the Court of General Instances is affirmed. It is ordered that the execution of the sentence shall occur at 6 a.m., Friday, April 2, 4300, at which time the Public Executioner is directed to proceed with all convenient dispatch to hang each of the defendants by the neck until he is dead.

POSTSCRIPT

Now that the court has spoken its judgment, the reader puzzled by the choice of date may wish to be reminded that the centuries which separate us from the year 4300 are roughly equal to those that have passed since the Age of Pericles. There is probably no need to observe that the Speluncean Case itself is intended neither as a work of satire nor as a prediction in any ordinary sense of the term.
As for the judges who make up Chief Justice Truepenny's court, they are, of course, as mythical as the facts and precedents with which they deal. The reader who refuses to accept this view, and who seeks to trace out contemporary resemblances where none is intended or contemplated, should be warned that he is engaged in a frolic of his own, which may possibly lead him to miss whatever modest truths are contained in the opinions delivered by the Supreme Court of Newgarth. The case was constructed for the sole purpose of bringing into a common focus certain divergent philosophies of law and government. These philosophies presented men with live questions of choice in the days of Plato and Aristotle. Perhaps they will continue to do so when our era has had its say about them. If there is any element of prediction in the case, it does not go beyond a suggestion that the questions involved are among the permanent problems of the human race.

-----------------

http://www.earlham.edu/~peters/writing/cse.htm

The Case of the Speluncean Explorers: Nine New Opinions
Peter Suber, Philosophy Department, Earlham College

About the book | Ordering the book | Preface and Introduction to the book | Errata | Timeline of events in the cave | Other new opinions on the case | Assignment ideas for teachers

Why read this book?

One timely reason is to get beyond sloganeering about "judicial activism" and "activist judges". The book is a succinct and even-handed way to understand what the debate is about. It doesn't tell you what to think, but illustrates the contending positions and lets you think for yourself. It will show you how judges with different moral and political beliefs interpret written law, how they use precedents, how they conceive the proper role of judges, how they conceive the relationship between law and morality, and how they defend their judicial practices against criticism. It anchors all of this in a Supreme Court hearing of a gripping, concrete case on which real people disagree.
(Challenge: Take any view of how judges should interpret law, especially any view that makes it sound easy, and try it out on this case. How well can it respect the facts and law? How well can it answer the objections from judges who take other views? How well does it deliver justice?)

The book has no jargon and assumes no prior knowledge of law or legal philosophy. Another reason to read it: It's just fun.

About the book

The Case of the Speluncean Explorers: Nine New Opinions. Routledge, 1998. Reprinted, 2002.

The book uses a famous fictitious legal case to illustrate nine contemporary philosophies of law. It presupposes no knowledge of law or philosophy of law, and should be a painless, even enjoyable introduction to legal philosophy. The famous fictitious legal case was created by Lon Fuller in his article, "The Case of the Speluncean Explorers," Harvard Law Review, vol. 62, no. 4 (1949) pp. 616-645.

The case tells the story of a group of spelunkers (cave-explorers) in the Commonwealth of Newgarth, trapped in a cave by a landslide. As they approach the point of starvation, they make radio contact with the rescue team. Engineers on the team estimate that the rescue will take another 10 days. The men describe their physical condition to physicians at the rescue camp and ask whether they can survive another 10 days without food. The physicians think this very unlikely. Then the spelunkers ask whether they could survive another 10 days if they killed and ate a member of their party. The physicians reluctantly answer that they could. Finally, the men ask whether they ought to hold a lottery to determine whom to kill and eat. No one at the rescue camp is willing to answer this question. The men turn off their radio, and some time later hold a lottery, kill the loser, and eat him. When they are rescued, they are prosecuted for murder, which in Newgarth carries a mandatory death penalty. Are they guilty? Should they be executed?
Fuller wrote five Supreme Court opinions on the case which explore the facts from the perspectives of profoundly different legal principles. The result is a focused and concrete illustration of the range of Anglo-American legal philosophy at mid-century. My nine new opinions attempt to bring this picture up to date with our own more diverse and turbulent jurisprudence half a century later.

My contract with Routledge prevents me from putting the text of the book on the web. However, I have put the Preface and Introduction online, and over time I may put more auxiliary content on this web site than I have now.

--------------------------------------------------------------------------------

Ordering the book

You can buy the book from Amazon.com in paperback or hardback.

--------------------------------------------------------------------------------

Errata in the first edition

Page v, line 2, for "May" read "Mary"
Page 40, line 29, for "with with" read "with"
Page 48, line 19, for "New York" read "New New York"
Page 51, line 11, for "and are exercising" read "exercising"
Page 60, line 19, for "signd the second contract" read "signed the social contract"
Page 60, line 34, for "in any." read "in any case."
Page 66, line 7, for "would know" read "could know"
Page 90, line 6, for "when it is disguised" read "when disguised"
Page 96, bottom line, for "prevents them thinking" read "prevents them from thinking"
Page 97, line 3, for "Commonwealth and Valjean" read "Commonwealth v. Valjean"

All of these have been corrected in the August 2002 reprint. If you think you have found other errata, I would be grateful if you'd drop me a line. Please let me know which edition you're using.

--------------------------------------------------------------------------------

Timeline of the events in the cave

The timeline is more subtle than it might appear on first reading Justice Truepenny's statement of the facts. I've sorted it out here for those who are interested.
But for those who aren't, let me assure you that you may skip this section with impunity. These subtleties are not material to the holding on virtually any theory of the facts or law.

Day 0. The men enter the cave.

Day x. The landslide occurs. We are never told how many days after Day 0 this occurs. However, we can deduce that it is at most 3. See the notes below.

Day x + 20. Radio contact is established (p. 7, line 31).

Day 23. The men hold the lottery and kill Whetmore (p. 8, line 27). Note that this is Day 23, not Day x + 23. Whetmore would have waited another week to hold the lottery (p. 8, line 40). To know just how long the men had gone without food at the time of the lottery and killing, we'd have to know both (1) the value of x and (2) how long it took them after Day 0 to exhaust the provisions they carried in with them. Unfortunately we don't know either of these key facts, but see the notes below.

Day x + 30. Earliest estimated rescue date (p. 8, line 5). On Day x + 20 (p. 7, line 31), the engineers predicted at least a 10 day rescue (p. 8, line 5). The doctors predicted that the men could live at least to this day if they ate one of their companions (p. 8, line 17), and would almost certainly not live to this day without some additional food (p. 8, line 11).

Day 32. The men are rescued (p. 7, line 25). Note that this is Day 32, not Day x + 32.

Notes on the timeline. Radio contact is established on Day x + 20, and Whetmore is killed on Day 23 (not Day x + 23). If x were greater than 3, then the radio contact would have occurred after the killing, which we know is false. Hence x must be less than or equal to 3. If x = 0, the minimum, then the rescue was two days slower than the engineers predicted (12 days rather than 10). If x = 3, the maximum, then the rescue was one day faster than predicted (9 days rather than 10). Either way, the men were rescued 9 days after the killing.
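These day-count deductions are simple enough to check mechanically. The sketch below is purely illustrative (it is not part of Suber's page); the only inputs are the day numbers quoted from Fuller's text above.

```python
# Check the timeline deductions. Day numbers as quoted from Fuller's
# text: radio contact on Day x + 20, killing on Day 23, rescue on
# Day 32, earliest predicted rescue on Day x + 30.

KILLING_DAY = 23
RESCUE_DAY = 32
CONTACT_OFFSET = 20    # radio contact on Day x + 20
PREDICTED_OFFSET = 30  # earliest predicted rescue on Day x + 30

# Radio contact precedes the killing, so x + 20 <= 23, i.e. x <= 3.
feasible_x = [x for x in range(10) if x + CONTACT_OFFSET <= KILLING_DAY]
print(feasible_x)  # [0, 1, 2, 3]

# Whatever x is, the rescue came 9 days after the killing.
print(RESCUE_DAY - KILLING_DAY)  # 9

for x in feasible_x:
    # Positive = rescue slower than the engineers predicted,
    # negative = faster. x = 0 gives +2, x = 3 gives -1, as in the notes.
    print(x, RESCUE_DAY - (x + PREDICTED_OFFSET))
```

Running it confirms the notes: only x in 0..3 is consistent with the radio contact preceding the killing, and the rescue is 9 days after the killing on every feasible value of x.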
Another way to put this: Assume that the doctors were right that from the day of radio contact the men could not have lived 10 more days without food, and assume that the men had not killed a companion to eat. If x = 0, then the men would have starved to death before being rescued, but if x = 3, then they would have been rescued before starving to death.

Is there any textual evidence to help us decide whether x is 0, 1, 2, or 3? I don't see anything explicit. But here's a possibility. On the day of the killing, Day 23, Whetmore wanted to wait another week before killing anyone. Why a week exactly, especially if they were already close to death by starvation? If x = 0, then the predicted date of rescue (Day x + 30) would be exactly one week from the date on which Whetmore wanted to wait a week. By contrast, if x = 3, then waiting a week would still put them three days short of the predicted date of rescue. Hence, if Whetmore picked a week thinking of the predicted rescue, then that suggests that x = 0.

--------------------------------------------------------------------------------

Other new opinions on the case

I am not the first to write new opinions on the Case of the Speluncean Explorers. Here are the other published opinions that I know of, in chronological order. If you know of any others, please drop me a line.

Anthony D'Amato wrote three new opinions for the Stanford Law Review, vol. 32 (1980) pp. 467-85.

Naomi Cahn, John Calmore, Mary Coombs, Dwight Greene, Geoffrey Miller, Jeremy Paul, and Laura Stein wrote one opinion each for the George Washington Law Review, vol. 61 (1993) pp. 1754-1811.

Paul Butler, Alan Dershowitz, Frank Easterbrook, Alex Kozinski, Cass Sunstein, and Robin West wrote one opinion each for the Harvard Law Review, vol. 112 (1999) pp. 1834-1923. David Shapiro wrote an introduction to the collection of new opinions.

My book came out in November of 1998.
I wasn't able to see any of these six opinions and, due to the lag time between journal submissions and publication, I assume that these scholars were not able to see my book.

--------------------------------------------------------------------------------

Assignment ideas for teachers

The classic, basic assignment is to decide the case. Come to a verdict and support your verdict with argument. Are the spelunkers guilty of murder or not? There are only two possibilities, guilty and not guilty, so don't get too creative. Or rather, save your creativity for your argument. Inventing a third verdict does not take the constraints of law seriously and would make the case both easier and less interesting.

Variations on the law

Here are some variations on the basic assignment, in roughly increasing order of difficulty and sophistication.

Decide the case morally, not legally. Ignore the law. Did these men do anything wrong? Or decide the case under the law as it ought to be, not under the law as it is. Variation. Once you decide the morality of the case, how can we make the law reflect this morality better? How should we change the law to make clear that this sort of act is (or is not) murder?

Decide the case under the law of the Commonwealth of Newgarth. This includes at least the murder statute and the precedents. Does "the law" include anything else? That's for you to decide. Assume that you are a Newgarth Supreme Court Justice who has taken an oath to uphold and apply the laws of the Commonwealth of Newgarth. If you feel a tension between your personal morality and the laws of Newgarth, then don't put the law aside in order to give effect to your moral convictions unless you think a good judge would do so. If you think your oath of office permits this indulgence of your personal morality, then show that it does.
Focus on supporting your verdict with strong arguments, not on undermining the arguments for the opposite verdict (although that comes in another variation below).

Decide the case under the law, support your verdict with arguments, and show the weaknesses in the arguments for the other verdict. There are many arguments on both sides, so there will be much work to do no matter which verdict you choose. However, if time is tight, simplify as follows. If you vote guilty, then how do you answer the necessity defense? If you vote not guilty, how do you interpret the statute? If you acquit, can you rebut the arguments of each judge who voted to convict? If you convict, can you rebut the arguments of each judge who voted to acquit?

Decide the case under the law of the state where you live. At first consult the murder statute where you live and forget the cases interpreting it. The statute probably contains degrees of murder and homicide, standard defenses (excuses and justifications), and some sentencing discretion for the judge. All these departures from the Newgarth situation give you room to explore issues you didn't have to explore when you used Newgarth law. If you have time and ambition, also use the case law of your jurisdiction. This version of the assignment is difficult insofar as it requires legal research outside Fuller's essay. But it is easier than some of the variations above because it doesn't force you to pick from a narrow range of options and justify your difficult selection.

If you have read other cases on necessity, self-defense, or murder, assume that they are binding precedents in Newgarth. Take the force of those that are most applicable, distinguish those that are least applicable.

On September 27, 2001, President George W. Bush authorized the U.S.
Air Force to shoot down commercial airliners containing passengers if the airliners (1) appear to threaten cities, (2) refuse to change their course, and (3) there is no time to contact the president for specific orders. Two weeks earlier, of course, terrorists hijacked four commercial airliners containing passengers and deliberately crashed three of them into buildings in New York and Washington, killing more than 3,000 people. See this Washington Post story for more detail on President Bush's decision to authorize what Justice Tally might call the preventive killing of innocents. Imagine that the President of Newgarth had made a similar decision and imagine that it had been challenged in court. How would you decide the President's case in light of your decision in the case of the speluncean explorers? Assume that the President of Newgarth won her case and the ruling was available as a precedent. What would it imply for the case of the speluncean explorers?

Variations on the facts

Decide the case under the real facts, as described by Chief Justice Truepenny, or decide it under one of the what-if scenarios below.

What if the men had not used a lottery but killed Whetmore because he was the only one with no family, or the only one who believed in an afterlife? (This variation was suggested by Justice Foster.)

What if Whetmore had not revoked his consent, but was the willing victim based on the result of the lottery?

What if Whetmore had not said that the dice throw made on his behalf was a fair one?

What if Whetmore had tried to defend himself but the survivors had succeeded in killing him anyway?

What if Whetmore had defended himself by killing (say) Smith? As a result, the party ate Smith instead of Whetmore. (For this to work, we must suppose that Whetmore, at least, was prosecuted for murdering Smith.)
What if a Newgarth judge had been on the scene at the rescue camp and had told the spelunkers by radio that the act they were contemplating would be considered murder under Newgarth law? What if the judge on the scene had said that the killing would not be considered murder? What if a priest instead of a judge had given an opinion by radio? (You decide what the priest would have said.)

What if the men had brought much more food into the cave with them, and survived much longer on it, but still had the same radio conversation with the outside world about their prospects for survival?

What if Whetmore was clearly the one who would have died first of natural causes? What if he clearly was not?

What if the men were rescued earlier than the engineers predicted? What if later? (Fuller doesn't give us the facts to decide this question; see the timeline above.)

What if no one had died in the rescue attempt?

What if the rescue party were financed entirely by the Speluncean Club, with no funds from the Commonwealth of Newgarth?

What if Newgarth did not trace the authority of its laws back to a social contract?

What if Newgarth had no history of civil war inspired by judicial usurpation of the legislative functions?

What if the exception for self-defense were not an ancient judicial invention, but an express part of the murder statute as written by the legislature?

What if the men were not trapped in a cave, but in a collapsed building after an earthquake? What if they were not spelunkers pursuing a risky sport, but workers at work?

What if at the time of the killing the batteries in the spelunkers' radio were actually dead? (See Justice Bond's opinion for the possible relevance of this fact.)

Variations in format

Decide the case or explore the issues in one of the following ways.

Write a judicial opinion. Imagine that you have taken an oath to uphold the laws of Newgarth. Don't appeal to your personal morality unless the law and your oath allow you to do so.
In my view, this is the best first assignment on the case, especially if the various judges (students in the course) can compare their views and discuss their differences before and after they write their opinions. Variation. Write such a judicial opinion at the beginning of the semester. Then write another near the end of the semester. After the second one is turned in, discuss how the material in the course made you a better judge, better able to deal with the difficulties of this hard case.

As you read new legal philosophers during the course, pause to ask (among other discussion questions) how he/she would decide the case. What considerations would he/she add to our deliberations that we haven't seen before?

Hold a discussion in which the group takes on the role of a committee of district attorneys (prosecutors) deciding whether to prosecute the spelunkers for murder.

Hold a discussion in which the group takes on the role of the jury. Hence your job is to reach consensus, if you can. You are not bound by the constraints that bind judges and you have the power, if not also the right, of nullification.

Hold a discussion on the seven precedents concocted by Fuller (Staymore, Fehler, the ancient and nameless precedent creating the exception for self-defense, Parry, Scape, Makeover, and Valjean). Taking each precedent in turn, ask whether it supports the defense more than the prosecution, or vice versa. How would the other side distinguish it, if it could? How do all the precedents, taken together, affect your judgment?

Hold a discussion on the written opinions rather than on the facts and law directly. Which Justice deals most sensitively with the statute and precedents? Which Justice best lives up to the oath of office? Which Justice has the most persuasive opinion? Variation. Break a class into small groups and have each group try to reach consensus on each of these three questions. Groups that cannot do so before the hour is up should vote.
Bring the names of the 'winning' Justices to the next class for a plenary discussion.

Pick one of the factual variations from the section above. After discussion, have the group vote on it. Regard that vote as a precedent for the next variation. After you have a handful of new precedents in this way, return to the original facts. How do the new precedents affect your judgment?

Assume that the Supreme Court divided, as Fuller showed. The jury verdict stands and the men are soon to be hanged. Hold a discussion in which the group takes on the role of a panel established to advise the executive on whether to grant clemency. (This variation was used by Anthony D'Amato in his Stanford Law Review article, vol. 32, 1980, pp. 467-485.)

Assume that the Supreme Court divided and the defendants were hanged. Since 90% of the public favored acquittal or clemency, members of the Newgarth Parliament are feeling pressure to amend or replace the murder statute. Hold a discussion in which the group takes on the role of the legislative sub-committee charged with drafting the new statute. There are at least three issues: (1) What sort of intent or mens rea should we require for murder? (2) What defenses (excuses and justification) to the charge of murder should we recognize? and (3) What punishments other than death should we allow?

--------------------------------------------------------------------------------

Peter Suber, Department of Philosophy, Earlham College, Richmond, Indiana, 47374, U.S.A. peters at earlham.edu. Copyright

From checker at panix.com Wed May 18 22:51:30 2005
From: checker at panix.com (Premise Checker)
Date: Wed, 18 May 2005 18:51:30 -0400 (EDT)
Subject: [Paleopsych] Ha'va'd LR: The Case of the Speluncean Explorers: A Fiftieth Anniversary Symposium
Message-ID: 

Ha'va'd LR: The Case of the Speluncean Explorers: A Fiftieth Anniversary Symposium
112 Harv. L. Rev. 1834 (June 1999)

FOREWORD: A CAVE DRAWING FOR THE AGES

NAME: David L. Shapiro * Lon L. Fuller

BIO:
* William Nelson Cromwell Professor of Law, Harvard University.
* A reprint of Lon L. Fuller, The Case of the Speluncean Explorers, 62 Harv. L. Rev. 616 (1949). Copyrighted by The Harvard Law Review Association, 1949.
* Carter Professor of General Jurisprudence, Harvard Law School.
* Circuit Judge, United States Court of Appeals for the Ninth Circuit. Judge Kozinski has no stomach for spelunking; he prefers tamer sports like snowboarding, bungee jumping and paintball.
* Karl N. Llewellyn Distinguished Service Professor, University of Chicago, Law School and Department of Political Science.
* Alan M. Dershowitz, Felix Frankfurter Professor of Law, Harvard University.
* Judge, United States Court of Appeals for the Seventh Circuit; Senior Lecturer, The Law School, the University of Chicago.

SUMMARY: ... And when one of his justices wanted to argue that it can be easy to tell that a speaker's precise language contains a slip of the tongue or an overgeneralized command, the justice pointed to the ability of the "stupidest housemaid" to interpret her employer's words in light of their purpose - thus perhaps revealing some assumptions about the nature of the employer-employee relationship, especially when the employee is a domestic. ... We learn of other important matters as well - that ten members of the rescue party died in the course of their efforts, that Newgarth's Chief Executive was well known for his hard-nosed attitude toward clemency, and that there were significant precedents on the books, addressing such issues as the availability of self-defense as a justification for killing (despite the failure of the legislature to mention it), the willingness of Newgarth's courts to construe statutes to avoid absurd results, and the application of the anti-theft law to one who stole bread (Valjean) because he was starving and could not afford to buy it. ...

TEXT:

I.
Introduction n1

When I was a student in law school, my two favorite law review articles were Henry Hart's famous dialogue n2 and Lon Fuller's presentation of the case of the speluncean explorers. n3 They still are. Why is that so? Perhaps one never quite gets over the joy of discovering a fine work of art or literature when one is young. (I still revere War and Peace, which captivated me as a college sophomore, even though I can't get past the first hundred pages any more.) But I think more is involved here. The wonderful essays by Hart and Fuller each combine a timely consideration of contemporaneous debates with a timeless quality that continues to entice students and scholars to think and write about them some half a century later - and will doubtless engage our successors well into the next millennium. Moreover, each of the essays takes a form that I have always admired and that seems especially suited to the exploration of such basic questions as the nature of our federal union and the nature of law itself: an exchange of views in which competing positions are stated as forcefully as the author knows how. Indeed, an author's ability to make compelling statements of contrasting views is, for me, a powerful signal of the author's worth as a scholar. Small wonder then that when I was invited to contribute a foreword to this revisiting of Fuller's great work, I felt both flattered and [*1835] intimidated. If good wine needs no bush, n4 and the lily is not made more beautiful by being gilded, then what could I hope to add to such an extraordinary achievement? No more, perhaps, than some thoughts on just why Fuller's piece has proved so durable and so provocative, and some effort to connect its insights with those of our contributors to this celebration.

II. Fuller's Achievement

To be sure, Fuller, like Hart and just about everyone else, was only mortal, and he could not wholly escape the context of the times in which he wrote.
Hart was necessarily dealing with the state of constitutional doctrine as it then stood, n5 and casually followed the custom of the times by using the term "wetback" when referring to a Mexican who had illegally entered the country across the Rio Grande. n6 As for Fuller, his hypothetical case was staffed by justices who were all male, and though we have little to go on, may also have been all white and all from relatively affluent backgrounds. n7 After all, judges predominantly [*1836] had those qualities when Fuller wrote. And when one of his justices wanted to argue that it can be easy to tell that a speaker's precise language contains a slip of the tongue or an overgeneralized command, the justice pointed to the ability of the "stupidest housemaid" to interpret her employer's words in light of their purpose n8 - thus perhaps revealing some assumptions about the nature of the employer-employee relationship, especially when the employee is a domestic. Other examples doubtless abound, as they do in the work of every writer. But Fuller was still able to write a piece that will endure - one that posed eternal dilemmas in a remarkably lucid and accessible fashion. Let me count the ways. First, while the hypothetical - about the dilemma facing those who must kill one of their number or all die of starvation - drew loosely on two famous cases, n9 Fuller made his own case more difficult and challenging through a variety of devices. He moved the setting to Newgarth, a jurisdiction of which we know little except for a few matters that leak out of the opinions - for example, that it has precedents, statutes, judges (all male in this case), a chief executive with the power to pardon, and housemaids who may sometimes be stupid. And to confirm the limits of our knowledge, the time of the relevant events is in the fifth millennium.
n10 With respect to the facts themselves, Fuller enriched the knowledge of the defendants and increased the dilemmas of the case in wondrous [*1837] ways. For example, his trapped explorers find out that they cannot be rescued in less than ten days and are assured by experts that their chances of survival for ten days are slim to none unless they eat one of their members. Then, most intriguing of all, they all agree to draw lots (actually, to throw dice), to determine who shall be sacrificed, but before the lottery, Whetmore tries unsuccessfully to pull out of the agreement. Predictably (Fuller was never a candidate for a Booker Prize), Whetmore turns out to be the loser when the dice are cast for him by another, and he is killed and eaten. The others survive and are prosecuted for murder under a statute providing, in its entirety, "Whoever shall willfully take the life of another shall be punished by death." n11 We learn of other important matters as well - that ten members of the rescue party died in the course of their efforts, that Newgarth's Chief Executive was well known for his hard-nosed attitude toward clemency, and that there were significant precedents on the books, addressing such issues as the availability of self-defense as a justification for killing (despite the failure of the legislature to mention it), the willingness of Newgarth's courts to construe statutes to avoid absurd results, and the application of the anti-theft law to one who stole bread (Valjean) because he was starving and could not afford to buy it. In sum, as one who has often faltered in the effort to construct a flawless hypothetical, I think that Fuller's comes about as close to perfection as one can get. Second, Fuller's opinions for his five justices managed to express an extraordinary range of views, and to do so with vigor and power. 
Truepenny, the Chief Justice, plays the role of narrator (a bit like the butler who comes on stage in Act I of a drawing-room comedy to dust the furniture and tell the audience what happened before the curtain went up). But he goes on, briefly but eloquently, to express two important viewpoints: first, that statutory language governs when it is free from ambiguity (as he claims it is in this case); and second, that institutionally, the role of mercy-giver in the criminal context belongs not to the judiciary but to the executive in the exercise of the pardon power. n12 Chief Justice Truepenny is followed by Justice Foster, who strongly disagrees that the conviction must be affirmed, and in doing so puts forward two separate (but perhaps related) n13 arguments: the defendants, when they acted, were in a "state of nature," as much outside the laws of Newgarth as if they were on the moon, and under the principles applicable in such a state (in other words, the principles of "natural law"), they were guiltless; n14 and, in any event, and in a more [*1838] traditional vein, the murder statute must be interpreted in accordance with its purpose - namely, deterrence. n15 That purpose, he concludes, would no more be served by upholding a conviction on the facts at bar than in the case of the recognized justification of self-defense. n16 Justice Foster is then powerfully attacked in the two opinions that follow. Justice Tatting derides Justice Foster's first argument - questioning when one can decide that an actor has crossed over into a state of nature and how the court acquires its authority to apply natural law - and then heaps similar scorn on Justice Foster's "purposive" analysis, in part by insisting that purposes are both difficult to ascertain and, usually, multiple.
n17 The justification of self-defense is readily distinguished, the Valjean precedent is invoked, and then Justice Tatting, baffled by the difficulty of the case and resentful of the decision to prosecute these hapless defendants under a statute providing a mandatory death penalty, decides to withdraw. n18 Justice Keen, a man of similar views but made of sterner stuff, votes to affirm. He insists that his own view of the morality or immorality of the acts charged is irrelevant, and that the court must recognize the supremacy of the legislature by applying the statute as written - not by rewriting it as the justices would like it to read through the dodge of ascertaining some fancied "purpose" or filling some nonexistent "gap." n19 He even suggests that the courts may have erred years earlier in recognizing the justification of self-defense instead of leaving it to the legislature, if it wished, to spell out the precise contours of such a defense. n20 Finally, Justice Handy, the realist-pragmatist, scoffing at the learned debates among the other justices, insists that the justices must follow their own common sense and the popular will - in this case, evidenced by a poll showing that ninety percent of the people want to let the defendants off with little or no punishment - and reverse the convictions. n21 He suggests achieving this result by using whatever legalistic device seems most adaptable ("handy"?) to the occasion - in this instance, Justice Foster's second rationale. n22 Given a chance to reconsider his withdrawal, Justice Tatting sticks to his guns (if that is an apt metaphor for a coward), and partly as a [*1839] result of his refusal to face up to his responsibility as a judge, the convictions are affirmed by an equally divided vote. Thus, Fuller managed in these five opinions to introduce just about every dispute about the nature of law and the role of judges. 
As Justice Handy notes before launching into his realist critique, the prior opinions have explored the clash between natural law and positivism, have examined a range of approaches to statutory interpretation, and have raised fundamental questions about the roles and limits of our legal institutions. A third virtue of Fuller's essay is that if one were unfamiliar with his other works, one would be hard-pressed to identify his own preferred approach, although he is perhaps too cynical about the techniques of the realists (as embodied in Justice Handy's opinion) to be readily identified with that school of thought. In fact, Fuller's other works reveal an affinity for both aspects of Justice Foster's approach. n23 Indeed, Fuller's view of the significance of purposive analysis in interpreting statutes gave rise in later years to the "legal process" theories of Professors Hart and Sacks, n24 and yet his criticisms of that approach in the opinions of Justices Tatting and Keen are so trenchant that future scholars have been able to add little to their arguments. As noted earlier, this ability - to recognize and articulate the weaknesses in one's own theories - constitutes, in my view, a hallmark of true scholarship. Fourth, Fuller not only used his "quintalogue" to explore some of the burning issues of his own day, especially the effort to resolve the challenges that natural law and positivism posed for each other; he also hit upon a technique for articulating these problems that has succeeded in engaging students and teachers ever since - witness this second symposium on the case in the last six years. Moreover, as I try to show, the scholars that have followed him may have cast some further light, but the real illumination still comes from Fuller himself. Finally, Fuller did all this in a remarkably compact form. 
Although it is not unusual for a present-day article on an obscure problem of, say, bankruptcy law to stretch out for a hundred dreary pages, Fuller's five opinions consumed only thirty. And the style was not only lucid and accessible; it was also lively and witty throughout. Erwin Griswold, a man of simple tastes and direct speech, caught the essence of Fuller's gift for avoiding pretense and obscurity when he once introduced Fuller as "the only jurisprude I can understand."

[*1840] III. Later Analyses and Onslaughts

A

The first extensive return to Fuller's cave appeared as two articles in the George Washington Law Review in 1993. The articles were entitled The Case of the Speluncean Explorers: Twentieth-Century Statutory Interpretation in a Nutshell n25 and The Case of the Speluncean Explorers: Contemporary Proceedings. n26 Professor William Eskridge, the architect of the project, contributed an introductory analysis of Fuller's work, and his essay was followed by seven new opinions in the case authored by a range of academics. Most of Eskridge's introductory analysis consisted of a scholarly exegesis of Fuller's piece, which Eskridge described as representing "a moment in the Anglo-American debate over the role of equity and natural law in statutory interpretation," n27 and also as a harbinger of the "Legal Process" approach more fully developed in later years by Professors Hart and Sacks. n28 Eskridge also noted the skill with which Justices Tatting and Keen question both the legitimacy and appropriateness of Justice Foster's use of natural law in his first argument and of "purposivism" in his second. Then, as part of his introduction of the opinions that follow, Eskridge notes that the world of the case "and of its Justices - and Lon Fuller's world - [is one] in which the only actors who matter are male, white, affluent, and heterosexual." n29 Eskridge's introduction is knowledgeable, informative, and generally respectful of Fuller's insights.
My view, which is already apparent and which may not quite jibe with his, is that Fuller's essay is much more than a document of historical importance - that it transcends a moment in legal history, or even several moments, and that (granting that it cannot wholly escape the tacit assumptions and understandings of its day) will continue to fascinate and provoke its readers as long as it remains available. In the seven opinions that followed Eskridge's introduction, perhaps the most notable feature is that not one new justice voted simply to affirm the conviction and sentence; rather, three voted to reverse the conviction; two voted to remand for further factual inquiry relating to - or for jury determination of - guilt or innocence; one voted to remand [*1841] solely on the question of the appropriate sentence; and one voted to reverse the sentence of execution. n30 That no one voted to affirm both the conviction and sentence is, in part, a result of Professor Eskridge's selection of judges. As he acknowledges, three were selected as representatives of feminist theory and two as representatives of critical race theory. n31 Of the remaining two, one advocated a "purposive" analysis reminiscent of that espoused by Justice Foster, n32 while the other appeared to speak for the neo-realist, "critical legal studies" approach presaged by Justice Handy. n33 The three advocates of feminist theory were Naomi Cahn, Mary Coombs, and Laura Stein. Professor Cahn voted to remand for further development of the facts in order effectively to "integrate" the ethics of "care and justice."
n34 Professor Coombs noted that it was too easy for judges to identify themselves with the "privileged" male defendants who found themselves facing death; she then concluded (perhaps in part because of her fear of judicial bias) that, since the record did not establish an effective waiver of trial by jury by the defendants themselves, the case should be retried in order to obtain a jury verdict on the ultimate question of guilt. n35 Professor Stein, after speculating on the possible impact of the decision on the disempowered (specifically including battered women), concluded that much would be lost by executing these defendants and that as one who "willfully would not give [the defendants] guidance beforehand," she was "estopped from judging with hindsight." n36 As for the two representatives of critical race theory, Professor John Calmore concluded in light of his own narrative of the history of racial [*1842] and religious persecution on the planet Newgarth that the entire Newgarthian criminal justice system was suspect "because we on Newgarth live under circumstances of racial oppression," n37 and Professor Dwight Greene, viewing the criminal law as a "legal trap[ ] ... for the less privileged," decided to affirm the conviction because he knew that the "affluent, all-Caucasoid, male panel" would not overturn the Valjean case, and one could not (under a theory of neutral principles?) find murder justifiable in order to survive when theft was punished under similar circumstances. n38 This is not the place to analyze each of these approaches in detail. Suffice it to say that while I think there is much to be learned from the neo-realists, the feminists, and the critical race theorists, I do not count myself among any of these schools, and I am troubled by each of their conclusions in the context of Fuller's case. 
Some have simply refused to accept the case as stated and have used the opportunity to make up a story of their own and then act on the basis of that story. (Fuller might well respond, as I often do in class, that "It's my hypothetical.") Others seem to me to have copped out - Tatting-like - by imagining that more facts might help or by insisting on a trial by jury of the ultimate issues. n39 In sum, the opinions rendered in the 1993 Symposium may represent much more of a relatively brief moment in legal history, and much less of a timeless consideration of a fundamental dilemma than Fuller's original. In any event, Fuller's work emerges, in my view, neither bloodied nor bowed.

B

We come then to the present symposium. This time, the editors have sought to obtain a broader range of views. Their success in this effort is indicated by the closely divided vote. As I count, the vote to affirm the conviction is 3-3, n40 with one of the three who voted for affirmance voting at the same time to invalidate the mandatory death penalty and to remand for further hearing on the issue of the appropriate sentence. Since, unlike Justice Tatting, I have not been assigned a judicial role, I could not break the tie if I wanted to - and I don't. Thus the defendants will have to serve time, but they may not have to [*1843] face the tribulations of death row, and worse. (I assume that there is no higher tribunal to which a further appeal would lie.) A look at the six opinions reveals some surprises and many insights. But once again, I find myself concluding that the foundation for all that has followed was laid by Fuller in his thirty pages, and that while much of the subsequent filigree is entrancing, and sometimes brilliant, both the groundwork and the structure above it can be found in Fuller's pages.
To begin with the justices who voted to affirm, Alex Kozinski (a federal court of appeals judge in real life) takes the "textualist" route blazed by Chief Justice Truepenny and Justice Keen, and also embraces the institutional view espoused by Chief Justice Truepenny in his reference to the possibility of relief in the "political arena." n41 Adding to Fuller's arguments, Kozinski points out that we cannot be sure that the defendants took the wisest course - perhaps they should have waited for one of their number to expire before diving into their questionable repast - and that judges should not engage in lawmaking by disregarding the plain language of a statute. For example, he asks, should the courts permit an indictment and conviction for killing a dog ("canicide") on the theory that the drafters of the statute have left a gap that needs to be filled? n42 This opinion is an eloquent statement of the textualist view, but it raises some concerns. Should the courts regard themselves only as messengers when applying the broad language of a statute to a particular problem as long as the words used are "plain"? Should it matter that the legislature, in the light of centuries of experience, may have come to expect the process of interpretation to comprise elements of both agency (the court as applier of the legislature's mandates) and partnership (the court as fine tuner of the legislature's general, and sometimes overly general, proscriptions and commands)? To take the case at hand, Kozinski manages to sidestep the problem posed for him by the earlier precedent (in Fuller's hypothetical), recognizing a "common law" justification of self-defense. He does so by invoking other statutory provisions, apparently not on the books when Fuller wrote, that "define justifiable homicide" and then by chiding the defendants for not invoking these previously unknown statutes, "doubtless because [*1844] they do not apply."
n43 And his "canicide" example n44 is especially ironic in view of the statutory language proscribing the killing of "another." Another what? n45 Living thing? Homo sapiens? The question may not be answerable without an analysis of legislative purpose - with whatever materials are at hand. The next vote to affirm, cast by Cass Sunstein, may come as a bit of a surprise to some, but the opinion is in fact a masterful application of Sunstein's view, developed elsewhere in his writings, that it is possible to reach a result on the basis of what he has described as an incomplete theory - one that reasons by analogy and does not resolve the most fundamental issues of the nature of law. n46 While recognizing the virtues of a plain meaning approach (and indeed placing a good deal of reliance on that aspect of the case), as well as of a purposive analysis, Sunstein at the same time points out their weaknesses and limitations. n47 For him, the problem is best approached by a comparison of the facts to the prototypical case at which the statute is aimed (the killing of an innocent for selfish purposes) and to its polar opposite (a killing to prevent the destruction of life by a wrongdoer). Following this analogy, Sunstein concludes that this killing should not be held justifiable, especially because Whetmore made a timely effort to pull out of the agreement. This analysis, in my view, is both stunning in its own right and an illuminating example of Sunstein's broader approach to the resolution of legal problems. But I can't resist noting that its elements were, at least to some degree, present in the opinions of Fuller's justices, including the critiques of textualism and purposivism, n48 the distinction of the justifications that had been recognized in the past, n49 and the relevance of Whetmore's effort to get out of the lottery before the dice were thrown. n50 Robin West casts the third vote for affirmance of the conviction. 
After rehearsing (with some new insights) the arguments of Justices [*1845] Tatting and Keen that a statute of this kind has multiple, sometimes conflicting and sometimes unknowable, purposes, Professor West focuses on the distinction between the case at bar and the classic justification of self-defense. n51 She joins with Sunstein in noting that it is one thing to resist aggression and quite another deliberately to take an innocent life in order to save the lives of others. (In the course of this discussion, she analogizes Whetmore's plight to that of a woman who cannot be required to sacrifice her own life to save that of the fetus within her.) n52 Finally, she concludes that the mandatory death penalty cannot withstand constitutional assault because it fails to permit consideration of mitigating circumstances. n53 At least when it comes to punishment, she insists, we need not "bifurcate" justice and mercy. n54 Once again, the seeds of these powerful arguments were planted by Fuller in his critique of purposive analysis, in his distinction of the case of self-defense against an aggressor, and in his suggestion (in Justice Handy's opinion) that a formalistic separation of institutional roles - leaving questions of "mercy" to the executive branch - was a dubious exercise. n55 Of course, Fuller did not have the benefit (if that's what it is) of our Supreme Court's later pronouncements on the validity of the death penalty, n56 or of its decisions dealing with the constitutionality of limitations or prohibitions on abortion. Indeed, it is far from clear that Newgarth has a constitution that bears any resemblance to ours n57 or that our Supreme Court's highly controversial and somewhat meandering interpretations of the Constitution on these issues should serve as a model. And in any event, West's use of the abortion analogy is a puzzling one since it could, in my view, be turned completely around. 
Perhaps instead of analogizing Whetmore to the woman who may not be sacrificed to save the life of the fetus within her, we might more appropriately draw the analogy between the mother and the defendants. After all, just as the greater good may consist in allowing the sentient mother to preserve her health or life by sacrificing an unborn child, so the greater good may be achieved by [*1846] sacrificing one innocent to preserve the lives of many (at least if fair procedures are followed). When we turn to those who would reverse the convictions, Frank Easterbrook's vote and rationale may come as something of a surprise to those who associate him with the "textualist" approach. In concluding that this case does not fall within the broad language of the statute, Easterbrook (a once and continuing academic and a federal appellate judge in real life) emphasizes such matters as historical context, the common law function of the courts in developing defenses to criminal charges, and the role of the courts not just as agents but as partners of the legislature in fitting new statutes into the "normal operation" of the legal system. n58 His thoughtful distinction between the Valjean case and the case of the starving mountaineer is presented as part of a "utilitarian" analysis of the justification of necessity. n59 Following this analysis, he concludes that acting behind a veil of ignorance, five explorers willing to take the risks associated with a dangerous expedition would rationally agree in advance to a cannibalistic arrangement that reduced the risk of death by starvation by eighty percent. (He analogizes such an agreement to the use of a connecting rope by mountain climbers.) n60 Easterbrook's departure from the textualist orthodoxy in this case is not that surprising, given the sophistication of his approach to statutory construction and the particular nature of this statute. 
While much legislation represents a carefully wrought compromise between conflicting forces - a compromise that might be perverted or even wrecked by a refusal to adhere to the text - this criminal statute is surely more sensibly viewed as an over-general prohibition enacted by a legislature that, at least implicitly, contemplated the necessity of judicial fine-tuning. n61 Nor should Easterbrook's view of the utilitarian nature of the "necessity" defense, which is, I believe, a major contribution to our thinking about the problem of the case, come as a surprise to those familiar with his academic work. Once again, though, the approach was heralded in Fuller's piece when Justice Foster (in the "natural law" part of his argument) said:

If it was proper that these ten lives [of members of the rescue party] should be sacrificed to save the lives of five imprisoned explorers, why [*1847] then are we told it was wrong for these explorers to carry out an arrangement which would save four lives at the cost of one? n62

That Fuller regarded this analysis as most relevant to a "natural law" thesis, while Easterbrook sees it as an appropriate tool of statutory interpretation, is revealing. West insists that the prohibition of murder is about "rights," in particular the right of the innocent not to be assaulted or killed, n63 while Easterbrook views the issue of justification in terms of the net cost or benefit to those affected. n64 If Easterbrook is right, don't we have to worry about how far the many can go at the expense of the few? And why is it irrelevant that on the "actual" facts (of the hypothetical), Whetmore tried to pull out before the drawing - a point not mentioned by Easterbrook? In view of Whetmore's decision, wouldn't it have been both fairer and at least as sound from a cost-benefit standpoint to exclude him from the drawing and from the meal that followed? Another vote to reverse is cast by Alan Dershowitz, writing under the pseudonym of Justice De Bunker.
n65 Professor Dershowitz, embellishing Fuller's hypothetical, posits a religious war in the third millennium that culminated, at least for the vast majority of survivors, in the abandonment of both religious precepts and any notions of natural law. n66 Having eliminated one horn of Fuller's dilemma, Dershowitz proceeds - in the first part of his analysis - to decide for the defendants on the basis of his own preference (which he hopes will appeal to others) for allowing all conduct that is not explicitly prohibited by law. Since the murder statute, in his view, does not address the situation at bar, his preference, derived from his libertarian principles, furnishes a basis for his vote to reverse. n67 While Dershowitz is surely entitled to choose the positivist road, it is a bit unfair to Fuller's hypothetical to eliminate the clash with natural law principles by assuming that society rejected the concept of natural law a thousand years earlier. And as to allowing whatever is not prohibited, it is hard to quarrel with that view as a general approach to interpretation - in truth, I find it very attractive - but I'm not sure that it is helpful in this case. To be sure, there were two widely noted cases several thousand years earlier (in other jurisdictions), but both resulted in convictions under a general statute like this one. n68 To the extent those decisions have any relevance, why isn't the conviction of the defendants in those cases an indication that if the [*1848] Newgarth legislature was aware of the problem, it was quite content with the way it had been treated in the past? n69 Indeed, if Dershowitz's reading of the statute is correct, can it be taken to prohibit a murder of a kidnap victim when the ransom is not paid, or for that matter, any killing under facts not specified in the statute itself? 
n70 Perhaps aware of the difficulties of his "interpretation" of the Newgarth murder statute, Dershowitz goes on in what looks like an alternative rationale to make an argument based on "necessity." n71 This argument bears a strong resemblance to Easterbrook's utilitarian calculus, and as I have tried to suggest and others have forcefully argued, such an argument has both virtues and shortcomings. The most important of the shortcomings, in my view, is that it poses agonizing problems for a system of law that seeks in general to protect the innocent from being sacrificed by others for the greater good. In any event, I remain unpersuaded by Dershowitz's concluding effort to tie together the two strands of his argument by noting that "our legislature has not explicitly spoken to this specific problem [of the nature and scope of a "necessity" defense]." n72 All of which brings us to Paul Butler's opinion. Already known for his article in the Yale Law Journal, advocating that black jurors practice nullification when black defendants are charged with non-violent crimes, n73 Professor Butler has decided to do himself at least one better. Seizing on Justice Foster's use of a "stupid[ ] housemaid" n74 to make a point about purposive construction, Butler writes an opinion from the perspective of that housemaid - and writes it in a style that is a curious [*1849] mixture of Butler's version of ebonics, four-letter words, thoroughly Bluebooked legal citations, and rather elegant phrases like "Having determined no moral culpability in the defendants' actions," n75 and "In the last part of the twentieth century, ... Negroes ... were difficult and expensive to rehabilitate and it was pleasurable to punish them."
n76 The thrusts of his opinion are that no crime deserving of moral condemnation has been committed, n77 that there is no sense in killing someone to prove that killing is wrong, n78 and that in any event, there is no true rule of law because "the Supreme Court of Newgarth ain't never gone choose law to favor the poor and colored folks ... at least not to the point that the rich white folks' richness and whiteness is threatened." n79 And in the peroration, the housemaid is beguiled by the irony that after "sacrificing the lives of people of color for centuries," Newgarth has come to the point where "white folks sacrifice white lives for the greater good." n80 Granted (as Fuller recognizes in invoking the image of Jean Valjean), n81 the law may appear even-handed when in fact - as Anatole France so brilliantly put it n82 - it frequently treats the poor more harshly than the rich, and the poor in this country have often been people of color, especially blacks. Granted too, Butler's prose has an attention-grabbing, if disconcerting, shock value. n83 The question still remains whether - by operating on the assumption that Newgarth in the fifth millennium is like twentieth-century society at its worst, and on the more patronizing assumption that a hypothetical "stupid housemaid" is black - Butler has treated Fuller's case with respect, or has simply used it as a platform to sound a brassier version of a note he has played before. Butler's challenge to the whole concept of "legal reasoning" echoes the criticism of the legal realists in the early decades of this century and of their intellectual successors in the critical legal studies movement of more recent times. Fuller himself, who valued the rule of law, may have gone overboard in suggesting, through Justice Handy, that the alternative is to take the popular pulse and act accordingly [*1850]. n84 But I am left wondering whether Butler's critique carries us beyond these earlier contributions.
As Kozinski states forcefully in his opinion, Butler's approach contains its own puzzling inconsistencies and leaves us in the dark about how a better world might apply the rule of law in a case like this. The difficulty may well lie in Butler's insistence on viewing the explorers' case as a parable about race and class. A Jewish colleague of mine - one of the participants in this project who shall remain nameless - told me years ago that when he was young, he would come home from a baseball game and announce proudly to his grandmother that "The Dodgers won!," to which his grandmother would reply, "So, is it good for the Jews?" Probably not, but it wasn't bad for them either. * * * In raising some questions about the new opinions assembled for this project, I do not mean either to deny the many insights in these opinions or even remotely to suggest that I could have done better. I am quite sure that I could not. But - as the reader must be tired of reading by now - I am convinced that one proposition is established by the continuing debate: Lon Fuller has posed a problem as challenging for those who worry about the law and legal institutions as is the origin and ultimate fate of the universe for astronomers. [*1851] THE CASE OF THE SPELUNCEAN EXPLORERS* In the Supreme Court of Newgarth, 4300 The defendants, having been indicted for the crime of murder, were convicted and sentenced to be hanged by the Court of General Instances of the County of Stowfield. They bring a petition of error before this Court. The facts sufficiently appear in the opinion of the Chief Justice. Truepenny, C. J. The four defendants are members of the Speluncean Society, an organization of amateurs interested in the exploration of caves. Early in May of 4299 they, in the company of Roger Whetmore, then also a member of the Society, penetrated into the interior of a limestone cavern of the type found in the Central Plateau of this Commonwealth. 
While they were in a position remote from the entrance to the cave, a landslide occurred. Heavy boulders fell in such a manner as to block completely the only known opening to the cave. When the men discovered their predicament they settled themselves near the obstructed entrance to wait until a rescue party should remove the detritus that prevented them from leaving their underground prison. On the failure of Whetmore and the defendants to return to their homes, the Secretary of the Society was notified by their families. It appears that the explorers had left indications at the headquarters of the Society concerning the location of the cave they proposed to visit. A rescue party was promptly dispatched to the spot. The task of rescue proved one of overwhelming difficulty. It was necessary to supplement the forces of the original party by repeated increments of men and machines, which had to be conveyed at great expense to the remote and isolated region in which the cave was located. A huge temporary camp of workmen, engineers, geologists, and other experts was established. The work of removing the obstruction was several times frustrated by fresh landslides. In one of these, ten of the workmen engaged in clearing the entrance were killed. The treasury of the Speluncean Society was soon exhausted in the rescue effort, and the sum of eight hundred thousand frelars, raised partly by popular subscription and partly by legislative grant, was expended before the imprisoned men were rescued. Success was finally achieved on the thirty-second day after the men entered the cave. Since it was known that the explorers had carried with them only scant provisions, and since it was also known that there was no animal or vegetable matter within the cave on which they might subsist, anxiety was early felt that they might meet death by starvation before [*1852] access to them could be obtained.
On the twentieth day of their imprisonment it was learned for the first time that they had taken with them into the cave a portable wireless machine capable of both sending and receiving messages. A similar machine was promptly installed in the rescue camp and oral communication established with the unfortunate men within the mountain. They asked to be informed how long a time would be required to release them. The engineers in charge of the project answered that at least ten days would be required even if no new landslides occurred. The explorers then asked if any physicians were present, and were placed in communication with a committee of medical experts. The imprisoned men described their condition and the rations they had taken with them, and asked for a medical opinion whether they would be likely to live without food for ten days longer. The chairman of the committee of physicians told them that there was little possibility of this. The wireless machine within the cave then remained silent for eight hours. When communication was re-established the men asked to speak again with the physicians. The chairman of the physicians' committee was placed before the apparatus, and Whetmore, speaking on behalf of himself and the defendants, asked whether they would be able to survive for ten days longer if they consumed the flesh of one of their number. The physicians' chairman reluctantly answered this question in the affirmative. Whetmore asked whether it would be advisable for them to cast lots to determine which of them should be eaten. None of the physicians present was willing to answer the question. Whetmore then asked if there were among the party a judge or other official of the government who would answer this question. None of those attached to the rescue camp was willing to assume the role of advisor in this matter. He then asked if any minister or priest would answer their question, and none was found who would do so. 
Thereafter no further messages were received from within the cave, and it was assumed (erroneously, it later appeared) that the electric batteries of the explorers' wireless machine had become exhausted. When the imprisoned men were finally released it was learned that on the twenty-third day after their entrance into the cave Whetmore had been killed and eaten by his companions. From the testimony of the defendants, which was accepted by the jury, it appears that it was Whetmore who first proposed that they might find the nutriment without which survival was impossible in the flesh of one of their own number. It was also Whetmore who first proposed the use of some method of casting lots, calling the attention of the defendants to a pair of dice he happened to have with him. The defendants were at first reluctant to adopt so desperate a procedure, but after the conversations by wireless related above, they finally agreed on the plan proposed by Whetmore. After much discussion of the mathematical problems involved, agreement was finally reached on a method of determining the issue by the use of the dice. [*1853] Before the dice were cast, however, Whetmore declared that he withdrew from the arrangement, as he had decided on reflection to wait for another week before embracing an expedient so frightful and odious. The others charged him with a breach of faith and proceeded to cast the dice. When it came Whetmore's turn, the dice were cast for him by one of the defendants, and he was asked to declare any objections he might have to the fairness of the throw. He stated that he had no such objections. The throw went against him, and he was then put to death and eaten by his companions. After the rescue of the defendants, and after they had completed a stay in a hospital where they underwent a course of treatment for malnutrition and shock, they were indicted for the murder of Roger Whetmore.
At the trial, after the testimony had been concluded, the foreman of the jury (a lawyer by profession) inquired of the court whether the jury might not find a special verdict, leaving it to the court to say whether on the facts as found the defendants were guilty. After some discussion, both the Prosecutor and counsel for the defendants indicated their acceptance of this procedure, and it was adopted by the court. In a lengthy special verdict the jury found the facts as I have related them above, and found further that if on these facts the defendants were guilty of the crime charged against them, then they found the defendants guilty. On the basis of this verdict, the trial judge ruled that the defendants were guilty of murdering Roger Whetmore. The judge then sentenced them to be hanged, the law of our Commonwealth permitting him no discretion with respect to the penalty to be imposed. After the release of the jury, its members joined in a communication to the Chief Executive asking that the sentence be commuted to an imprisonment of six months. The trial judge addressed a similar communication to the Chief Executive. As yet no action with respect to these pleas has been taken, as the Chief Executive is apparently awaiting our disposition of this petition of error. It seems to me that in dealing with this extraordinary case the jury and the trial judge followed a course that was not only fair and wise, but the only course that was open to them under the law. The language of our statute is well known: "Whoever shall willfully take the life of another shall be punished by death." N. C. S. A. (n. s.) 12-A. This statute permits of no exception applicable to this case, however our sympathies may incline us to make allowance for the tragic situation in which these men found themselves. 
In a case like this the principle of executive clemency seems admirably suited to mitigate the rigors of the law, and I propose to my colleagues that we follow the example of the jury and the trial judge by joining in the communications they have addressed to the Chief Executive. There is every reason to believe that these requests for clemency will be heeded, coming as they do from those who have studied the case and had an opportunity to become thoroughly acquainted [*1854] with all its circumstances. It is highly improbable that the Chief Executive would deny these requests unless he were himself to hold hearings at least as extensive as those involved in the trial below, which lasted for three months. The holding of such hearings (which would virtually amount to a retrial of the case) would scarcely be compatible with the function of the Executive as it is usually conceived. I think we may therefore assume that some form of clemency will be extended to these defendants. If this is done, then justice will be accomplished without impairing either the letter or spirit of our statutes and without offering any encouragement for the disregard of law. Foster, J. I am shocked that the Chief Justice, in an effort to escape the embarrassments of this tragic case, should have adopted, and should have proposed to his colleagues, an expedient at once so sordid and so obvious. I believe something more is on trial in this case than the fate of these unfortunate explorers; that is the law of our Commonwealth. If this Court declares that under our law these men have committed a crime, then our law is itself convicted in the tribunal of common sense, no matter what happens to the individuals involved in this petition of error. 
For us to assert that the law we uphold and expound compels us to a conclusion we are ashamed of, and from which we can only escape by appealing to a dispensation resting within the personal whim of the Executive, seems to me to amount to an admission that the law of this Commonwealth no longer pretends to incorporate justice. For myself, I do not believe that our law compels the monstrous conclusion that these men are murderers. I believe, on the contrary, that it declares them to be innocent of any crime. I rest this conclusion on two independent grounds, either of which is of itself sufficient to justify the acquittal of these defendants. The first of these grounds rests on a premise that may arouse opposition until it has been examined candidly. I take the view that the enacted or positive law of this Commonwealth, including all of its statutes and precedents, is inapplicable to this case, and that the case is governed instead by what ancient writers in Europe and America called "the law of nature." This conclusion rests on the proposition that our positive law is predicated on the possibility of men's coexistence in society. When a situation arises in which the coexistence of men becomes impossible, then a condition that underlies all of our precedents and statutes has ceased to exist. When that condition disappears, then it is my opinion that the force of our positive law disappears with it. We are not accustomed to applying the maxim cessante ratione legis, cessat et ipsa lex to the whole of our enacted law, but I believe that this is a case where the maxim should be so applied. [*1855] The proposition that all positive law is based on the possibility of men's coexistence has a strange sound, not because the truth it contains is strange, but simply because it is a truth so obvious and pervasive that we seldom have occasion to give words to it. 
Like the air we breathe, it so pervades our environment that we forget that it exists until we are suddenly deprived of it. Whatever particular objects may be sought by the various branches of our law, it is apparent on reflection that all of them are directed toward facilitating and improving men's coexistence and regulating with fairness and equity the relations of their life in common. When the assumption that men may live together loses its truth, as it obviously did in this extraordinary situation where life only became possible by the taking of life, then the basic premises underlying our whole legal order have lost their meaning and force. Had the tragic events of this case taken place a mile beyond the territorial limits of our Commonwealth, no one would pretend that our law was applicable to them. We recognize that jurisdiction rests on a territorial basis. The grounds of this principle are by no means obvious and are seldom examined. I take it that this principle is supported by an assumption that it is feasible to impose a single legal order upon a group of men only if they live together within the confines of a given area of the earth's surface. The premise that men shall coexist in a group underlies, then, the territorial principle, as it does all of law. Now I contend that a case may be removed morally from the force of a legal order, as well as geographically. If we look to the purposes of law and government, and to the premises underlying our positive law, these men when they made their fateful decision were as remote from our legal order as if they had been a thousand miles beyond our boundaries. Even in a physical sense, their underground prison was separated from our courts and writ-servers by a solid curtain of rock that could be removed only after the most extraordinary expenditures of time and effort. 
I conclude, therefore, that at the time Roger Whetmore's life was ended by these defendants, they were, to use the quaint language of nineteenth-century writers, not in a "state of civil society" but in a "state of nature." This has the consequence that the law applicable to them is not the enacted and established law of this Commonwealth, but the law derived from those principles that were appropriate to their condition. I have no hesitancy in saying that under those principles they were guiltless of any crime. What these men did was done in pursuance of an agreement accepted by all of them and first proposed by Whetmore himself. Since it was apparent that their extraordinary predicament made inapplicable the usual principles that regulate men's relations with one another, it was necessary for them to draw, as it were, a new charter of government appropriate to the situation in which they found themselves. [*1856] It has from antiquity been recognized that the most basic principle of law or government is to be found in the notion of contract or agreement. Ancient thinkers, especially during the period from 1600 to 1900, used to base government itself on a supposed original social compact. Skeptics pointed out that this theory contradicted the known facts of history, and that there was no scientific evidence to support the notion that any government was ever founded in the manner supposed by the theory. Moralists replied that, if the compact was a fiction from a historical point of view, the notion of compact or agreement furnished the only ethical justification on which the powers of government, which include that of taking life, could be rested. The powers of government can only be justified morally on the ground that these are powers that reasonable men would agree upon and accept if they were faced with the necessity of constructing anew some order to make their life in common possible. 
Fortunately, our Commonwealth is not bothered by the perplexities that beset the ancients. We know as a matter of historical truth that our government was founded upon a contract or free accord of men. The archeological proof is conclusive that in the first period following the Great Spiral the survivors of that holocaust voluntarily came together and drew up a charter of government. Sophistical writers have raised questions as to the power of those remote contractors to bind future generations, but the fact remains that our government traces itself back in an unbroken line to that original charter. If, therefore, our hangmen have the power to end men's lives, if our sheriffs have the power to put delinquent tenants in the street, if our police have the power to incarcerate the inebriated reveler, these powers find their moral justification in that original compact of our forefathers. If we can find no higher source for our legal order, what higher source should we expect these starving unfortunates to find for the order they adopted for themselves? I believe that the line of argument I have just expounded permits of no rational answer. I realize that it will probably be received with a certain discomfort by many who read this opinion, who will be inclined to suspect that some hidden sophistry must underlie a demonstration that leads to so many unfamiliar conclusions. The source of this discomfort is, however, easy to identify. The usual conditions of human existence incline us to think of human life as an absolute value, not to be sacrificed under any circumstances. There is much that is fictitious about this conception even when it is applied to the ordinary relations of society. We have an illustration of this truth in the very case before us. Ten workmen were killed in the process of removing the rocks from the opening to the cave. 
Did not the engineers and government officials who directed the rescue effort know that the operations they were undertaking were dangerous and involved a serious risk to the lives of the workmen executing them? If it was proper that [*1857] these ten lives should be sacrificed to save the lives of five imprisoned explorers, why then are we told it was wrong for these explorers to carry out an arrangement which would save four lives at the cost of one? Every highway, every tunnel, every building we project involves a risk to human life. Taking these projects in the aggregate, we can calculate with some precision how many deaths the construction of them will require; statisticians can tell you the average cost in human lives of a thousand miles of a four-lane concrete highway. Yet we deliberately and knowingly incur and pay this cost on the assumption that the values obtained for those who survive outweigh the loss. If these things can be said of a society functioning above ground in a normal and ordinary manner, what shall we say of the supposed absolute value of a human life in the desperate situation in which these defendants and their companion Whetmore found themselves? This concludes the exposition of the first ground of my decision. My second ground proceeds by rejecting hypothetically all the premises on which I have so far proceeded. I concede for purposes of argument that I am wrong in saying that the situation of these men removed them from the effect of our positive law, and I assume that the Consolidated Statutes have the power to penetrate five hundred feet of rock and to impose themselves upon these starving men huddled in their underground prison. Now it is, of course, perfectly clear that these men did an act that violates the literal wording of the statute which declares that he who "shall willfully take the life of another" is a murderer. 
But one of the most ancient bits of legal wisdom is the saying that a man may break the letter of the law without breaking the law itself. Every proposition of positive law, whether contained in a statute or a judicial precedent, is to be interpreted reasonably, in the light of its evident purpose. This is a truth so elementary that it is hardly necessary to expatiate on it. Illustrations of its application are numberless and are to be found in every branch of the law. In Commonwealth v. Staymore the defendant was convicted under a statute making it a crime to leave one's car parked in certain areas for a period longer than two hours. The defendant had attempted to remove his car, but was prevented from doing so because the streets were obstructed by a political demonstration in which he took no part and which he had no reason to anticipate. His conviction was set aside by this Court, although his case fell squarely within the wording of the statute. Again, in Fehler v. Neegas there was before this Court for construction a statute in which the word "not" had plainly been transposed from its intended position in the final and most crucial section of the act. This transposition was contained in all the successive drafts of the act, where it was apparently overlooked by the draftsmen and sponsors of the legislation. No one was able to prove how the error came about, yet it was apparent [*1858] that, taking account of the contents of the statute as a whole, an error had been made, since a literal reading of the final clause rendered it inconsistent with everything that had gone before and with the object of the enactment as stated in its preamble. This Court refused to accept a literal interpretation of the statute, and in effect rectified its language by reading the word "not" into the place where it was evidently intended to go. The statute before us for interpretation has never been applied literally. 
Centuries ago it was established that a killing in self-defense is excused. There is nothing in the wording of the statute that suggests this exception. Various attempts have been made to reconcile the legal treatment of self-defense with the words of the statute, but in my opinion these are all merely ingenious sophistries. The truth is that the exception in favor of self-defense cannot be reconciled with the words of the statute, but only with its purpose. The true reconciliation of the excuse of self-defense with the statute making it a crime to kill another is to be found in the following line of reasoning. One of the principal objects underlying any criminal legislation is that of deterring men from crime. Now it is apparent that if it were declared to be the law that a killing in self-defense is murder such a rule could not operate in a deterrent manner. A man whose life is threatened will repel his aggressor, whatever the law may say. Looking therefore to the broad purposes of criminal legislation, we may safely declare that this statute was not intended to apply to cases of self-defense. When the rationale of the excuse of self-defense is thus explained, it becomes apparent that precisely the same reasoning is applicable to the case at bar. If in the future any group of men ever find themselves in the tragic predicament of these defendants, we may be sure that their decision whether to live or die will not be controlled by the contents of our criminal code. Accordingly, if we read this statute intelligently it is apparent that it does not apply to this case. The withdrawal of this situation from the effect of the statute is justified by precisely the same considerations that were applied by our predecessors in office centuries ago to the case of self-defense. 
There are those who raise the cry of judicial usurpation whenever a court, after analyzing the purpose of a statute, gives to its words a meaning that is not at once apparent to the casual reader who has not studied the statute closely or examined the objectives it seeks to attain. Let me say emphatically that I accept without reservation the proposition that this Court is bound by the statutes of our Commonwealth and that it exercises its powers in subservience to the duly expressed will of the Chamber of Representatives. The line of reasoning I have applied above raises no question of fidelity to enacted law, though it may possibly raise a question of the distinction between intelligent and unintelligent fidelity. No superior wants a servant who lacks the [*1859] capacity to read between the lines. The stupidest housemaid knows that when she is told "to peel the soup and skim the potatoes" her mistress does not mean what she says. She also knows that when her master tells her to "drop everything and come running" he has overlooked the possibility that she is at the moment in the act of rescuing the baby from the rain barrel. Surely we have a right to expect the same modicum of intelligence from the judiciary. The correction of obvious legislative errors or oversights is not to supplant the legislative will, but to make that will effective. I therefore conclude that on any aspect under which this case may be viewed these defendants are innocent of the crime of murdering Roger Whetmore, and that the conviction should be set aside. Tatting, J. In the discharge of my duties as a justice of this Court, I am usually able to dissociate the emotional and intellectual sides of my reactions, and to decide the case before me entirely on the basis of the latter. In passing on this tragic case I find that my usual resources fail me.
On the emotional side I find myself torn between sympathy for these men and a feeling of abhorrence and disgust at the monstrous act they committed. I had hoped that I would be able to put these contradictory emotions to one side as irrelevant, and to decide the case on the basis of a convincing and logical demonstration of the result demanded by our law. Unfortunately, this deliverance has not been vouchsafed me. As I analyze the opinion just rendered by my brother Foster, I find that it is shot through with contradictions and fallacies. Let us begin with his first proposition: these men were not subject to our law because they were not in a "state of civil society" but in a "state of nature." I am not clear why this is so, whether it is because of the thickness of the rock that imprisoned them, or because they were hungry, or because they had set up a "new charter of government" by which the usual rules of law were to be supplanted by a throw of the dice. Other difficulties intrude themselves. If these men passed from the jurisdiction of our law to that of "the law of nature," at what moment did this occur? Was it when the entrance to the cave was blocked, or when the threat of starvation reached a certain undefined degree of intensity, or when the agreement for the throwing of the dice was made? These uncertainties in the doctrine proposed by my brother are capable of producing real difficulties. Suppose, for example, one of these men had had his twenty-first birthday while he was imprisoned within the mountain. On what date would we have to consider that he had attained his majority - when he reached the age of twenty-one, at which time he was, by hypothesis, removed from the effects of our law, or only when he was released from the cave and became again subject to what my brother calls our "positive law"? These difficulties may seem fanciful, yet they only serve to reveal the fanciful nature of the doctrine that is capable of giving rise to them. 
[*1860] But it is not necessary to explore these niceties further to demonstrate the absurdity of my brother's position. Mr. Justice Foster and I are the appointed judges of a court of the Commonwealth of Newgarth, sworn and empowered to administer the laws of that Commonwealth. By what authority do we resolve ourselves into a Court of Nature? If these men were indeed under the law of nature, whence comes our authority to expound and apply that law? Certainly we are not in a state of nature. Let us look at the contents of this code of nature that my brother proposes we adopt as our own and apply to this case. What a topsy-turvy and odious code it is! It is a code in which the law of contracts is more fundamental than the law of murder. It is a code under which a man may make a valid agreement empowering his fellows to eat his own body. Under the provisions of this code, furthermore, such an agreement once made is irrevocable, and if one of the parties attempts to withdraw, the others may take the law into their own hands and enforce the contract by violence - for though my brother passes over in convenient silence the effect of Whetmore's withdrawal, this is the necessary implication of his argument. The principles my brother expounds contain other implications that cannot be tolerated. He argues that when the defendants set upon Whetmore and killed him (we know not how, perhaps by pounding him with stones) they were only exercising the rights conferred upon them by their bargain. Suppose, however, that Whetmore had had concealed upon his person a revolver, and that when he saw the defendants about to slaughter him he had shot them to death in order to save his own life. My brother's reasoning applied to these facts would make Whetmore out to be a murderer, since the excuse of self-defense would have to be denied to him. 
If his assailants were acting rightfully in seeking to bring about his death, then of course he could no more plead the excuse that he was defending his own life than could a condemned prisoner who struck down the executioner lawfully attempting to place the noose about his neck. All of these considerations make it impossible for me to accept the first part of my brother's argument. I can neither accept his notion that these men were under a code of nature which this Court was bound to apply to them, nor can I accept the odious and perverted rules that he would read into that code. I come now to the second part of my brother's opinion, in which he seeks to show that the defendants did not violate the provisions of N. C. S. A. (n. s.) 12-A. Here the way, instead of being clear, becomes for me misty and ambiguous, though my brother seems unaware of the difficulties that inhere in his demonstrations. The gist of my brother's argument may be stated in the following terms: No statute, whatever its language, should be applied in a way that contradicts its purpose. One of the purposes of any criminal [*1861] statute is to deter. The application of the statute making it a crime to kill another to the peculiar facts of this case would contradict this purpose, for it is impossible to believe that the contents of the criminal code could operate in a deterrent manner on men faced with the alternative of life or death. The reasoning by which this exception is read into the statute is, my brother observes, the same as that which is applied in order to provide the excuse of self-defense. On the face of things this demonstration seems very convincing indeed. My brother's interpretation of the rationale of the excuse of self-defense is in fact supported by a decision of this court, Commonwealth v. Parry, a precedent I happened to encounter in my research on this case. Though Commonwealth v.
Parry seems generally to have been overlooked in the texts and subsequent decisions, it supports unambiguously the interpretation my brother has put upon the excuse of self-defense. Now let me outline briefly, however, the perplexities that assail me when I examine my brother's demonstration more closely. It is true that a statute should be applied in the light of its purpose, and that one of the purposes of criminal legislation is recognized to be deterrence. The difficulty is that other purposes are also ascribed to the law of crimes. It has been said that one of its objects is to provide an orderly outlet for the instinctive human demand for retribution. Commonwealth v. Scape. It has also been said that its object is the rehabilitation of the wrongdoer. Commonwealth v. Makeover. Other theories have been propounded. Assuming that we must interpret a statute in the light of its purpose, what are we to do when it has many purposes or when its purposes are disputed? A similar difficulty is presented by the fact that although there is authority for my brother's interpretation of the excuse of self-defense, there is other authority which assigns to that excuse a different rationale. Indeed, until I happened on Commonwealth v. Parry I had never heard of the explanation given by my brother. The taught doctrine of our law schools, memorized by generations of law students, runs in the following terms: The statute concerning murder requires a "willful" act. The man who acts to repel an aggressive threat to his own life does not act "willfully," but in response to an impulse deeply ingrained in human nature. I suspect that there is hardly a lawyer in this Commonwealth who is not familiar with this line of reasoning, especially since the point is a great favorite of the bar examiners. Now the familiar explanation for the excuse of self-defense just expounded obviously cannot be applied by analogy to the facts of this case. 
These men acted not only "willfully" but with great deliberation and after hours of discussing what they should do. Again we encounter a forked path, with one line of reasoning leading us in one direction and another in a direction that is exactly the opposite. This perplexity is in this case compounded, as it were, for we have to set off one explanation, incorporated in a virtually unknown precedent of this Court, against another explanation, which forms a part of the taught legal tradition of our law schools, but which, so far as I know, has never been adopted in any judicial decision. I recognize the relevance of the precedents cited by my brother concerning the displaced "not" and the defendant who parked overtime. But what are we to do with one of the landmarks of our jurisprudence, which again my brother passes over in silence? This is Commonwealth v. Valjean. Though the case is somewhat obscurely reported, it appears that the defendant was indicted for the larceny of a loaf of bread, and offered as a defense that he was in a condition approaching starvation. The court refused to accept this defense. If hunger cannot justify the theft of wholesome and natural food, how can it justify the killing and eating of a man? Again, if we look at the thing in terms of deterrence, is it likely that a man will starve to death to avoid a jail sentence for the theft of a loaf of bread? My brother's demonstrations would compel us to overrule Commonwealth v. Valjean, and many other precedents that have been built on that case. Again, I have difficulty in saying that no deterrent effect whatever could be attributed to a decision that these men were guilty of murder. The stigma of the word "murderer" is such that it is quite likely, I believe, that if these men had known that their act was deemed by the law to be murder they would have waited for a few days at least before carrying out their plan. During that time some unexpected relief might have come. 
I realize that this observation only reduces the distinction to a matter of degree, and does not destroy it altogether. It is certainly true that the element of deterrence would be less in this case than is normally involved in the application of the criminal law. There is still a further difficulty in my brother Foster's proposal to read an exception into the statute to favor this case, though again a difficulty not even intimated in his opinion. What shall be the scope of this exception? Here the men cast lots and the victim was himself originally a party to the agreement. What would we have to decide if Whetmore had refused from the beginning to participate in the plan? Would a majority be permitted to overrule him? Or, suppose that no plan were adopted at all and the others simply conspired to bring about Whetmore's death, justifying their act by saying that he was in the weakest condition. Or again, that a plan of selection was followed but one based on a different justification than the one adopted here, as if the others were atheists and insisted that Whetmore should die because he was the only one who believed in an afterlife. These illustrations could be multiplied, but enough have been suggested to reveal what a quagmire of hidden difficulties my brother's reasoning contains. Of course I realize on reflection that I may be concerning myself with a problem that will never arise, since it is unlikely that any group of men will ever again be brought to commit the dread act that was involved here. Yet, on still further reflection, even if we are certain that no similar case will arise again, do not the illustrations I have given show the lack of any coherent and rational principle in the rule my brother proposes? Should not the soundness of a principle be tested by the conclusions it entails, without reference to the accidents of later litigational history? 
Still, if this is so, why is it that we of this Court so often discuss the question whether we are likely to have later occasion to apply a principle urged for the solution of the case before us? Is this a situation where a line of reasoning not originally proper has become sanctioned by precedent, so that we are permitted to apply it and may even be under an obligation to do so? The more I examine this case and think about it, the more deeply I become involved. My mind becomes entangled in the meshes of the very nets I throw out for my own rescue. I find that almost every consideration that bears on the decision of the case is counterbalanced by an opposing consideration leading in the opposite direction. My brother Foster has not furnished to me, nor can I discover for myself, any formula capable of resolving the equivocations that beset me on all sides. I have given this case the best thought of which I am capable. I have scarcely slept since it was argued before us. When I feel myself inclined to accept the view of my brother Foster, I am repelled by a feeling that his arguments are intellectually unsound and approach mere rationalization. On the other hand, when I incline toward upholding the conviction, I am struck by the absurdity of directing that these men be put to death when their lives have been saved at the cost of the lives of ten heroic workmen. It is to me a matter of regret that the Prosecutor saw fit to ask for an indictment for murder. If we had a provision in our statutes making it a crime to eat human flesh, that would have been a more appropriate charge. If no other charge suited to the facts of this case could be brought against the defendants, it would have been wiser, I think, not to have indicted them at all. Unfortunately, however, the men have been indicted and tried, and we have therefore been drawn into this unfortunate affair. 
Since I have been wholly unable to resolve the doubts that beset me about the law of this case, I am with regret announcing a step that is, I believe, unprecedented in the history of this tribunal. I declare my withdrawal from the decision of this case.

Keen, J. I should like to begin by setting to one side two questions which are not before this Court. The first of these is whether executive clemency should be extended to these defendants if the conviction is affirmed. Under our system of government, that is a question for the Chief Executive, not for us. I therefore disapprove of that passage in the opinion of the Chief Justice in which he in effect gives instructions to the Chief Executive as to what he should do in this case and suggests that some impropriety will attach if these instructions are not heeded. This is a confusion of governmental functions - a confusion of which the judiciary should be the last to be guilty. I wish to state that if I were the Chief Executive I would go farther in the direction of clemency than the pleas addressed to him propose. I would pardon these men altogether, since I believe that they have already suffered enough to pay for any offense they may have committed. I want it to be understood that this remark is made in my capacity as a private citizen who by the accident of his office happens to have acquired an intimate acquaintance with the facts of this case. In the discharge of my duties as judge, it is neither my function to address directions to the Chief Executive, nor to take into account what he may or may not do, in reaching my own decision, which must be controlled entirely by the law of this Commonwealth. The second question that I wish to put to one side is that of deciding whether what these men did was "right" or "wrong," "wicked" or "good." That is also a question that is irrelevant to the discharge of my office as a judge sworn to apply, not my conceptions of morality, but the law of the land. 
In putting this question to one side I think I can also safely dismiss without comment the first and more poetic portion of my brother Foster's opinion. The element of fantasy contained in the arguments developed there has been sufficiently revealed in my brother Tatting's somewhat solemn attempt to take those arguments seriously. The sole question before us for decision is whether these defendants did, within the meaning of N. C. S. A. (n. s.) 12-A, willfully take the life of Roger Whetmore. The exact language of the statute is as follows: "Whoever shall willfully take the life of another shall be punished by death." Now I should suppose that any candid observer, content to extract from these words their natural meaning, would concede at once that these defendants did "willfully take the life" of Roger Whetmore. Whence arise all the difficulties of the case, then, and the necessity for so many pages of discussion about what ought to be so obvious? The difficulties, in whatever tortured form they may present themselves, all trace back to a single source, and that is a failure to distinguish the legal from the moral aspects of this case. To put it bluntly, my brothers do not like the fact that the written law requires the conviction of these defendants. Neither do I, but unlike my brothers I respect the obligations of an office that requires me to put my personal predilections out of my mind when I come to interpret and apply the law of this Commonwealth. Now, of course, my brother Foster does not admit that he is actuated by a personal dislike of the written law. Instead he develops a familiar line of argument according to which the court may disregard the express language of a statute when something not contained in the statute itself, called its "purpose," can be employed to justify the result the court considers proper. 
Because this is an old issue between myself and my colleague, I should like, before discussing his particular application of the argument to the facts of this case, to say something about the historical background of this issue and its implications for law and government generally. There was a time in this Commonwealth when judges did in fact legislate very freely, and all of us know that during that period some of our statutes were rather thoroughly made over by the judiciary. That was a time when the accepted principles of political science did not designate with any certainty the rank and function of the various arms of the state. We all know the tragic issue of that uncertainty in the brief civil war that arose out of the conflict between the judiciary, on the one hand, and the executive and the legislature, on the other. There is no need to recount here the factors that contributed to that unseemly struggle for power, though they included the unrepresentative character of the Chamber, resulting from a division of the country into election districts that no longer accorded with the actual distribution of the population, and the forceful personality and wide popular following of the then Chief Justice. It is enough to observe that those days are behind us, and that in place of the uncertainty that then reigned we now have a clear-cut principle, which is the supremacy of the legislative branch of our government. From that principle flows the obligation of the judiciary to enforce faithfully the written law, and to interpret that law in accordance with its plain meaning without reference to our personal desires or our individual conceptions of justice. I am not concerned with the question whether the principle that forbids the judicial revision of statutes is right or wrong, desirable or undesirable; I observe merely that this principle has become a tacit premise underlying the whole of the legal and governmental order I am sworn to administer. 
Yet though the principle of the supremacy of the legislature has been accepted in theory for centuries, such is the tenacity of professional tradition and the force of fixed habits of thought that many of the judiciary have still not accommodated themselves to the restricted role which the new order imposes on them. My brother Foster is one of that group; his way of dealing with statutes is exactly that of a judge living in the 3900's. We are all familiar with the process by which the judicial reform of disfavored legislative enactments is accomplished. Anyone who has followed the written opinions of Mr. Justice Foster will have had an opportunity to see it at work in every branch of the law. I am personally so familiar with the process that in the event of my brother's incapacity I am sure I could write a satisfactory opinion for him without any prompting whatever, beyond being informed whether he liked the effect of the terms of the statute as applied to the case before him. The process of judicial reform requires three steps. The first of these is to divine some single "purpose" which the statute serves. This is done although not one statute in a hundred has any such single purpose, and although the objectives of nearly every statute are differently interpreted by the different classes of its sponsors. The second step is to discover that a mythical being called "the legislator," in the pursuit of this imagined "purpose," overlooked something or left some gap or imperfection in his work. Then comes the final and most refreshing part of the task, which is, of course, to fill in the blank thus created. Quod erat faciendum. My brother Foster's penchant for finding holes in statutes reminds one of the story told by an ancient author about the man who ate a pair of shoes. Asked how he liked them, he replied that the part he liked best was the holes. That is the way my brother feels about statutes; the more holes they have in them the better he likes them. 
In short, he doesn't like statutes. One could not wish for a better case to illustrate the specious nature of this gap-filling process than the one before us. My brother thinks he knows exactly what was sought when men made murder a crime, and that was something he calls "deterrence." My brother Tatting has already shown how much is passed over in that interpretation. But I think the trouble goes deeper. I doubt very much whether our statute making murder a crime really has a "purpose" in any ordinary sense of the term. Primarily, such a statute reflects a deeply-felt human conviction that murder is wrong and that something should be done to the man who commits it. If we were forced to be more articulate about the matter, we would probably take refuge in the more sophisticated theories of the criminologists, which, of course, were certainly not in the minds of those who drafted our statute. We might also observe that men will do their own work more effectively and live happier lives if they are protected against the threat of violent assault. Bearing in mind that the victims of murders are often unpleasant people, we might add some suggestion that the matter of disposing of undesirables is not a function suited to private enterprise, but should be a state monopoly. All of which reminds me of the attorney who once argued before us that a statute licensing physicians was a good thing because it would lead to lower life insurance rates by lifting the level of general health. There is such a thing as overexplaining the obvious. If we do not know the purpose of 12-A, how can we possibly say there is a "gap" in it? How can we know what its draftsmen thought about the question of killing men in order to eat them? My brother Tatting has revealed an understandable, though perhaps slightly exaggerated revulsion to cannibalism. How do we know that his remote ancestors did not feel the same revulsion to an even higher degree? 
Anthropologists say that the dread felt for a forbidden act may be increased by the fact that the conditions of a tribe's life create special temptations toward it, as incest is most severely condemned among those whose village relations make it most likely to occur. Certainly the period following the Great Spiral was one that had implicit in it temptations to anthropophagy. Perhaps it was for that very reason that our ancestors expressed their prohibition in so broad and unqualified a form. All of this is conjecture, of course, but it remains abundantly clear that neither I nor my brother Foster knows what the "purpose" of 12-A is. Considerations similar to those I have just outlined are also applicable to the exception in favor of self-defense, which plays so large a role in the reasoning of my brothers Foster and Tatting. It is of course true that in Commonwealth v. Parry an obiter dictum justified this exception on the assumption that the purpose of criminal legislation is to deter. It may well also be true that generations of law students have been taught that the true explanation of the exception lies in the fact that a man who acts in self-defense does not act "willfully," and that the same students have passed their bar examinations by repeating what their professors told them. These last observations I could dismiss, of course, as irrelevant for the simple reason that professors and bar examiners have not as yet any commission to make our laws for us. But again the real trouble lies deeper. As in dealing with the statute, so in dealing with the exception, the question is not the conjectural purpose of the rule, but its scope. Now the scope of the exception in favor of self-defense as it has been applied by this Court is plain: it applies to cases of resisting an aggressive threat to the party's own life. 
It is therefore too clear for argument that this case does not fall within the scope of the exception, since it is plain that Whetmore made no threat against the lives of these defendants. The essential shabbiness of my brother Foster's attempt to cloak his remaking of the written law with an air of legitimacy comes tragically to the surface in my brother Tatting's opinion. In that opinion Justice Tatting struggles manfully to combine his colleague's loose moralisms with his own sense of fidelity to the written law. The issue of this struggle could only be that which occurred, a complete default in the discharge of the judicial function. You simply cannot apply a statute as it is written and remake it to meet your own wishes at the same time. Now I know that the line of reasoning I have developed in this opinion will not be acceptable to those who look only to the immediate effects of a decision and ignore the long-run implications of an assumption by the judiciary of a power of dispensation. A hard decision is never a popular decision. Judges have been celebrated in literature for their sly prowess in devising some quibble by which a litigant could be deprived of his rights where the public thought it was wrong for him to assert those rights. But I believe that judicial dispensation does more harm in the long run than hard decisions. Hard cases may even have a certain moral value by bringing home to the people their own responsibilities toward the law that is ultimately their creation, and by reminding them that there is no principle of personal grace that can relieve the mistakes of their representatives. Indeed, I will go farther and say that not only are the principles I have been expounding those which are soundest for our present conditions, but that we would have inherited a better legal system from our forefathers if those principles had been observed from the beginning. 
For example, with respect to the excuse of self-defense, if our courts had stood steadfast on the language of the statute the result would undoubtedly have been a legislative revision of it. Such a revision would have drawn on the assistance of natural philosophers and psychologists, and the resulting regulation of the matter would have had an understandable and rational basis, instead of the hodgepodge of verbalisms and metaphysical distinctions that have emerged from the judicial and professorial treatment. These concluding remarks are, of course, beyond any duties that I have to discharge with relation to this case, but I include them here because I feel deeply that my colleagues are insufficiently aware of the dangers implicit in the conceptions of the judicial office advocated by my brother Foster. I conclude that the conviction should be affirmed.

Handy, J. I have listened with amazement to the tortured ratiocinations to which this simple case has given rise. I never cease to wonder at my colleagues' ability to throw an obscuring curtain of legalisms about every issue presented to them for decision. We have heard this afternoon learned disquisitions on the distinction between positive law and the law of nature, the language of the statute and the purpose of the statute, judicial functions and executive functions, judicial legislation and legislative legislation. My only disappointment was that someone did not raise the question of the legal nature of the bargain struck in the cave - whether it was unilateral or bilateral, and whether Whetmore could not be considered as having revoked an offer prior to action taken thereunder. What have all these things to do with the case? The problem before us is what we, as officers of the government, ought to do with these defendants. That is a question of practical wisdom, to be exercised in a context, not of abstract theory, but of human realities. 
When the case is approached in this light, it becomes, I think, one of the easiest to decide that has ever been argued before this Court. Before stating my own conclusions about the merits of the case, I should like to discuss briefly some of the more fundamental issues involved - issues on which my colleagues and I have been divided ever since I have been on the bench. I have never been able to make my brothers see that government is a human affair, and that men are ruled, not by words on paper or by abstract theories, but by other men. They are ruled well when their rulers understand the feelings and conceptions of the masses. They are ruled badly when that understanding is lacking. Of all branches of the government, the judiciary is the most likely to lose its contact with the common man. The reasons for this are, of course, fairly obvious. Where the masses react to a situation in terms of a few salient features, we pick into little pieces every situation presented to us. Lawyers are hired by both sides to analyze and dissect. Judges and attorneys vie with one another to see who can discover the greatest number of difficulties and distinctions in a single set of facts. Each side tries to find cases, real or imagined, that will embarrass the demonstrations of the other side. To escape this embarrassment, still further distinctions are invented and imported into the situation. When a set of facts has been subjected to this kind of treatment for a sufficient time, all the life and juice have gone out of it and we have left a handful of dust. Now I realize that wherever you have rules and abstract principles lawyers are going to be able to make distinctions. To some extent the sort of thing I have been describing is a necessary evil attaching to any formal regulation of human affairs. But I think that the area which really stands in need of such regulation is greatly overestimated. 
There are, of course, a few fundamental rules of the game that must be accepted if the game is to go on at all. I would include among these the rules relating to the conduct of elections, the appointment of public officials, and the term during which an office is held. Here some restraint on discretion and dispensation, some adherence to form, some scruple for what does and what does not fall within the rule, is, I concede, essential. Perhaps the area of basic principle should be expanded to include certain other rules, such as those designed to preserve the free civilmoign system. But outside of these fields I believe that all government officials, including judges, will do their jobs best if they treat forms and abstract concepts as instruments. We should take as our model, I think, the good administrator, who accommodates procedures and principles to the case at hand, selecting from among the available forms those most suited to reach the proper result. The most obvious advantage of this method of government is that it permits us to go about our daily tasks with efficiency and common sense. My adherence to this philosophy has, however, deeper roots. I believe that it is only with the insight this philosophy gives that we can preserve the flexibility essential if we are to keep our actions in reasonable accord with the sentiments of those subject to our rule. More governments have been wrecked, and more human misery caused, by the lack of this accord between ruler and ruled than by any other factor that can be discerned in history. Once drive a sufficient wedge between the mass of people and those who direct their legal, political, and economic life, and our society is ruined. Then neither Foster's law of nature nor Keen's fidelity to written law will avail us anything. Now when these conceptions are applied to the case before us, its decision becomes, as I have said, perfectly easy. 
In order to demonstrate this I shall have to introduce certain realities that my brothers in their coy decorum have seen fit to pass over in silence, although they are just as acutely aware of them as I am. The first of these is that this case has aroused an enormous public interest, both here and abroad. Almost every newspaper and magazine has carried articles about it; columnists have shared with their readers confidential information as to the next governmental move; hundreds of letters-to-the-editor have been printed. One of the great newspaper chains made a poll of public opinion on the question, "What do you think the Supreme Court should do with the Speluncean explorers?" About ninety per cent expressed a belief that the defendants should be pardoned or let off with a kind of token punishment. It is perfectly clear, then, how the public feels about the case. We could have known this without the poll, of course, on the basis of common sense, or even by observing that on this Court there are apparently four-and-a-half men, or ninety per cent, who share the common opinion. This makes it obvious, not only what we should do, but what we must do if we are to preserve between ourselves and public opinion a reasonable and decent accord. Declaring these men innocent need not involve us in any undignified quibble or trick. No principle of statutory construction is required that is not consistent with the past practices of this Court. Certainly no layman would think that in letting these men off we had stretched the statute any more than our ancestors did when they created the excuse of self-defense. If a more detailed demonstration of the method of reconciling our decision with the statute is required, I should be content to rest on the arguments developed in the second and less visionary part of my brother Foster's opinion. Now I know that my brothers will be horrified by my suggestion that this Court should take account of public opinion. 
They will tell you that public opinion is emotional and capricious, that it is based on half-truths and listens to witnesses who are not subject to cross-examination. They will tell you that the law surrounds the trial of a case like this with elaborate safeguards, designed to insure that the truth will be known and that every rational consideration bearing on the issues of the case has been taken into account. They will warn you that all of these safeguards go for naught if a mass opinion formed outside this framework is allowed to have any influence on our decision. But let us look candidly at some of the realities of the administration of our criminal law. When a man is accused of crime, there are, speaking generally, four ways in which he may escape punishment. One of these is a determination by a judge that under the applicable law he has committed no crime. This is, of course, a determination that takes place in a rather formal and abstract atmosphere. But look at the other three ways in which he may escape punishment. These are: (1) a decision by the Prosecutor not to ask for an indictment; (2) an acquittal by the jury; (3) a pardon or commutation of sentence by the executive. Can anyone pretend that these decisions are held within a rigid and formal framework of rules that prevents factual error, excludes emotional and personal factors, and guarantees that all the forms of the law will be observed? In the case of the jury we do, to be sure, attempt to cabin their deliberations within the area of the legally relevant, but there is no need to deceive ourselves into believing that this attempt is really successful. In the normal course of events the case now before us would have gone on all of its issues directly to the jury. Had this occurred we can be confident that there would have been an acquittal or at least a division that would have prevented a conviction. 
If the jury had been instructed that the men's hunger and their agreement were no defense to the charge of murder, their verdict would in all likelihood have ignored this instruction and would have involved a good deal more twisting of the letter of the law than any that is likely to tempt us. Of course the only reason that didn't occur in this case was the fortuitous circumstance that the foreman of the jury happened to be a lawyer. His learning enabled him to devise a form of words that would allow the jury to dodge its usual responsibilities. My brother Tatting expresses annoyance that the Prosecutor did not, in effect, decide the case for him by not asking for an indictment. Strict as he is himself in complying with the demands of legal theory, he is quite content to have the fate of these men decided out of court by the Prosecutor on the basis of common sense. The Chief Justice, on the other hand, wants the application of common sense postponed to the very end, though like Tatting, he wants no personal part in it. This brings me to the concluding portion of my remarks, which has to do with executive clemency. Before discussing that topic directly, I want to make a related observation about the poll of public opinion. As I have said, ninety per cent of the people wanted the Supreme Court to let the men off entirely or with a more or less nominal punishment. The ten per cent constituted a very oddly assorted group, with the most curious and divergent opinions. One of our university experts has made a study of this group and has found that its members fall into certain patterns. A substantial portion of them are subscribers to "crank" newspapers of limited circulation that gave their readers a distorted version of the facts of the case. Some thought that "Speluncean" means "cannibal" and that anthropophagy is a tenet of the Society. 
But the point I want to make is this: although almost [*1872] every conceivable variety and shade of opinion was represented in this group, there was, so far as I know, not one of them, nor a single member of the majority of ninety per cent, who said, "I think it would be a fine thing to have the courts sentence these men to be hanged, and then to have another branch of the government come along and pardon them." Yet this is a solution that has more or less dominated our discussions and which our Chief Justice proposes as a way by which we can avoid doing an injustice and at the same time preserve respect for law. He can be assured that if he is preserving anybody's morale, it is his own, and not the public's, which knows nothing of his distinctions. I mention this matter because I wish to emphasize once more the danger that we may get lost in the patterns of our own thought and forget that these patterns often cast not the slightest shadow on the outside world. I come now to the most crucial fact in this case, a fact known to all of us on this Court, though one that my brothers have seen fit to keep under the cover of their judicial robes. This is the frightening likelihood that if the issue is left to him, the Chief Executive will refuse to pardon these men or commute their sentence. As we all know, our Chief Executive is a man now well advanced in years, of very stiff notions. Public clamor usually operates on him with the reverse of the effect intended. As I have told my brothers, it happens that my wife's niece is an intimate friend of his secretary. I have learned in this indirect, but, I think, wholly reliable way, that he is firmly determined not to commute the sentence if these men are found to have violated the law. No one regrets more than I the necessity for relying in so important a matter on information that could be characterized as gossip.
If I had my way this would not happen, for I would adopt the sensible course of sitting down with the Executive, going over the case with him, finding out what his views are, and perhaps working out with him a common program for handling the situation. But of course my brothers would never hear of such a thing. Their scruple about acquiring accurate information directly does not prevent them from being very perturbed about what they have learned indirectly. Their acquaintance with the facts I have just related explains why the Chief Justice, ordinarily a model of decorum, saw fit in his opinion to flap his judicial robes in the face of the Executive and threaten him with excommunication if he failed to commute the sentence. It explains, I suspect, my brother Foster's feat of levitation by which a whole library of law books was lifted from the shoulders of these defendants. It explains also why even my legalistic brother Keen emulated Pooh-Bah in the ancient comedy by stepping to the other side of the stage to address a few remarks to the Executive "in my capacity as a private citizen." (I may remark, incidentally, that [*1873] the advice of Private Citizen Keen will appear in the reports of this court printed at taxpayers' expense.) I must confess that as I grow older I become more and more perplexed at men's refusal to apply their common sense to problems of law and government, and this truly tragic case has deepened my sense of discouragement and dismay. I only wish that I could convince my brothers of the wisdom of the principles I have applied to the judicial office since I first assumed it. As a matter of fact, by a kind of sad rounding of the circle, I encountered issues like those involved here in the very first case I tried as Judge of the Court of General Instances in Fanleigh County. A religious sect had unfrocked a minister who, they said, had gone over to the views and practices of a rival sect. 
The minister circulated a handbill making charges against the authorities who had expelled him. Certain lay members of the church announced a public meeting at which they proposed to explain the position of the church. The minister attended this meeting. Some said he slipped in unobserved in a disguise; his own testimony was that he had walked in openly as a member of the public. At any rate, when the speeches began he interrupted with certain questions about the affairs of the church and made some statements in defense of his own views. He was set upon by members of the audience and given a pretty thorough pommeling, receiving among other injuries a broken jaw. He brought a suit for damages against the association that sponsored the meeting and against ten named individuals who he alleged were his assailants. When we came to the trial, the case at first seemed very complicated to me. The attorneys raised a host of legal issues. There were nice questions on the admissibility of evidence, and, in connection with the suit against the association, some difficult problems turning on the question whether the minister was a trespasser or a licensee. As a novice on the bench I was eager to apply my law school learning and I began studying these questions closely, reading all the authorities and preparing well-documented rulings. As I studied the case I became more and more involved in its legal intricacies and I began to get into a state approaching that of my brother Tatting in this case. Suddenly, however, it dawned on me that all these perplexing issues really had nothing to do with the case, and I began examining it in the light of common sense. The case at once gained a new perspective, and I saw that the only thing for me to do was to direct a verdict for the defendants for lack of evidence. I was led to this conclusion by the following considerations.
The melee in which the plaintiff was injured had been a very confused affair, with some people trying to get to the center of the disturbance, while others were trying to get away from it; some striking at the plaintiff, while others were apparently trying to protect him. It would have taken weeks to find out the truth of the matter. I decided that [*1874] nobody's broken jaw was worth that much to the Commonwealth. (The minister's injuries, incidentally, had meanwhile healed without disfigurement and without any impairment of normal faculties.) Furthermore, I felt very strongly that the plaintiff had to a large extent brought the thing on himself. He knew how inflamed passions were about the affair, and could easily have found another forum for the expression of his views. My decision was widely approved by the press and public opinion, neither of which could tolerate the views and practices that the expelled minister was attempting to defend. Now, thirty years later, thanks to an ambitious Prosecutor and a legalistic jury foreman, I am faced with a case that raises issues which are at bottom much like those involved in that case. The world does not seem to change much, except that this time it is not a question of a judgment for five or six hundred frelars, but of the life or death of four men who have already suffered more torment and humiliation than most of us would endure in a thousand years. I conclude that the defendants are innocent of the crime charged, and that the conviction and sentence should be set aside.

Tatting, J. I have been asked by the Chief Justice whether, after listening to the two opinions just rendered, I desire to reexamine the position previously taken by me. I wish to state that after hearing these opinions I am greatly strengthened in my conviction that I ought not to participate in the decision of this case.

The Supreme Court being evenly divided, the conviction and sentence of the Court of General Instances is affirmed.
It is ordered that the execution of the sentence shall occur at 6 a.m., Friday, April 2, 4300, at which time the Public Executioner is directed to proceed with all convenient dispatch to hang each of the defendants by the neck until he is dead.

Postscript

Now that the court has spoken its judgment, the reader puzzled by the choice of date may wish to be reminded that the centuries which separate us from the year 4300 are roughly equal to those that have passed since the Age of Pericles. There is probably no need to observe that the Speluncean Case itself is intended neither as a work of satire nor as a prediction in any ordinary sense of the term. As for the judges who make up Chief Justice Truepenny's court, they are, of course, as mythical as the facts and precedents with which they deal. The reader who refuses to accept this view, and who seeks to trace out contemporary resemblances where none is intended or contemplated, should be warned that he is engaged in a frolic of his own, which may possibly lead him to miss whatever modest truths are contained in the opinions delivered by the Supreme Court of Newgarth. The case was constructed for the sole purpose of bringing into a common focus certain [*1875] divergent philosophies of law and government. These philosophies presented men with live questions of choice in the days of Plato and Aristotle. Perhaps they will continue to do so when our era has had its say about them. If there is any element of prediction in the case, it does not go beyond a suggestion that the questions involved are among the permanent problems of the human race.

THE CASE OF THE SPELUNCEAN EXPLORERS REVISITED

Kozinski, J. *

[*1876] In the days when the judges ruled, a great famine came upon the land ... Ruth 1:1

The statute under which defendants were convicted could not be clearer. It provides that "whoever shall willfully take the life of another shall be punished by death." N. C. S. A. (n. s.) 12-A.
These thirteen simple English words are not unclear or ambiguous; they leave no room for interpretation; they allow for no exercise of judgment. (It would be different, of course, if the statute contained such inherently ambiguous terms as "is," "alone," or "have sex" - which might mean anything to anybody - but fortunately it doesn't.) Statutory construction in this case is more accurately described as statutory reading. In these circumstances, a conscientious judge has no choice but to apply the law as the legislature wrote it. As the jury found, Roger Whetmore did not die of illness, starvation, or accident; rather, he was killed by the defendants. And the killing was not the result of accident or negligence; it was willful homicide. Indeed, defendants thought long and hard before they acted, even going to the trouble of consulting physicians and other outside advisors. Under the law of Newgarth, which we have sworn to apply, we must affirm the conviction. Defendants argue this result is unjust and ask us to make an exception because of the difficult and unusual circumstances in which they found themselves. They claim it is perverse, possibly hypocritical, to punish them for acts that even the best among us might have committed, had we found ourselves in the same predicament. These are good arguments, presented to the wrong people. There was a time in our history, during the age known as the common law, when judges did not merely interpret laws, they actually made them. At common law, when the legislature was seldom in session and statutes were few and far between, judges developed the law on a case-by-case basis. One case would announce a rule that, when applied to unanticipated facts, reached an absurd result. The judges would then consult their common sense - their sense of justice - and modify the rule to take account of the novel circumstances. At common [*1877] law, justice meant tweaking a harsh rule to reach a sensible result.
But we are not common law judges; we are judges in an age of statutes. For us, justice consists of applying the laws passed by the legislature, precisely as written by the legislature. Unlike common law judges, we have no power to bend the law to satisfy our own sense of right and wrong. As a wise jurist once observed, "judicial discomfort with a surprisingly harsh rule is not enough to permit its revision." United States v. Fountain, 840 F.2d 509, 519 (7th Cir. 1988) (Easterbrook, J.). That we may feel sympathy for the defendants - that any of us might be in their place but for the grace of God - gives us no authority to ignore the will of the citizens of Newgarth, as embodied in their duly enacted laws. (Unless, of course, the laws violate the Newgarth Constitution - which the law here does not.) This case illustrates why justice is too elusive a concept to be left to judges. Before us stand sympathetic defendants, represented by silver-tongued lawyers who argue that their clients had no choice but to kill Whetmore. "If they had to eat, you must acquit," they tell us. The reality is more doubtful. Defendants were told there was "little possibility," supra, at 1852 (Truepenny, C.J.) - not "no possibility" - they would survive for the ten more days it would take to rescue them. The human body can be strangely resilient and oftentimes surprises us. For example, in late twentieth-century America, Karen Ann Quinlan lived for nine years after life support systems were removed from her comatose body - contrary to doctors' predictions that she would die at once if life support were removed. See Cruzan v. Harmon, 760 S.W.2d 408, 413 & n.6 (Mo. 1988), aff'd sub nom. Cruzan v. Director, Mo. Dep't of Health, 497 U.S. 261 (1990). Had defendants not taken Whetmore's life, everyone in the group might have survived. And if all had not survived, one surely would have died first, and that unfortunate fellow's body could have been eaten by the rest. 
Whetmore himself seemed to think that survival for another week was possible; why were the others in such a rush to shed blood? Whether the deliberate killing of a human being under these circumstances should be criminal is a difficult question. It must be answered by the conscience of the community, and that conscience is better gauged by the 535 members of the Newgarth legislature than by six unelected, effectively unremovable judges. Defendants also argue that the Newgarth legislature could not have meant what it said - that it must have overlooked a case such as theirs. But defendants are not the first to have suffered this predicament. More than two millennia have passed since Regina v. Dudley and Stephens, 14 Q.B.D. 273 (1884), which raised precisely the same question, and United States v. Holmes, 26 F. Cas. 360 (C.C.E.D. Pa. 1842) (No. 15,383), which dealt with a closely analogous situation. Unfortunate incidents like these do happen from time to time, and we [*1878] must presume the legislature was aware of them, yet chose not to make an exception. But even if this were a case of legislative oversight, it would make no difference. We are not free to ignore or augment the legislature's words just because we think it would have said something else, had it but thought of it. Next week we may have a case in which a man is sentenced to death for killing his dog. Could we affirm the sentence if we were persuaded that the legislature would have made canicide a capital offense, had it but thought of it? Surely not. If putting these defendants to death is unjust - if it offends the sense of the community - relief must come from the organs of government best equipped to judge what the community wants. Contrary to defendants' claim that they have widespread support among the population, elected officials have been strangely deaf to their pleas. The Newgarth legislature - which is almost always in session nowadays - could have amended N. C. S. A. (n. s.) 
12-A to make an exception for defendants' case. Any such law could have been made expressly applicable to the defendants, as the Newgarth Constitution contains no reverse ex post facto or bill of attainder clauses. Then again, the Attorney General could have chosen not to prosecute, or to prosecute for a lesser offense. The grand jury - sometimes referred to as the conscience of the community - could have refused to indict, but indict it did. And the petit jury could have exercised its power of nullification by returning a not guilty verdict if convicting defendants offended its collective conscience. See Paul Butler, Racially Based Jury Nullification: Black Power in the Criminal Justice System, 105 Yale L.J. 677, 700-01 (1995). It would be arrogant for us to pretend that we know better than all these other public officials what justice calls for in this case. The political process may yet come to the defendants' rescue, or it may not. But it is in the political arena that defendants must seek relief if they believe the law, as applied to them, has reached an unjust result. We serve justice when we apply the law as written.

* * *

Although this concludes my analysis, I pause to comment on the views expressed by my colleagues. Some of them, see infra, at 1913 (Easterbrook, J.); infra, at 1884-85 (Sunstein, J.), infer judicial authority to read exceptions and defenses into N. C. S. A. (n. s.) 12-A from the fact that the statute, if read literally, would condemn willful killings by police, executioners, and those acting in self-defense. This presupposes that section 12-A is the only statute bearing on this issue, which it surely is not. In a statutory system, the definition of murder is written in categorical terms, as in section 12-A, while other provisions define justifiable homicide, such as legal authority and self-defense [*1879] (archaic examples dating from as far back as the twentieth century include sections 196 and 197 of California's Penal Code, Cal.
Penal Code §§ 196-197 (West 1988), and section 35.05 of the New York Penal Law, N.Y. Penal Law § 35.05 (Consol. 1998)), and excusable homicide caused by accident or misfortune during a lawful activity (to give another twentieth-century example, section 195 of California's Penal Code, Cal. Penal Code § 195). Defendants have not cited any of the provisions dealing with justification or excuse, doubtless because they do not apply. But that doesn't mean they don't exist, or that the legislature gave judges blanket authority to cut holes into the statute whenever the spirit so moves them. The folly of this approach is perhaps best illustrated by Justice Easterbrook, who finds justification here based on an easy calculus: the killing is justified if there is a net savings in lives. See infra, at 1915 (Easterbrook, J.). But, as Justice West ably demonstrates, there are many situations where one could offer such a justification - the case of the conscripted organ donor for example. See infra, at 1896 (West, J.). Justice Easterbrook offers a "negotiation" rationale for his conclusion - he infers that the spelunceans would have preferred to enter the cave under a regime where one would be sacrificed to feed the rest rather than under a regime where all would starve. See infra, at 1915-16 (Easterbrook, J.). One could just as easily hypothesize a negotiation as to organ donation: any group of five people (one healthy and four needing his organs) could be supposed to have made a pact, while they were all still healthy, to sacrifice the one among them whose organs would be needed to save the rest. Under Justice Easterbrook's rationale, the four would be justified in hunting down a fifth and ransacking his body for vital organs. The parties here did negotiate but failed to reach agreement because Whetmore refused to go along with the bargain; he, at least as of that time, thought that a one in five chance of being killed and eaten was worse than the alternative.
My brother Easterbrook rejects this actual negotiation in favor of a hypothetical one where the outcome is dictated entirely by his personal preferences, but he gives no satisfactory reason for doing so. The negotiations actually conducted between the parties - where death was imminent and the risks concrete - are surely a better indication of what agreement would be reached by people in dire straits than Justice Easterbrook's musings about what imaginary explorers, faced with a remote and hypothetical risk, would decide if they took the trouble to think about it. This is a case of a judge who will not let mere facts stand in the way of a perfectly good theory. It demonstrates, better than anything I might say, the danger of appointing academics to the bench. I am more sanguine about the approach taken by my brother Sunstein, though he dithers mightily before he gets to the point. Unlike Justice Easterbrook - who lightly undertakes to weigh life and death [*1880] whichever way his fancy strikes him - Justice Sunstein at least announces a constraining principle: where the statute is clear, we can ignore its plain meaning only when it reaches an absurd result. See infra, at 1883-84 (Sunstein, J.). And he rightly concludes that application of the statute to this case does not reach an absurd result. See id. at 1883-84. Though Justice Sunstein makes the case harder than it need be, I agree with Parts II and III of his opinion because they articulate a workable principle of law that does not depend unduly on the value system of the judge applying it. Which is more than I can say for the opinion of my sister De Bunker. Aside from the fact that she is a Godless heathen - for which she will suffer the tortures of the Gehenna until the coming of the Messiah (which won't be too much longer now if we keep writing opinions like these) - her rationale is, not to put too fine a point on it, odd.
As I understand her position, she believes that the defendants acted lawfully because the legislature did not specifically prohibit the killing and eating of someone under these circumstances. See infra, at 1912 (De Bunker, J.). The general prohibition against willful killing is not enough, De Bunker tells us; the legislature had to enact an affirmative prohibition. See id. at 1905. But the legislature also did not affirmatively prohibit killing on Tuesday, or killing for the purpose of harvesting body parts, or killing by someone who can achieve sexual gratification only when his partner succumbs. Nor did the legislature pass laws that specifically prohibit stealing from the rich to give to the poor, though many people believe it's entirely justifiable and have since the days of Robin Hood and Goldilocks. Were Justice De Bunker's rationale to become the law of the land, the legislature would spend its entire time reenacting every law it has already passed, only to say: Yes, we really mean for it to apply in this circumstance or that. And who can tell what special circumstances require affirmative legislative action? Not until the matter is brought before our Court will the legislature learn whether a particular situation is covered by the general rule or requires a specific prohibition - in which case the misconduct suddenly becomes lawful. Nor is this the only danger. Once the legislature is forced to abandon general statutes in favor of multiple specific prohibitions, the problem arises of how to deal with the interstices. If the statute prohibits theft of currency, and theft of bullion, and theft of negotiable securities - rather than merely theft of property - what happens when someone steals something not covered by one of the specific prohibitions, like ancient Krugerrands? Inclusio unius est exclusio alterius, will argue the defendants. 
Even though Krugerrands are in all material respects the same as bullion and currency, the listing of the latter two raises the inference that the third was meant to be omitted. Surely, the legislature must be permitted to outlaw a generic evil and then create specific exemptions where they appear to be warranted. [*1881] Justice De Bunker's system would quickly devolve into such chaos that a party who could afford a battery of clever lawyers would get away with murder. But for two reservations, I would be inclined to join my sister West's opinion. The two reservations, however, are substantial. Although I agree with much of what she says about the need for the law to be applied equally - and with her trenchant observation that failure to prosecute certain crimes is a species of discrimination visited upon the victims of those crimes, see infra, at 1894-95 (West, J.) - I believe she goes too far. The clear implication of Justice West's opinion is that the legislature here could not have passed a statute authorizing the killing of Whetmore under the circumstances of this case, because to do so would have posthumously withdrawn from Whetmore the right to equal protection of the laws. Presumably, she also believes it would have been a denial of equal protection for the Attorney General not to prosecute the defendants or for the Chief Executive to grant them a pardon, because each of these actions (or inactions) would deny Whetmore (and future Whetmores) the protection of law when they need it most. With this I cannot agree. As I said earlier, I believe that the legislature could properly conclude that the conduct here should not be criminal - and indeed could still do so. I do not agree that this would amount to a withdrawal of equal protection; it would merely adjust rights and responsibilities to reflect conflicting values. 
Because, as Justice Sunstein explains, this is not an absurd (or, I might add, invidious) choice, see infra, at 1888 (Sunstein, J.), I would leave it open to the legislature. The matter would be different for me if the legislature made a wholly irrational or invidious exception to a generally applicable law, such as legalizing murder or theft in poor neighborhoods. My other reservation about Justice West's opinion, of course, concerns her ruling as to the sentence. I need not dwell on our standing dispute as to whether the imposition of a sentence - particularly a death sentence - must be conditioned on the implementation of a mitigation principle that allows the sentencer to grant defendants "merciful justice," infra, at 1899 (West, J.). I find even more troubling the remedy she adopts, namely the remand for a mitigation hearing. What exactly will happen during the course of such a hearing? Presumably the defendants will try to persuade the judge or jury not to impose the death sentence. But what if they succeed? Our law authorizes death as the only punishment for violating N. C. S. A. (n. s.) 12-A. What can the sentencer do if it is persuaded that the death penalty here is too harsh? May it order whatever other punishment it believes fits the crime, such as whipping, nailing defendants' ears to the pillory, community service, amputation, or exile? My colleague may believe that the judge or jury would order defendants imprisoned, but I don't see where that punishment is authorized any more than [*1882] those listed above. The statute provides only one punishment for the crime of willful homicide, and imprisonment is not it. Were the jury to impose a term of years, we would be required to set defendants free because they would be held without legal authority. What can I say about my sister Stupidest Housemaid's opinion, as she has retreated into one of her occasional "other voices" methods of analysis? 
While I find her methodology refreshing and wish the rest of us had the courage and imagination to forsake our "whereases" and "wherefores" for a more colloquial form of discourse, in the end I believe she errs even on her own terms. If I understand Justice Stupidest Housemaid's approach, she is voting to reverse the conviction because she does not feel bound by the terms of N. C. S. A. (n. s.) 12-A. And she does not feel bound because she believes that there is no such thing as a rule of law - in her words "the law can often be argued every which way but up." Infra, at 1920 (Stupidest Housemaid, J.). My sister instead judges this case by her moral sense. Justice Stupidest Housemaid also recognizes, however, that "it would be useful for the rule of law to exist," and that "it may even be true that the servant needs a rule of law more than the master." Id. at 1922. Yet she does not take the opportunity to announce how the rule of law should apply in these circumstances, or to try to persuade a majority of the court to do so. Rather, she revels in what she sees as the absence of a rule of law, in a raw exercise of judicial power. This is too bad, because it might be useful to hear Justice Stupidest Housemaid's explication of how a fair and neutral law might be applied in this case. She gives us tantalizing hints, but fails to follow through. For example, she observes that the spelunceans' activities resulted in a great expenditure of resources and the death of ten workers. She says that defendants ought to be held responsible for those deaths. See id. at 1919. Perhaps so, yet Justice Stupidest Housemaid abandons that thought without bringing it to its logical conclusion. I don't understand why. Defendants, after all, stand convicted of murder. The conviction is based on the record developed at trial, which includes information about the ten dead workers.
Because Justice Stupidest Housemaid has abandoned the statute as a guide of decision and, instead, uses her moral sense as a compass, she could well affirm the convictions on the ground that defendants caused the deaths of the workers. Such analysis would proceed along the lines of Justice Stupidest Housemaid's opinion. She should start by asking whether what defendants did was morally reprehensible. See id. at 1918. I infer she would say yes: Defendants went into the cave, exposed themselves to danger, knowing full well that if they got into trouble great efforts would be made to rescue them - wasting valuable resources and endangering the lives of the rescuers. As Judge Cardozo said long ago, [*1883] "Danger invites rescue." Wagner v. International Ry. Co., 133 N.E. 437, 437 (N.Y. 1921). Second, my sister Stupidest Housemaid would look to deterrence. You can bet that if these defendants were convicted of murder for the death of the rescuers, that would make future billionaires think twice and three times about risking their lives in balloons and the like. In terms of incapacitation, we need not worry about those same billionaires doing it again. As for rehabilitation, the death penalty probably would not achieve that end, but three out of four ain't bad. Of course there are some gaps to fill, like the fact that defendants were not charged with killing the workers. But these are the kind of meaningless legal formalisms that my sister Stupidest Housemaid disdains. As she is fond of saying, "When you is sittin on top, you can spit on them below and they can't spit back." (Actually, she says something very close to this, but I changed one little word out of a sense of decorum.) To which I would add, "If you gonna spit, don't spit in the wind." Which is by way of saying: How does it help the cause of the poor, of the oppressed, of the people of color, to let these four rich white guys walk when the law pretty clearly says they're guilty?
It seems to me that my sister Stupidest Housemaid got bit by the white man's bug: "When white folks sacrifice white lives for the greater good, it's a big confusing problem." Id. at 1923. But Justice Stupidest Housemaid doesn't need to make "a big confusing problem" out of it. She can simply apply the white folks' law to these white folks and - according to her own lights - they'd get their just deserts. Why should the stupidest housemaid work so hard to pull her master's chestnuts out of the fire?

Sunstein, J. *

The defendants must be convicted. Their conduct falls within the literal language of the statute, and the outcome is not so absurd, or so peculiar, as to justify this Court in creating, via interpretation, an exception to that literal language. Whether a justification or excuse would be created in more compelling circumstances is a question that I leave undecided. I also leave undecided the question whether the defendants might be able to mount a separate procedural challenge, on constitutional grounds, to the death sentence in this case. In the process of supporting these conclusions, I suggest a general approach to issues of this kind: Apply the ordinary meaning of statutory language, taken in its context, unless the outcome is so absurd as to suggest that it is altogether different from the exemplary cases that [*1884] account for the statute's existence, or unless background principles, of constitutional or similar status, require a different result.

I

I confess that I am tempted to resolve this case solely by reference to the simple language of the statute that we are construing. The basic question is whether the defendants have "willfully taken the life," N. C. S. A. (n. s.) 12-A, of another human being. At first glance, it seems clear that the statutory requirements have been met. Perhaps we should simply declare the case to be at an end.
An approach of this kind would have the benefit of increasing certainty for the future, in a way that reduces difficulty for later courts, and also for those seeking to know the content of the law. This approach enables people to plan and keeps the law's signal clear; the increased certainty is an important advantage. Such an approach also tends to impose appropriate incentives on the legislature to be clear before the fact and to make corrections after the fact. I would go so far as to suggest that a presumption in favor of the ordinary meaning of enacted law, taken in its context, is a close cousin of the void-for-vagueness doctrine, n1 which is an important part of the law of this jurisdiction with respect to both contracts and statutory law. By insisting on the ordinary meaning of words, and by refusing to enforce contracts and statutes that require courts to engage in guessing games, we can require crucial information to be provided to all relevant parties, and in the process greatly increase clarity in the law. Nor is this a case in which a statutory phrase is properly understood as ambiguous or unclear. We do not have a term like "equal," "reasonable," or "public policy," whose content may require sustained deliberation or even change over time. It may be possible to urge that the statutory term "willfully" creates ambiguity, but I cannot see how this is so. There is no question that the defendants acted willfully under any possible meaning of that term. There is nothing wooden, or literal in any pejorative sense, in saying that the words here are clear. I have been tempted to write an opinion to this effect and to leave it at that. But both principle and precedent make me unwilling to take this route. As a matter of principle, it is possible to imagine cases that fit the terms of this statute but for which the outcome is nonetheless so peculiar and unjust that it would be absurd to apply those terms literally or mechanically. 
In any case, our own jurisprudence forbids an opinion here that would rest entirely on the statutory text. For centuries, it has been clear that the prohibition in N. C. S. A. (n. s.) [*1885] 12-A does not apply to those who kill in self-defense, even though there is no express statutory provision making self-defense a legally sufficient justification. Our conclusion to this effect is based not on literal language, but on the (literal) absurdity of not allowing self-defense to count as a justification. Those justices who purport to be "textualists" here are running afoul of well-established law; I cannot believe that they would remain "textualists" in a genuine case of self-defense. Nor is it clear that the statute would apply, for example, to a police officer (or for that matter a private citizen) who kills a terrorist to protect innocent third parties - whether or not there is an explicit provision for justification or excuse in those circumstances. Where the killing is willful, but necessary to prevent a wrongdoer from causing loss of innocent life, a mechanical or literal approach to this statute would make nonsense of the law. A statute of this breadth creates a risk not of ambiguity, but of excessive generality - the distinctive sort of interpretive puzzle that arises when broad terms are applied to situations for which they could not possibly have been designed and in which they make no sense. A possible response would be that to promote predictability, excessive generality should not be treated as a puzzle at all; we must follow the natural meaning of the words, come what may. But as I have suggested, our self-defense jurisprudence makes this argument unavailable in the current context. But put the precedents to one side. In ordinary parlance, people routinely counteract excessive generality, and thank goodness for that. For example, a parent may tell his child: "Do not leave the house under any circumstances!" But what if there is a fire? 
A judge may tell his law clerk: "Do not change a single word in this opinion!" But what if by accident, the word "not" was (not?) inserted in the last sentence? Interpreting statutes so as to avoid absurdity could not plausibly undermine predictability in any large-scale or global sense. Nor is it clear that absurdity would be corrected by the legislature before or after the fact. Whether the legislature would correct the absurdity is an empirical possibility, and it is no more than that. Even the most alert people have imperfect powers of foresight, and even the most alert legislature cannot possibly anticipate all applications of its terms. I conclude that when the application of general language would produce an absurd outcome, there is a genuine puzzle for interpretation, and it is insufficient to invoke the words alone. The time-honored notion that criminal statutes will be construed leniently to the criminal defendant strengthens this point. I am therefore unwilling to adopt an approach that would, in all cases, commit our jurisprudence to taking statutory terms in their ordinary sense. [*1886]

II

As I will suggest, the key to this case lies in showing that the best argument for the defendants is unavailable, because a conviction here would not be analogous to a conviction in the most extreme or absurd applications of the statutory terms. But before discussing that point, I pause to deal with some alternative approaches. Troubled by a conviction in this case, the defendants and several members of this Court have urged some creative alternatives. It is suggested, for example, that under the extreme circumstances of the collapse of the cave opening, the law of civil society was suspended and replaced by some kind of law of nature. See supra, at 1855 (Foster, J.). To the extent that this argument is about a choice of law problem, I do not accept it. 
There is no legitimate argument that the law of some other jurisdiction applies to this case, and I do not know what is meant by the idea of the "law of nature." The admittedly extreme circumstances themselves do not displace the positive law of this state. Extreme circumstances are the stuff of hard cases, and what makes for the difficulty is the extreme nature of the circumstances, not anything geographical. The question is what the relevant law means in such circumstances, and to say that the law does not "apply" seems to me a dodge. The view that extreme circumstances remove the law's force is a conclusion, not an argument. Nor is this a case in which a constitutional principle, or a principle with constitution-like status, justifies an aggressive construction of the statute so as to make it conform to the rest of the fabric of our law. When a statute poses a problem of excessive generality, a court may properly avoid an application that would raise serious problems under the Constitution, including, for example, the Equal Protection Clause, the First Amendment, or the Due Process Clause. If a legislature intends to raise those issues, it should be required to focus on them with some particularity. Though it cuts in a different direction from the "plain meaning" idea, this principle is also a close cousin of the void-for-vagueness doctrine, designed to require legislative, rather than merely judicial, deliberation on the underlying question. But there is no such question here. Several members of this Court emphasize the "purpose" of the law. See, e.g., supra, at 1857 (Foster, J.). They claim that the defendants should not be convicted because while their actions fall within the statute's letter, they do not fall within its purpose. I have considerable sympathy for this general approach, which is not terribly far from my own, and I do not deny that purpose can be a helpful guide when statutory terms are ambiguous. 
Statutes should be construed reasonably rather than unreasonably, and when we do not know what statutory terms mean, it is legitimate to obtain a sense of the reasonable [*1887] goals that can be taken to animate them and to interpret them in this light. But there are two problems with making purpose decisive here. First, there is no ambiguity in the statutory terms; when text is clear, resort to purpose can be hazardous. Second, the purpose of any statute can be defined in many different ways and at many levels of generality; and at least in a case of this kind, it is most unclear which characterization to choose. Is the purpose of this statute to reach any intentional killing? Any intentional killing without sufficient justification? Any intentional killing not made necessary by the circumstances? To reach willful killings while at the same time limiting judicial discretion? To make the world better on balance? Any answer to these questions will not come from the statute itself; it is a matter not of excavating something but of constructing it. Where the statute is not ambiguous, we do best to follow its terms, at least when the outcome is not absurd. It is that question to which I now turn.

III

Thus far, I have urged a particular view of this case: the statute contains no linguistic ambiguity. At most, the statute raises the distinctive interpretive problem created by excessive generality. We have long held that self-defense is available by way of justification. It is unclear whether - and we need not decide whether - the statute would or should be inapplicable to some other cases in which a life was taken "willfully" in order to prevent the death of innocent people. For purposes of analysis let us assume, without deciding, that the statute would and should not be so applied. The question then is whether this case is sufficiently like such cases. 
If it is, then we will have to reach the difficult question of whether an exemption would be allowed in those extreme cases. In cases that seem to raise a problem of excessive generality, it is often useful to proceed by identifying the exemplary or prototypical cases, that is, the cases that are most clearly covered by the statute. I do not mean to suggest that a statute's reach is limited to such cases; generally it is not. But an identification of the prototypical or exemplary cases can help in the decision whether an application is so far afield as to justify an exemption. The exemplary or prototypical cases within the purview of this statute include those of willful killing of an innocent party, motivated by anger, greed, or self-interest. It is also possible to imagine cases that are at an opposite pole but that seem covered by the statute's literal language: when a defendant has killed someone who has jeopardized the defendant's own life, we have a legally sufficient justification under our law, no matter what the statute literally says. And why would cases of this kind be at the opposite pole? The answer is that, in such cases, the victim of the killing is [*1888] himself an egregious wrongdoer, one whose unlawful, life-threatening misconduct triggered the very killing in question. In such a case, application of the ban on willful killing would indeed seem absurd. It is hard to identify a sensible understanding of the criminal law that holds a defendant criminally liable in such circumstances. In fact, the law recognizes a legally sufficient justification in such circumstances, despite the literal language of the statute. If this case were akin to those at this pole, I have suggested that we would have an exceedingly hard question. But - and now I arrive at the crux of the matter - we have here a quite different situation. The victim was not a wrongdoer, and he did not threaten innocent persons in any way. 
His death was necessary only in the sense that it was necessary to kill an innocent person in order to permit others to live. The question is not whether we would agree, if we were legislators, to apply the statute in such situations; to overcome the ordinary meaning of the statutory terms, the question is whether it would be absurd or palpably unreasonable to do so. The clear answer is that it is not. It is hardly absurd to say that there is no legal justification or excuse for a willful killing in a situation like this one, even if more people on balance will live (or the killing is otherwise justified by some cost-benefit calculus). Many people who engage in killing can and do claim that particular excuse. To be sure, this case is different from the exemplary or prototypical ones in the sense that the killing was necessary to save lives. But there is nothing peculiar or absurd about applying the law in such circumstances. People with diverse views about the criminal law should be able to accept this claim. Those who believe in retribution and those who believe in deterrence should agree that the outcome, whether or not correct, is within the domain of the reasonable. Retributivists and Kantians are unwilling to condemn someone who has killed a life-threatening wrongdoer. But retributivists and Kantians could certainly condemn the defendants here, who, to save their own lives, took the life of a wholly innocent person, one who withheld his consent at the crucial moment. For the retributivist, those who have killed, in these circumstances, have plausibly committed a wrongful act, even if that act was necessary to save a number of lives. It is not unreasonable to say that the victim deserved to be treated as something other than a means to other people's ends. At the very least a conviction could not, for a retributivist, be deemed absurd. 
For their part, those who believe in deterrence should concede that a verdict of "innocent" could, in the circumstances of this case, confuse the signal of the criminal law and hence result in more killings. Many people who willfully kill believe that the outcome is justified on balance, and we should not encourage them to indulge that belief. A judgment that N. C. S. A. (n. s.) 12-A protects all blameless victims [*1889] creates a clear deterrent signal for those whose independent judgments may not be trustworthy. From the point of view of deterrence, applying the statute in this instance would, at the least, not be absurd, which is sufficient to justify my conclusion here. I would not entirely exclude the possibility that the defendants would have had a legally sufficient excuse if the unfortunate proceedings had been consensual at all times. It is conceivable that the absurdity exception would apply in that event as well. But this case is emphatically not that one, because the victim's consent was withdrawn before the dice were thrown. At that point, the victim expressly said that he did not wish to participate in this method of deciding who would live or who would die. Where, as here, there was no consent to participate in the process that led to an unconsented-to death, the answer is clear: Those who killed acted in violation of the statute. Thus, it should be possible for those with diverse views of the purpose of the criminal law to agree that there is nothing absurd about following the ordinary meaning of the statutory text here. Indeed, I do not understand any of those justices who disagree with my general conclusion to disagree with this particular point. Their disagreement stems not from a judgment of absurdity, but from a willingness to disregard the text and to proceed in common law fashion - a willingness that would, in my view, compromise rule-of-law values. 
For example, Justice West urges the need for an individualized hearing, not because she thinks the conviction absurd, but in order to ensure individualized justice. See infra, at 1899 (West, J.). Justice Easterbrook thinks this case is analogous to self-defense, see infra, at 1913 (Easterbrook, J.), but he seems to take our jurisprudence to mean that courts may make particularized inquiries into the circumstances of killings. He does not suggest that a conviction would be absurd. I do not understand Justice Stupidest Housemaid or Justice De Bunker to find absurdity here. And while I very much agree with Justice De Bunker's suggestion that criminal statutes should be narrowly construed, see infra, at 1902 (De Bunker, J.), I would apply that suggestion only in cases of genuine textual doubt. Some members of this Court plainly believe that the killing was morally excusable, because it was necessary in order to ensure that more people would live, and because the victim originally designed the plan that led to his death. See, e.g., infra, at 1916-17 (Easterbrook, J.). But that moral argument cannot be taken to override the natural meaning of the statutory terms, at least where the outcome is one that reasonable people could regard as justified. A serious underlying concern here is that to allow an exception on the claimed principle would be likely to undermine the statutory prohibition, either in principle or in practice. In principle, it is at least unclear that an exemption in this case could be distinguished from a claimed exemption in other cases in [*1890] which our moral judgments would argue otherwise. (Consider, for example, a case in which someone shot, in cold blood, a person whom the killer reasonably believed to be conspiring to kill others.) In practice, the deterrent value of the law might well be undermined by such an exemption, and it is at least possible that some people would kill in the belief or hope that they would be able to claim an exemption. 
Cost-benefit analysis has its place, but when a statute forbids "willful killing," we ought not to allow anything like a cost-benefit exception. A kind of "meta" cost-benefit analysis may well support this judgment. If courts engaged in individualized inquiries into the legitimacy of all takings of life, law would rapidly become very complicated, and the deterrent value of the statute might start to unravel - especially if prospective killers are at all attentive to the structure of our jurisprudence. I have considerable sympathy for Judge Easterbrook's approach to this case; in most ways his approach tracks my own, and I have been tempted to accept his conclusion as well. We part company, I think, only because I am more concerned about the increased uncertainty and muffled signals, for courts and prospective killers alike, that would come from finding an "exception" here. See id. at 1914-15. I fear the systemic effects of his (not unreasonable) view about this particular case. An implication of my general approach is that the interpretation of statutes, or rules, has an important analogical dimension. The difference between rule interpretation and analogical reasoning is far from crisp and clean. In the interpretation of rules, the ordinary meaning of the terms presumptively governs; but when the application at hand is entirely different from the exemplary or prototypical cases, the ordinary meaning may have to yield. In deciding whether the application is in fact different, we are thinking analogously. But because it is reasonable to think that this case is analogous to the exemplary ones - because it involved the taking of an innocent life - we do best to follow the statutory language. It is for this reason that I do not believe that we should at this time consider legal challenges to the death sentence, as opposed to the conviction, in this case. Justice West has eloquently argued that the death sentence is constitutionally illegitimate. 
See infra, at 1897-99 (West, J.). I am not sure that she is wrong; nor am I sure that she is right. Most of the time, the Constitution does not permit litigants to "open up" rule-bound law by arguing that it is unreasonable as applied and asking for an individualized hearing on its reasonableness as applied to them. A doctrine that would permit frequent constitutional attacks on rule-bound law would threaten the rule of law itself - increasing unpredictability, uncertainty, and (because judges are merely human) threatening to increase error and injustice as well. There can be no assurance that judges will reach the right outcome once all the facts emerge for individualized decision. But the death penalty is a distinctive [*1891] punishment (to say the least), and the facts of this case are not likely to be repeated. Perhaps a degree of individualized judgment is constitutionally required before anyone may be sentenced to death. I would be willing to think long and hard about a separate challenge to the death sentence as applied; but I would not decide that issue where, as here, the defendants' challenge is to the conviction rather than the sentence.

IV

It is my hope that a decision of the case along the lines I am suggesting would impose some pressure on other institutions to design a statute that makes reasonable distinctions to which this provision, standing on its own, appears oblivious. This is in fact a virtue of the species of textualism that I have endorsed here: the creation of incentives for lawmakers, rather than courts, to make appropriate judgments about the numerous cases that fall within law's domain.

West, J. *

Trapped in a cave, on the verge of starvation, with no credible hope of timely rescue, five speluncean explorers resolve that their only hope of survival is to eat one of their own. They determine to do so and to throw dice to identify who will be the sacrificial lamb. One member then denounces the plan and withdraws his participation. 
The group proceeds over his objection, with his dice being thrown for him by another. The dissenting member, by bad luck, loses the throw, is killed, and is eaten by his comrades. The group is soon rescued and hospitalized, but only after the accidental deaths of eight of the rescuers seeking to secure their release. The survivors are now charged with murder or, as defined by the relevant statute, with "willfully taking the life," N. C. S. A. (n. s.) 12-A, of another human being, punishable in all cases by death. Under our procedural rules, and acting within its discretion, the jury convened for this case requested that it be relegated only to the role of fact-finder, leaving this Court to determine the legal conclusions. The jury found the facts as briefly recounted above, and it is now our obligation to determine whether the defendants' conduct constitutes murder. If we decide that it does, then the mandatory punishment under the statute is death, unless commuted to a lesser penalty by the governor of the state. [*1892]

I

The defendants present two novel arguments that require a response. First, they argue that they were operating beyond the jurisdiction of this or any other legal system, not for the usual territorial reason, but rather, for a jurisprudential one: they claim that their very survival in this peculiar situation demanded a course of action, the morality or legality of which is beyond the legitimate power of law to judge. The purpose of law, they urge, is to facilitate cooperative social living and to maximize the fruits of that cooperation. Law, then, is predicated upon the possibility that cooperation will not only increase chances of mutual survival, but will also yield additional benefits to all. Here, cooperation among all would only guarantee their mutual demise; thus, the logical predicate for law was absent. The purpose of law could not be to condemn these actions. 
Rather, it was both legal and moral for these trapped men to establish their own council and take whatever actions were necessary to assure the survival of the greatest number possible. This they did by agreeing to the procedures of the lottery. Second, the defendants argue that even if our law applies, they are not guilty of the crime of murder because they acted in their own self-defense. A killing is in self-defense, the defendants argue, whenever the situation is such that one life must be taken in order to save one's own. Such killings are basically non-deterrable, the defendants explain: there is no threat of punishment that could change the rational decision to kill. The purpose of our criminal laws against homicide is deterrence, but these acts were non-deterrable; hence, they were not crimes. There is no point in applying the criminal sanction of the law, and therefore, the law does not apply. Are these arguments meritorious? Of course, there is no authority for the proposition that the "purpose" of either the rule of law or the laws forbidding murder, whatever those purposes may be, should determine the limits of the law's reach. Nor is there authority for the narrow proposition that the self-defense justification should extend as far as the defendants contend. But that lack of authority is not fatal to the defendants' argument - at most, it implies that we are not compelled by precedent to follow the course the defendants urge. We still need to decide, as a matter of first impression, whether the arguments they have presented have merit. In my view they do not. The defendants' first claim is powerful, well reasoned, and rests on seemingly incontrovertible premises. 
Much of our law - particularly contract and property law - is indeed based on the assumption that cooperation through legal mechanisms can increase the benefits of cooperative social living, and hence on the further, typically unstated, assumption that cooperative social living increases rather than decreases the chances of mutual survival. It is also true that a blanket acceptance [*1893] of our laws by the defendants in their natural cage would have done nothing to increase their chances of survival or the benefits of cooperation. We might agree that in order to insure the survival of the majority of them, or for that matter even one of them, they would have had to break one or more of this jurisdiction's laws. If the purpose of law is to secure the gains of cooperation, with the most significant gain being mutual survival, and if law should not extend beyond the limits of its defining purpose, then it does seem to follow that the defendants were beyond the law's reach. There is also a good bit of sense in the defendants' claim that law, or a law, should not extend beyond the limits of its defining purpose. To do otherwise is capricious and irrational, rather than lawful, and, in the case of capital crimes, forces the state to engage in acts that are themselves unjustified killings. That degree of hypocrisy is intolerable. The problem with the defendants' argument is not the lack of authority for their bold assertion that the law should not be pressed beyond its purpose, or with the logic of that assertion itself. The problem is that they have misidentified the law's driving purpose. The core purpose of law, or of the rule of law, is not contract, but rather, the protection of rights, the most important of which is the individual's right to equal respect, and accordingly, equal protection under the law. The point of law is to protect all, equally, against the wrongful private aggression of others. 
Indeed, it is only within the umbrella of such equal protection and the individual rights that guarantee it that contractual freedom and contract law yield any benefits at all. The insistence on the right of each individual to the enjoyment of equal protection by the state from the private aggression of others - particularly homicidal aggression - is the essence of what distinguishes a society living under the rule of law from a society living under the whimsical dictates of a state of nature. In the state of nature, an individual or group may, for any number of reasons, decide that its own chances of survival would be well served by killing, enslaving, or oppressing another person or group, and such a decision would quickly become a political reality. The point of the rule of law is essentially to create and then protect the individual's right not to be so treated and to sanction the conduct of the group or individual who attempts to do so. The defendants are surely right that contract, and the protection of social gain that it facilitates, is at the center of a great deal of our law. But that body of law is only intelligible once the more fundamental right of equal protection against private assault is secured. An individual may exploit his natural talents and strengths in whatever way imaginable in securing gains through contract. What he may not do is exploit his strength - whether the source of that strength be a natural inheritance, a cultivated talent, or the strength of numbers - in such a [*1894] way as to violate the rights of others. The most central of those rights is, unquestionably, the right not to be killed, consumed, enslaved, or violently attacked for the benefit of his brothers. The individual has the right to expect the state to protect him against exactly this form of exploitation. Much follows from this core purpose regarding the content of our law. 
For example, the common proscriptions against contracting oneself into slavery, contracting for the sale of a body part, or contracting for one's own death can be understood as stemming from our conviction that these rights to state protection against private aggression are so fundamental that they cannot even be voluntarily forsworn. Contract is predicated upon the provision of these core protections against private violence, and thus, these protections, in turn, define the limits of contractual freedom. More important, if less obvious, than the limits on contract that are implied by the priority of the individual's right to protection against violence, are the limits this principle places on actions or inactions of the state. A state may not decide, for good reasons, bad reasons, or no reason, simply to withdraw its protective shield from the vulnerable lives of some individual or group, leaving that individual or group to the mercies of his or her stronger co-citizens. Nor may a state decide not to extend its protection. A state may not decide, for example, to proceed with the execution of a wrongly accused criminal defendant out of the belief that such an execution might prompt a serial killer to stop killing children. Even if such a belief is fully justified - even if the state knew that the true killer would in fact stop killing after the execution in order to reinforce the false societal belief that the correct killer had been identified - such an execution of an innocent person would nevertheless be an intolerable violation of the accused's right to equal protection of the law. Nor may a state decide not to protect a particular group - for example, poor people who live in dangerous neighborhoods - against private violence and aggression, even for the reason that to provide such protection threatens an exceedingly high number of policemen's lives. 
Nor, of course, may a state decide not to protect a subgroup - a racial or sexual minority, for example - against violence out of a habitual, unconscious, or calculated attempt to enable a favored group to secure the exploitative gains or benefits that might follow from a withdrawal of such protection. Such scapegoating is inimical to the system of rights that is at the heart of our rule of law. Indeed, it is no exaggeration to say that the core meaning of the rule of law is precisely that scapegoating - whether for noble or ignoble reasons and whether prompted by state or private calculations of benefit and loss - is paradigmatically illegal. As citizens of a society governed by the rule of law, we should not deny to any individual or group of individuals the state's protection against private violence in situations in which that violence is intended to secure benefits to - [*1895] or even the survival of - the favored. All individuals have the right to be protected against violence, including violence that is premised upon the moral calculation that the sacrifice will save more lives than it will take. It is thus apparent that the defendants' actions in this case are not merely within the scope of the rule of law, as defined by its purpose, but rather, are at the very heart of it. There are indeed different degrees of moral culpability in the motives that prompt different murders. Some such motives are more or less reprehensible than others. But from the perspective of the virtues and values central to the ideal of the rule of law, the defendants' jurisprudential and jurisdictional challenge only raises differences in degrees of moral culpability that are ultimately inconsequential: the violation of the individual's right to equal respect and regard, and accordingly his right to equal protection of the law, is not lessened by the strength of the justification for the killing. 
That he cannot be so sacrificed is precisely what it means to have a right: a right, virtually by definition, cannot be outweighed by individual or group-based calculations of moral or economic gain, even when the gain is measurable in lives saved. The right to equal protection of the law against private violence is violated when the state allows, promotes, or acquiesces in such calculations, and does nothing to prevent or deter the violence to which they lead.

This conclusion, it may be necessary to add, is not undercut by the victim's ambivalence regarding his own participation in the scheme that eventually took his life. Even had the victim's participation been consistently voluntary and enthusiastic, the killing would nevertheless have been a murder for the reasons given above. Our well-established prescriptions against assisted suicide, suicide pacts, and mutual contracts of self-destruction make clear that our fundamental right to the state's protection against the assaultive conduct of others takes priority over schemes that waive that protection, even with our full consent.

The facts here, however, do not even present us with the admittedly more difficult question of whether the ban on assisted suicide can be reconciled with our strong traditions of individual autonomy. The victim in this case initially was supportive of the plan and did concede the fairness of the procedures governing the throwing of the dice. Nevertheless, the victim clearly withdrew his support from the overall plan. This is not, then, a question of assisted suicide. There was no suicide. This victim was killed against his will and without his consent.

The defendants' second argument is more modest, but if accepted, would also challenge some of our most defining legal ideals.
The defendants argue that the recognized excuse of self-defense should be extended to include all killings in which the victim, if dead, could supply biological matter that could potentially save the defendant's life - rather than confine the defense, as we presently do, to those killings in which the victim himself aggresses against the perpetrator. But this [*1896] we cannot do without inviting a lethal social chaos. Private violence, or even private ordering, cannot be given full sway whenever there are conditions of relative scarcity, rather than the conditions of abundance we have become accustomed to enjoying. To do so invites a slide to state-of-nature conditions, precisely when the need for law is greatest.

Contrary to the defendants' representations, we do not already accept such a limit on the criminal sanction, nor are the conditions or circumstances that might give rise to such a claim quite so rare or infrequent as the defendants suggest. For example, there are currently a sizable number of citizens in this country awaiting organ donations, bone marrow replacements, and blood transfusions. The profound scarcity of such organs, bone marrow, and non-contaminated rare blood types is the sad reality that all such patients (as well as those of us who may at any point become such a patient) are forced to endure. That scarcity prompts incomparable anguish among the needy donees, and tortured decisions by medical personnel.

Clearly, some percentage of the total number of hopeful donees could conceivably identify potential donors whose organs, marrow, or blood might save their lives. If three, four, or five of those individuals could, in turn, identify the same potential donor - someone with the healthy liver, the matching bone marrow, or the requisite rare blood type - what is to prevent them, under the principle urged by the defendants here, from taking those organs by force, even at the cost of the donor's life?
If we do not allow and should not allow such pillaging of another's organs in this not so fanciful scenario, why should we allow it here? The objective need for some body part is the same, whether the need is for the marrow within the bone or the flesh on the outside of it. The moral calculation is the same and comparably motivated: if one life is sacrificed, then a greater number will be saved. One could even imagine the killing in the medical transplant case being preceded by agreement, which was later withdrawn by the victim-donee, as was the case here. In both cases, nothing can excuse the subsequent murder.

The broader principle, governing both the speluncean and the organ transfer cases, is simply this: that perpetrators require a part of a victim's body for the perpetrators' own survival does not make the killing that is so motivated one of "self-defense." No act of aggression is being defended. Rather, there is only a tragic dilemma of incompatible needs and scarce resources. Nor is this action justified by the related doctrine of "moral necessity." The invasive, assaultive taking of the life or body parts of one individual is never "morally necessary," even if such body parts may be necessary to secure a greater number of lives of those in need.

Even such an innocent creature as a full-term fetus, or, as some believe, an unborn child, is not permitted to pillage the bodily fluids and organs of the mother when the fetus's actions, although utterly involuntary, threaten the mother's life. The pregnant woman is not expected to [*1897] sacrifice her life to promote the well-being of the fetus inside her who needs her body when that need is at the expense of her own life and the sacrifice is against her desires.
Rather, it is in precisely these circumstances (and perhaps only in these circumstances) in the contested and difficult area of reproduction law that we have achieved a sort of societal consensus that the mother (not the fetus) has the right to defend herself against the needy and life-threatening fetus within her by expelling the fetus, even at the cost of the fetus's life. This consensus is not surprising: surely if a born child - for example, an adult - who needed a parent's bone marrow attempted to secure it from a non-consenting parent, the state would presumably help protect that parent against the child's aggression; the state would not grant the "moral necessity" of the child's action.

Nothing here distinguishes the sacrificed speluncean from the pregnant woman whose life is threatened by the needs of the invasive fetus, or from the parent whose life is threatened by the child; indeed, the lack of a parent-child or mother-fetal relationship from which one might arguably infer a duty on the part of the parent or pregnant woman makes the spelunceans' predicament a much weaker case. In all three cases, the sacrificial life is biologically necessary for the aggressor's survival, but in none of them does that fact make the killing (or the letting die, in the case of the pregnant woman) morally necessary. The defendants' actions in the cave, in short, were neither taken in self-defense against unwarranted aggression, nor were they morally necessary. The killing was not justified.

II

Having rejected the defendants' contentions, it is nevertheless clear to me that these men should not be executed and that to carry out the executions would constitute an injustice - indeed, a killing perhaps as unjustified, ultimately, as the one they committed. The action they took was criminal, and the crime was murder. But does it follow that the punishment must be death by hanging?
These defendants have not been given a chance to show this Court - either the jury or the justices - that their actions, although not justified, might be partially or totally excused by the harshness of their circumstances, or alternatively, that the harshness of the penalty applied should be mitigated by a judicial recognition of the extraordinary conditions of hardship under which they struggled. Nor has this Court - again either jury or judge - been permitted to make such a determination.

We have not heard the mitigating evidence - whether about the men themselves, their character, the conditions in the cave, the altered states of consciousness those conditions might have brought on, or the feel or the force of the natural imperative of survival to which they eventually acquiesced. This evidence might prompt the Court to recognize the [*1898] unique horror that gripped these defendants and consequently impose a penalty that might be less severe than death for the all-too-human actions they took in response to that horror. But such an exploration - and possibly a recognition - seems to be precisely what this case requires for its just resolution.

These men were in desperate circumstances and took desperate measures to survive. It is not obvious that any of us would have responded differently. Even though their action was a criminal homicide, it does not follow that the punishment of death is warranted. These defendants are no threat to the survival of the state or the safety of the community. They have already suffered tremendously. Although not so unique as to remove them from the jurisdiction of our courts, their situation was surely peculiar - so much so that their execution would provide little in the way of general deterrence. Why kill them? Can it really be true that justice requires such a harsh conclusion, without even a hearing of facts or argument that might mitigate it? Our criminal law, as presently constituted in 4300 A.D., seems to require as much.
The judge and jury, according to theory, apply the law to the facts toward the end of justice; the Chief Executive, pursuing radically inapposite principles of mercy, can mitigate the punishment by reference to all that the Court, in its pursuit of justice, cannot hear: the stories of these defendants' lives, of their travails, of the pressures upon them, of their remorse, and of their fears and hopes for their future. But this bifurcation of justice and mercy, of "law" and mitigation, of the Court's province and the Chief Executive's office - so reminiscent of the antiquated split between law and equity, long ago abandoned in our civil jurisprudence - serves no one well. It forces defendants to make specious arguments. It forces the Court to make formalistic conclusions, and it tempts judges to make decisions for unstated reasons - an unstated hope, prayer, or expectation that the Chief Executive will or will not act in a certain way; an unsound argument accepted in defense of an action, when it is, in fact, a judge's imagined full accounting of the events in question that constitutes the real grounds of decision.

The statute that seemingly requires this woodenness is classically and flagrantly overinclusive: it includes within its sweep acts and defendants whose differentiating circumstances are such that they ought to be treated differently. It also forces the ultimate decision of life or death upon an elected official who may or may not have the requisite popular support, and thereby the political power, to forego executions, even should he think it the morally right course of action. The statute puts the lives of these defendants at the dubious "mercy" of an elected official whose own political survival is beholden to the whims of majoritarian politics. In short, it makes our law unmerciful and the Executive's mercy lawless.
The quality of [*1899] our law and the quality of the Executive's mercy both suffer when we pretend that justice and mercy can be severed. For these reasons, I hold that the provision of the murder statute that requires death by hanging as the punishment for the intentional taking of another human life, without any possibility for the judicial mitigation of the punishment, is an unconstitutional deprivation of the defendants' right not to have their lives taken from them without due process of law, and a deprivation of their right to a rational application of law.

Just as the victim of criminal violence has a fundamental right to the protection of law, the charged defendant in a criminal case has a right to an individualized determination of an appropriate punishment that reflects the degree of his culpability. In a rights-based system of law such as ours, we can no more neglect a defendant's right to be individually judged than a victim's right to be included in the community and under the law's protection. The choices that the unconstitutional provision now presents us - a judicial finding of guilt, followed by execution; a judicial finding of guilt followed by an Executive's decision to decrease the punishment to six months; or an acquittal on dubious grounds - are too stark. The statute prevents the Court from pursuing merciful justice, and it deprives the defendants of precious constitutional rights.

These defendants should be given the opportunity to present their own story in their own defense and in mitigation of the punishment for their criminal action, and this Court should be given the opportunity to so decide. We need to hold a hearing to determine the appropriate sentence. Accordingly, the provision of the statute that denies such an opportunity should be struck, to allow this case to proceed to a fully merciful - and hence more just - resolution.

DE BUNKER, J. *

I.
Overview

This case raises disturbing questions about the continuing influence of such anachronistic concepts as "natural law," "inalienable rights," and other legal fictions of ages past. We have yet to reject these irrational residues of the past even in the present fifth millennium (a system of dating which itself is based on what we now recognize to be a religious myth).

As is well known from the history disks, shortly after the beginning of the third millennium, the world became engulfed in religious warfare among fundamentalist Christians, Muslims, Jews, and others. Apocalyptic religious extremists obtained access to weapons of mass [*1900] destruction. The result was the cataclysmic decimation of human life in the name of the various "gods" under whose symbols - crosses, crescents, and stars - the slaughters were implemented. The survivors of this apocalypse began to realize that the religious myths surrounding such deities as "the Holy Spirit," "Allah," and "Jehovah" were indistinguishable from those that had surrounded the gods of ancient Egypt, Greece, and Rome. Gradually, a new consensus emerged, at first questioning the existence of any supernatural god (the Agnostic Epoch or AGEP), and then, in the current age, disclaiming any such belief in deities (the Atheistic Epoch or ATEP). n1

Just as the Christian, Muslim, and Jewish primitives of the first and second millennia regarded the Greek and Roman myths of divinity, so too, our enlightened age regards the myths of the so-called monotheistic religions - myths such as the divine origin of the Bible, the divine paternity of Jesus, and the claim that Mohammed was a messenger of God.
n2 We appreciate the poetry and occasional insights of the Bible and the often wonderful teachings of the so-called Hebrew prophets, Jesus, and Mohammed - much as the monotheists of the first and second millennia appreciated the religious art and literature of their polytheistic forebears - but we now know for certain that they are entirely of human origin. We know, too, that the world has no "purpose," at least as imposed by some external superior force. Human beings are the product of [*1901] essentially random processes, such as evolution, genetic mutations, or other largely non-purposive factors.

We have long understood these self-evident truths, and we apply them to most areas of our lives, such as science, education, and literature. But when it comes to law, we have stubbornly resisted the necessary process of rooting out of our current legal system the anachronistic remnants of the divine mythologies of our past. We persist in speaking about "natural law," as if the physical "laws" of nature carried with them any normative corollaries. We continue to invoke "inalienable rights," as if we believed that they derived from some preexisting, supernatural, non-human source.

Because this case raises questions that challenge the very basis of our laws, I see it as an appropriate vehicle for considering the meaning of such concepts as "natural law" and "inalienable rights" in a world free of superstitions about divine beings, supernatural forces, and purposive creation. I am convinced that in such a world - in our world - there can be no such meaningful concepts as natural law or inalienable rights. Natural law presupposes a view of nature - of the nature of human beings and of the world - that is demonstrably false. The nature of human beings is so diverse - ranging from the most amoral and predatory to the most moral and self-sacrificing - that all or no normative conclusions can be drawn from its descriptive diversity.
n3 Inalienable rights presuppose an externally imposed hierarchy that makes no sense in the absence of an external law-giver. We must now [*1902] acknowledge that all law must be positive law and all rights must merely be strongly held preferences that we or our predecessors have agreed to elevate over other positive law. This elevated status of particular laws - such as the guarantee of free speech - can be the result of a constitution (written or oral), an entrenched tradition, or another form of super-positive law. It cannot come from any claim of supernatural or natural forces external to the human processes of lawmaking.

Thus, the only basis for preferring one set of laws or rights over another is human persuasion and advocacy. In this opinion, I will try to persuade others to accept my approach, not by reference to some natural or supernatural authority, but rather exclusively by reference to human reason and agreed-upon principles. These principles may take the form of preferred imperatives, such as those proposed by ancient philosophers including Immanuel Kant, or they may take the form of preferred situational rules, such as those proposed by Jeremy Bentham and others. But they are all merely human preferences, even if often articulated in the language of natural law and inalienable rights. n4

II. Discussion

How then should a supreme court, unencumbered by concepts of natural law or inalienable rights, evaluate the actions that form the basis of this case? First, some preliminary observations are necessary: a civilized society could reasonably legislate either result advocated by my judicial colleagues. The legislature could have, if it had anticipated the current problem, written a clear, positive law explicitly prohibiting starving people from killing one of their number in order to save the rest. The arguments in favor of and in opposition to such a rule are fairly obvious and have been made over the ages.
n5 Yet our legislature has never explicitly resolved this millennia-old debate by enacting legislation either prohibiting or permitting such life-saving killings. My preference in this situation is for the following rule of law: when a tragic choice is sufficiently recurring so that it can be anticipated, and when reasonable people over time have disagreed over whether a given choice should be permissible, the onus must be on the legislature to prohibit that choice by the enactment of positive law if it wishes to do so.

For those who argue that such a positive law would be ineffective because it is against the self-preservatory nature of human beings, there is a simple answer: legislate creative punishments that will be [*1903] effective. Such punishments might include posthumous shame, n6 deprivation of inheritance rights for offspring, or enhanced painful punishments for survivors. The point is that this is largely an empirical, rather than a moral, objection to prohibiting the eating of one starving human to save others. n7

A civilized society could also legislate a positive law permitting (even requiring) the sacrifice of one starving innocent person to save several others. The arguments in support of such a law are also obvious and long-standing. As Oliver Wendell Holmes reportedly wrote, "All society has rested on the death of men and must rest on that or on the prevention of the lives of a good many." Objections, such as the slippery slope, are also commonplace.

The point is that neither approach is more "natural" than the other. Nor can the case be resolved by reference to any inalienable right, such as the "right to life." Both approaches claim to be natural and to further the right to life. Both also have considerable moral and empirical advantages and disadvantages, and no one in our society is inherently better suited to choose one over the other than anyone else. n8 Yet a choice must be made.
Accordingly, we move the argument from the level of substance to the level of process: who shall be authorized to make such decisions, on what bases shall they be made, and if there are gaps in the primary decisionmaking, who shall be authorized to fill the gaps in particular cases? These issues must also be matters of preference and persuasion.

The problem presented by this case has existed since the beginning of recorded history. There are examples - at differing levels of abstraction - in numerous works of history, religion, and literature. Why then did the representative body that was authorized to enact general laws not specifically address this recurring issue? To be sure, the issue does not occur with the frequency of self-defense, but it is widely enough known to be capable of specific inclusion in any modern code governing homicide. Indeed, one of the most ancient of legal [*1904] codes - the Talmud - did include specific discussions of this and related questions. n9 Philosophers and legal scholars have also considered these issues over the years. Yet few, if any, criminal codes explicitly tell starving cave explorers, sailors, or space travelers what they may, should, or must do if they find themselves in the unenviable position in which these defendants found themselves.

It is to be noted that this case is not unlike one that occurred in the ninth century of the second millennium in a nation then known as Great Britain. See Regina v. Dudley and Stephens, 14 Q.B.D. 273 (1884). Yet even after the divided court in that case expressed considerable difficulty in arriving at a principled decision based upon those facts, the legislature did not enact a positive law to resolve the issue definitively. Nor can the legislature's silence in the face of the nominal affirmance of that conviction be deemed evidence of its intent to demand conviction in this case.
The vast majority of comparable cases - both before and after that decision - resulted in acquittal or decisions not to prosecute, and the English case produced a pardon. The law is more than the isolated decisions of a small number of appellate courts.

What does this long history of legislative abdication of responsibility tell us about how we, a court, should resolve this case? It tells us that the people do not seem to want this issue resolved in the abstract by legislation. Our elected representatives apparently prefer not to legislate general approval or disapproval of the course of action undertaken by the defendants here. Our citizens cannot bring themselves to say that eating one's neighbor in the tragic situation presented here is morally just. Nor can they bring themselves to say it is unjust. They would prefer to leave the decision, as an initial matter, to the people in the cave (at least as long as they make it on some rational and fair basis). Then they would have a prosecutor decide whether to prosecute, a jury whether to convict, a court whether to affirm, and an executive whether to pardon or commute. That is the unwieldy process, composed of layers of decisionmakers, they seem to have chosen.

The question still remains: by what criteria should we, the Supreme Court, decide whether to affirm the jury's conviction (and recommendation for clemency)? The answer seems relatively obvious to me, and [*1905] I will try to persuade others to agree with the preferences on which it is based. I begin with my strong preference - a preference which I believe and hope is now widely shared - for a society in which any act that is not specifically prohibited is implicitly permitted, rather than for a society in which any act that is not specifically permitted is implicitly prohibited. As Johann Christoph Friedrich von Schiller similarly expressed, "Whatever is not forbidden is permitted."
n10 The lessons of history have demonstrated why the former is to be preferred over the latter. A general preference for freedom of action in the absence of specific prohibition does, however, raise some troubling problems. Innovative harm-doers often find ways to do mischief between the interstices of positive law, and old laws have difficulty keeping up with new technologies. Accordingly, this preference occasionally results in the failure to punish the initial group of creative criminals in any particular genre. Still, I would argue for a strong presumption in favor of freedom in the absence of a specific prohibition - even at the cost of letting some guilty go free.

In any event, the problem outlined above does not describe the situation we face. The actions committed by these defendants were not part of some technological innovation unknowable to the drafters of our positive law. Our drafters could easily have legislated against what the defendants did here. They did not. Why they did not - laziness, thoughtlessness, cowardice, superstition, or an unwillingness to resolve an intractable moral dilemma - is in the realm of speculation. That they did not is not fairly open to doubt.

Some may argue, of course, that the general prohibition against willful killing is enough to cover the conduct at issue here because this killing was willful. n11 But I do not believe that it can be reasonably maintained that the absence of an explicit exception to the broad prohibition against killing contained in the positive law must be interpreted as an implicit prohibition against the kind of killing done here. That mode of reasoning would substantially compromise the principle that what is not specifically prohibited is implicitly permitted, especially in the context of a widely reported and debated historical genre of alleged crime such as the killing under consideration here.
Moreover, the law has long recognized justifications for taking actions expressly prohibited by the letter of the law when such actions are "necessary" to prevent a "greater harm." This principle has been [*1906] summarized by the quip, "Necessity knows no law." n12 It is a mischaracterization, however, because there is a well-developed, if imprecise, law of necessity that permits the choice of a lesser harm to prevent a greater harm. n13

Throughout history, philosophers and jurists have debated cases - both hypothetical and real - that tested this difficult principle. During the Nazi holocaust of the second millennium, a group of Jews who were hiding from Nazi killers smothered a crying baby in order to prevent the Nazis from discovering their hiding places and killing them all. When that terrible dilemma - which occurred in slightly differing contexts throughout the holocaust - was presented to distinguished religious leaders, the consensus was that the conduct could not be condemned. See Marilyn Finkelman, Self-defense and Defense of Others in Jewish Law: The Rodef Defense, 33 Wayne L. Rev. 1257, 1278-80 (1987). Nor do I believe that a secular court would have found these desperate people guilty of murder even if they willfully, deliberately, and with premeditation killed the innocent baby. n14

Necessity as a general defense to crime "seems clearly to have standing as a common law defense." n15 Model Penal Code § 3.02 commentary at 10 (1962). Nearly all jurisdictions recognize the necessity [*1907] defense for crimes that are short of killing. n16 Thus, if our defendants had found a locked food-storage box in the cave with a sign saying "private, personal property, do not open under any circumstances," and they had broken open the lock and eaten the food, no one would deny they were acting lawfully. I doubt that any of my colleagues would convict such defendants of theft even if the words of the theft statute provided for no exception.
The general law of necessity provides the requisite exception in cases in which theft is a lesser evil than multiple deaths. However, some jurisdictions have explicitly refused to extend the necessity defense to the killing of an innocent person that is necessary to prevent the deaths of several innocent people. n17 Other jurisdictions have not limited the necessity defense to non-killings. n18 Academic opinion is divided, and the weight of the American Law Institute is on the side of not limiting the defense as long as the killing is necessary and results in the saving of more innocent lives than are taken. "The principle of necessity is one of general validity ... It would be particularly unfortunate to exclude homicidal conduct from the scope of the defense." n19 Model Penal Code § 3.02 commentary at [*1908] 14. The reason that judicial decisions about this issue are "rare," see id. at 10, is that prosecutors almost never bring charges against people who have chosen the lesser evil of taking one life to save many others.

Our jurisdiction has not resolved this debate or even confronted this issue. Our own common law of necessity is thus written in terms as general as our murder statute: "Anyone who commits an act that would otherwise be a crime under circumstances in which it is necessary to prevent a greater evil shall not be guilty." The issue before us, therefore, is whether the legislative silence should be interpreted as acceptance or rejection of the limitation adopted by some jurisdictions and rejected by others.

Compounding the complexity of the problem is the fact that in the absence of legislative resolution, these defendants sought authoritative guidance from various sources before deciding what to do - the best they could do under the circumstances. They were denied any such guidance. To hold them criminally liable is to convict them of guessing wrongly regarding what the unpredictable vote of this Court would be.
Moreover, to convict them under these circumstances - especially in the face of our legislature's refusal to resolve the debate over the limits of the necessity defense - would be to prefer a rule of judicial interpretation that resolves doubts in favor of expanding the criminal law rather than of resolving "ambiguity concerning the ambit of criminal statutes ... in favor of lenity." United States v. Bass, 404 U.S. 336, 347 (1971) (quoting Rewis v. United States, 401 U.S. 808, 812 (1971)) (internal quotation marks omitted). n20 The same rule of lenity must apply, as well, in construing the common law of crime. See Bouie v. City of Columbia, 378 U.S. 347, 352-54 (1964).

Where does our Supreme Court get the authority to narrow the law of necessity and thereby to make criminal what the legislature has declined explicitly to proscribe? My brothers and sisters do not answer this question. [*1909] Of course, if the legislature had explicitly considered the "choice of evils" presented by the case and expressly foreclosed the action taken, the necessity defense would not be available. But as I have shown, our legislature has not explicitly spoken to this specific problem, despite its prominent place in legal and philosophical discourse. n21 Accordingly, applying the salutary rule placing the onus on the legislature to prohibit questionable conduct by specific, targeted language, it follows that these defendants may not lawfully be punished.

III. The Views of My Colleagues

Several of my colleagues point to the plain language of the statute, while acknowledging that there must be exceptions, such as self-defense and executions, that are recognized from time to time at common law. But necessity also has been recognized from time to time, and there has been a great debate over the millennia regarding whether necessity can excuse a killing done to prevent greater harm, such as multiple deaths.
Renowned authorities have come down on different sides of this debate, and our legislature has refused to resolve it explicitly. It is in this context that the words included in, and omitted from, the statute must be interpreted. That process can be undertaken in different ways. [*1910] One of my colleagues, Justice Kozinski, proposes an absolute rule of inclusion: unless there is an express exception, the literal words of the statute must apply, regardless of how absurd the result may appear to us. See supra, at 1876 (Kozinski, J.). Taken to its logical conclusion, this rule would punish the proper use of deadly force by policemen because the statute does not explicitly exclude such killings. It is important to recognize that the legislation at issue here is an example of a "common law statute," prohibiting a general category of conduct - in this instance, willful killing - in the broadest of terms, while anticipating judicial narrowing. It cannot rationally be argued that the legislature intended the judiciary to recognize certain exceptions, such as self-defense, while precluding it from recognizing other defenses, such as necessity, that are accepted by numerous jurisdictions. Once it is agreed that this Court has the power to decide whether the defense of necessity is part of our law, it surely must follow that it has the power to define its parameters. It is plainly preferable to leave such decisions to the reasoned judgment of disinterested courts than to the unarticulated discretion of adversarial prosecutors. n22 I am not suggesting that every possible category of crime be specifically mentioned in the statute, but rather that widely recognized defenses, such as necessity, cannot be deemed to have been abrogated by legislative silence, especially when the statute seems to invite inclusion of some recognized defenses that are not explicitly mentioned. 
Another of my colleagues, Justice Sunstein, proposes an "absurdity exception" to the otherwise absolute rule of plain meaning. See supra, at 1883-84 (Sunstein, J.). This would permit prosecution in the following case: A train loses its brakes and heads in the direction of a fork. If the conductor does nothing, the train will hit a school bus full of children. If he takes the fork, it will hit a drunk sleeping on the track. There is no third alternative. He takes the fork, thus killing the drunk. Convicting him would be wrong because his beneficent purpose was to save lives, but it would not be "absurd" because he intended to kill the drunk. n23 Yet another of my colleagues tells us that all statutes must be interpreted by reference to a "right" whose source is nowhere identified, namely that "all individuals have the right to be protected against violence, including violence that is premised upon the moral calculation that the sacrifice will save more lives than it will take." Supra, at 1895 (West, J.). This rule would permit prosecution not only of the train conductor, but also of the hiding Jews who killed [*1911] the baby in order to prevent their apprehension and murder by the Nazis. Would my colleagues really support their preferred rules in the face of these testing cases? Justice West also poses a provocative hypothetical case, which should be troubling to any thoughtful judge or legislator. She asks whether a reversal of this conviction would require the conclusion that a group of people in need of organs to live may properly kill one person in order to harvest his organs so that all in the group might live. See id. at 1896. It is a good question. One must begin with the conclusion that any general rule of law that would routinely permit the killing of a human being for his organs is a rule of law that should not be accepted by a civilized society. That certainly would be my strong preference. Our case can be distinguished from this one on several grounds. 
First, there is a universal consensus that killing for organs should be deemed unacceptable. I am aware of no dissent to this proposition in all of jurisprudence, philosophy, or even ancient religion. n24 There is considerable disagreement, however, concerning the speluncean case and its sister case involving the crying baby during the Nazi Holocaust. This difference in the level of agreement alone may distinguish the speluncean case from the organ case, though the reasons underlying it may bolster the difference in outcome. A second distinction between the organ donor case and this case is that in this case the victim would have died within days even if the defendants had not killed him. In the organ donor case, the murdered organ donor could have lived out his life. Thus, the issue in the instant case is not whether the victim would have died, but only whether he was to die at the time he was killed so that others could live, or whether he would die a few days later, in which case no one would have lived. Quite a difference! Third, among the most powerful reasons why we universally reject killing to harvest organs is that organ shortages are a widespread and continuing problem, as Justice West acknowledges. n25 See id. Were we to approve the killing of a potential organ donor, no one would be safe. Everyone with a healthy life-saving organ would be placed at risk by such a rule. The situation is quite different with our explorers or the crying baby. Although these rare situations recur throughout history, they are unlikely to be experienced more than once [*1912] in a long period of time. Whatever we decide in these unusual cases will have little or no impact on the future actions of the infinitesimally tiny number of people who may find themselves in the unexpected situation faced by our explorers or the Jews hiding from the Nazis. 
These are sui generis cases, about which, in the absence of explicit legislative resolution, we can afford to provide pure retrospective justice, without fear of establishing a dangerous precedent. To be sure, every case contributes to the corpus of precedents, and if the legislature disapproves of our decision, it may announce a rule of law that forbids killing in these situations. The reason the legislature has not explicitly done so for organ-donor killing is that no one has ever tried - or, likely, would ever try - to raise a defense of necessity in such circumstances. Such a result would be "absurd," to paraphrase another of my colleagues, and legislators need not explicitly reject every "absurd" defense, especially when no one has ever tried to use it. The defense raised in our case, however, is not absurd, and it has been raised and even accepted. See Kadish & Schulhofer, supra, at 877-78. These are the differences. Does Justice West believe that smothering the crying baby and killing the person for his organs are really the same case? If not, is not the instant case closer to the former than to the latter? IV. Conclusion I believe that those who would punish the conduct at issue here have the burden of acting to prohibit it explicitly and provide for the appropriate punishment. n26 That burden has not been satisfied by the inaction here. Accordingly, I conclude that the principles expressed above require the conclusion that the killing committed by the defendants in this case cannot be deemed unlawful. The people in the cave could not look to the law for guidance. The statute was not explicit. The precedents cut both ways. They made every reasonable effort to obtain advance guidance from authoritative sources. In the end they had to decide for themselves. They did the best they could under the circumstances, selecting a process which was rational and fair. The end result was a net saving of lives. 
I cannot find it in my heart - and, more important, I cannot find it in the law - to condemn what they did. If there is disagreement with the preferences stated herein or with the conclusions derived therefrom, let the debate begin. I have [*1913] an open mind, untrammeled by the "natural" and "supernatural" myths of the past. Easterbrook, J. * "Whoever shall willfully take the life of another shall be punished by death." N. C. S. A. (n. s.) § 12-A. Defendants killed and ate Roger Whetmore; they did this willfully (and with premeditation, too). Were the language of the statute the end of matters, the right judgment would be straightforward, as Justices Keen and Kozinski conclude. See supra, at 1864 (Keen, J.); supra, at 1876 (Kozinski, J.). Then when the hangman had finished implementing the judgment, he too would be doomed, for the executioner takes life willfully; likewise we would condemn to death the police officer who shot and killed a terrorist just about to hurl a bomb into a crowd. Yet throughout the history of Newgarth such officers have been treated as heroes, not as murderers - and not just because the Executive declines to prosecute. Language takes meaning from its linguistic context, but historical and governmental contexts also matter. Recall the text: "Whoever shall willfully take the life of another shall be punished by death." "Willfully take the life of another," not "be convicted of willfully taking the life of another." Yet the latter reading is one that all would adopt: in our political system guilt is determined in court, not by the arresting officer or the mob. The statute is addressed in part to would-be killers and in part to judges, who in adjudicating a charge apply the complex rules of evidence that may make it impossible to prove beyond a reasonable doubt the guilt of someone who actually committed a murder. No one believes that N. C. S. A. (n. s.) § 
12-A overrides the rules of evidence, the elevated burden of persuasion, the jury, and other elements of the legal system that influence whether a person who committed a killing will be adjudicated a murderer. Like other criminal statutes, N. C. S. A. (n. s.) § 12-A calls for decision according to the legal system's accepted procedures, evidentiary rules, burdens of persuasion - and defenses. For thousands of years, and in many jurisdictions, criminal statutes have been understood to operate only when the acts were unjustified. The agent who kills a would-be assassin of the Chief Executive is justified, though the killing be willful; so too with the person who kills to save his own life. Only the latter is self-defense; the case of the agent shows that self-defense is just one member of a larger set of justifications. All three branches of government historically have been entitled to assess claims of justification - the legislature by specifying the [*1914] prohibition and allowing exceptions, the executive by declining to prosecute (or by pardon after conviction), and the judiciary by developing defenses. As a result, criminal punishment is meted out only when all three branches (plus a jury representing private citizens) concur that public force may be used against the individual. The legislature might curtail the role of the judiciary by enacting a closed list of defenses to criminal charges, but it has not done so. New statutes fit into the normal operation of the legal system unless the political branches provide otherwise. N. C. S. A. (n. s.) § 12-A does not provide otherwise. Our legislature could write a law as simple as N. C. S. A. (n. s.) § 12-A precisely because it knew that courts entertain claims of justification. The process is cooperative: norms of interpretation and defense, like agreement on grammar and diction, make it easier to legislate at the same time as they promote the statutory aim of saving life. The terrorist example proves the point. 
"Necessity" is the justification offered by our four defendants. After the first landslide, all five explorers were in great peril, and the rescuers outside the cave confirmed that all were likely to starve by the time help came. The choice was stark: kill one deliberately to save four, or allow all five to die. The death of one was a lesser evil than the death of five, and it was therefore the path that the law of justification encouraged. Military commanders throughout time have understood this equation and have sent squads and platoons on missions from which they were not expected to return, so that a greater number might be saved. Like all of the lesser-evil justifications, necessity is openly utilitarian. Self-defense may reflect uncertainty about the ability of the law to affect conduct by those in imminent fear of death, as Justice Tatting supposes, see supra, at 1862 (Tatting, J.) - though if this is so one wonders why the force used must be the least necessary to defeat the aggression, a restriction that makes sense only if the object of aggression is capable of rational thought and susceptible to influence of legal subtleties. But other lines of justification assume that the actor (our police officer, for example) is calculating and alert. The question is: what shall the law lead him to include or exclude from the calculation? Allowing a defense of necessity creates a risk that people may act precipitately, before the necessity is genuine. Thus if the law allows a starving mountaineer to break into a remote cabin as a last resort to obtain food - if, in other words, necessity is a defense to a charge of theft - it creates a risk that wanderers will break doors whenever they become hungry, even though starvation is far in the future. The parallel risk is that a hungry and poor person surrounded by food may decide to bypass the market and help himself to sustenance. 
These risks are addressed by the rule that the evil must be imminent and the means, well, necessary; the departure from the legal norm must be (as [*1915] with self-defense) the very least that will avert the evil. United States v. Bailey, 444 U.S. 394 (1980), employs this understanding to conclude that a prisoner under threat of (unlawful) torture by the guards may defend against a charge of escape by asserting that the escape was necessary to avert a greater evil, but the prisoner loses that defense if he does not immediately surrender to a peace officer who will keep him in safe custody. Allowing a defense of necessity creates a second hazard: the very existence of the defense invites extensions by analogy to situations in which criminal liability should not be defeated. That risk is met by the rule that all lawful or less hazardous options must first be exhausted. A prisoner must report his fears to the warden before escaping; and if the warden does nothing, the prisoner must escape rather than harm the guard. United States v. Haynes, 143 F.3d 1089 (7th Cir. 1998), which held that a prisoner who poured boiling oil over his tormentor rather than trying to flee could not assert a defense of necessity, illustrates this approach. The difference between the mountaineer case, in which breaking into a cabin is permitted, and Commonwealth v. Valjean, which held that a poor person may not steal a loaf of bread from a grocer, is that the poor person could negotiate with the grocer, or get a job, or seek public or private charity. A mountaineer who lacks other options to find food, and cannot negotiate with the cabin's (missing) owner, may break into the cabin because that is the last resource; theft is a lesser evil than death, though not a lesser evil than working. Negotiation, actual or potential, offers a good framework with which to assess defenses based on utility. 
If a defense actually promotes public welfare, then people who are not yet exposed to the peril would agree that the defense should be entertained. Suppose the five speluncean explorers had stopped on the way into the cave to discuss what to do in the event they became trapped. Doubtless they would have undertaken to wait as long as possible for rescue; and it does not stretch the imagination to think that they would have further agreed that if starvation appeared before rescuers did, they would sacrifice one of their number to save the rest. Each would prefer a one-fifth chance of death, if calamity happened, to a certainty of death. Although they might find the prospect so revolting that they would abandon their journey rather than reach such an agreement, the alternative - entering the cave under a set of rules that required all five to starve if any did - would be even worse in prospect. We know that they did enter the cave, and did so under a legal regimen that some members of this Court believe condemned all to starve; it follows that they would have preferred an agreement in which each reduced that risk by eighty percent. Hypothetical contracts are easy to devise; perhaps this accounts for endless philosophical debate about how people negotiate behind a veil [*1916] of ignorance. Judges should subject these speculations to a reality check. What do actual contracts for risk-bearing provide? I refer not to agreements reached after a disaster (such as the explorers' initial plan to cast dice on the twenty-third day, a plan that Whetmore later abjured in favor of waiting some more), but to agreements made before the fateful venture begins - agreements that encompass all of the relevant options, including the option of avoiding the risk altogether. Before going underground, spelunkers, like their above-ground comrades the rock climbers, agree to rope themselves together when scaling or descending walls and chimneys. 
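Justice Easterbrook's ex ante arithmetic can be made explicit. The sketch below is illustrative only; it uses the hypothetical figures from the opinion (five explorers, certain death absent a sacrifice, one sacrificed by lot under the agreement), not facts found in the record:

```python
# Illustrative arithmetic for the ex ante risk-sharing agreement
# described in the opinion. Assumption (from the opinion's hypothetical):
# if trapped past the point of rescue, all five die absent a sacrifice;
# under the agreement, exactly one of five, chosen by lot, dies.

n = 5  # number of explorers

# Regime 1: no sacrifice permitted -- every trapped explorer dies.
p_death_no_agreement = 1.0

# Regime 2: ex ante agreement -- one of n dies, each with equal chance.
p_death_with_agreement = 1 / n

# How much each explorer's personal risk falls under the agreement.
reduction = 1 - p_death_with_agreement / p_death_no_agreement

print(f"chance of death without agreement: {p_death_no_agreement:.0%}")  # 100%
print(f"chance of death with agreement:    {p_death_with_agreement:.0%}")  # 20%
print(f"risk reduction for each explorer:  {reduction:.0%}")  # 80%
```

This is the sense in which "each reduced that risk by eighty percent": the agreement trades a certainty of death, should calamity strike, for a one-in-five chance.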
If one loses his grip, the rope may save a life by stopping the fall - but the rope also creates a risk, for the falling climber may take the others down with him. By agreeing to rope up, each member of the group exposes himself to a chance of death because of someone else's error or misfortune. In exchange he receives protection against his own errors or misfortunes. Each accepts a risk of death to reduce the total risk the team faces, and thus his portion of the aggregate risk. Each agrees, if only implicitly, that if one person's fall threatens to bring all down, the rope may be cut and the others saved. What happened in the cave after the landslide was functionally the same: one was sacrificed that the others could live. That Whetmore turned out to be that one is irrelevant; the case for criminal culpability would have been equally strong (or weak) had any of the others been chosen. The explorers' ex ante agreement did not cover the precise form that the risk would take, or the precise way in which total loss would be curtailed, but it established the principle of mutual protection by individual sacrifice. Securing the reciprocity of advantage ex ante justifies the fatal outcome ex post for an individual team member. Society should recognize this agreement, and the way in which it promotes social welfare, through the vehicle of the necessity defense. To reject the defense is to reject the agreement itself, and to increase future loss. To accept the necessity defense (that is, the risk-sharing agreement) in principle is not necessarily to accept that a given death is within its scope. Rock climbers who cut a dangling comrade's rope prematurely, without exhausting the options to save all, commit murder. Cicero opined that if two sailors were cast adrift on a plank adequate to support only one until rescue came, each could try to be the survivor without criminal liability. But what if they were mistaken, and the plank would support two for long enough? 
What if all five explorers could have survived until rescue (on day thirty-two), or could have found another exit by further exploration rather than encamping near the cave mouth? Ancient mariners consented to the practice of survival cannibalism in principle, but a broad defense of necessity would have led them to kill a comrade too quickly. Reports were remarkably consistent in relating that the youngest or most corpulent survivor drew the short straw. See A.W. Brian Simpson, Cannibalism and the [*1917] Common Law 124, 131 (1984). To prevent a lesser-evil defense from becoming a license to perpetrate evil, the necessity must be powerful and imminent - again following the self-defense model. But the prosecutor did not argue that the speluncean explorers should have looked for another exit from the caverns, and the jury found that a committee of medical experts had informed the men trapped in the cave that if they did not eat, then there was "little possibility" of their survival until day thirty. The danger that a necessity defense would lead people to magnify (in their own minds) the risk they are facing, and to overreact, did not come to pass. On the facts the jury found, all five very likely would have died had they passively awaited rescue. They acted; four lived. Putting these four survivors to death would be a gratuitous cruelty and mock Whetmore's sacrifice. The judgment of conviction must be reversed. STUPIDEST HOUSEMAID, J. * No superior wants a servant who lacks the capacity to read between the lines. The stupidest housemaid knows that when she is told "to peel the soup and skim the potatoes" her mistress does not mean what she says. Supra, at 1858-59 (Foster, J.) I. The Truth "O'yeah, O'yeah, O'yeah." Now comes the "stupidest housemaid" to clean up the mess the white folks have made. Of course the convictions should be reversed. The stupidest housemaid don't know nothin' 'bout the rule of law. 
Of all the pretty things she's seen in the Big House she ain't never run cross that. But she knows what she thinks is right. That is the basis of her judgment. As it is the basis of all the other judgments as well. The housemaid the onliest one stupid enough to admit it. Maybe 'cause she got the least to lose. They call these things opinions for a reason. In the stupidest housemaid's opinion, the government should not stand a person on a platform, tie a rope around his neck, and then kick the platform out from under him. And invite guests to watch him vomit blood. In the first place, who but the stupidest housemaid gone be left to scrub the blood out the city square? She good at cleaning up white folks' ugly messes, but it hard work and it take a long time. [*1918] Second, what the point? The government should kill people to prove that killing people is wrong? It don't make no sense to the stupidest housemaid. She know she sposed to separate the punishment from the crime but she cain't. She shouldn't. And most importantly she don't have to. Because, for once, she the judge! And so she won't. The conviction is reversed because the stupidest housemaid think the death penalty is wrong. It so ordered. But it ain't over. Doing day work in the courthouse the stupidest housemaid watches the judges in their chambers. She know they reach they decisions exactly the same way that she just did. They decide what result they want. Then they "interpret" the law to get that outcome. They "opinion" ain't nothing but a big fantasy to explain they climax. But the stupidest housemaid different: she a squirrel that go right to the nut. So she gone tell the truth about her decisionmaking process. She reverse the conviction cause she do not feel what the defendants did was wrong. Maybe if she did she could "interpret" an excuse for the government to break necks. But she sposed to write an opinion! So maybe the stupidest housemaid try that analysis foreplay and see if it get good to her. 
Her fantasies good as anybody's. Look here. II. The Analysis First of all, the stupidest housemaid would like to thank God, without Whom none of this would be possible. A "crime" is an expression of the moral condemnation of the community, or at least the jury, or, at least in this case, the judge. On her knees the stupidest housemaid prayed to God. God answered "I find nothing to condemn. Haven't you read Exodus? I told Pharaoh to let my people go. When he would not, I killed all the firstborn sons in the land. That changed Pharaoh's mind right quick. So when I consider these spelunceans and how they dealt with the obstacle they encountered on the way to their own promised land, all I can say is you gotta do what you gotta do. If life is holy - and it is - it is better that one person died rather than five." Having determined no moral culpability in the defendants' actions, the stupidest housemaid finds no practical reason to punish them either. Certainly there is no justification from deterrence. People who believe that they are going to die immediately will not be prevented from saving they own lives by the threat of dying ultimately. The stupidest housemaid knows that if she found herself in the position that the spelunceans encountered she would have grabbed a butcher knife and commenced to stabbing with the quickness. Most anybody would. In Regina v. Dudley and Stephens, 14 Q.B.D. 273 (1884), Lord Coleridge, considering a similar case, voted for conviction saying, "We are often compelled to set up standards we cannot reach ourselves, and [*1919] to lay down rules which we could not ourselves satisfy." How very traditional, to support a law with which one has no intention of complying. The stupidest housemaid says "later for that bullshit." The remaining justification of punishment - incapacitation - fails as well. 
There is no need to incapacitate these men because hopefully they will have more sense than to go poking around caves again without taking the appropriate precautions. And if they do, they will assume the risk that they might meet the same demise as their lost brother Whetmore. The stupidest housemaid knows that the law cannot stop a billionaire from trying to fly around the world in a hot air balloon. Rich men gone do what they want to do, regardless of the consequences. And when they finally reach they goal, they gone be lauded as heroes. Regardless of the losses. Were it up to her, the stupidest housemaid would forbid the government from sending workmen to rescue any explorers who find themselves lost due to their own folly. Here's a killing that would make a nice prosecution. Her brothers were among the ten who died to rescue the four who survived. And everybody having fits and conniptions about whether the four explorers should be punished for the death of the fifth speluncean. Ain't nobody uttering a damn word about whether the law should avenge the killing of the workmen. Oh the government sent the families a plaque commemorating the sacrifice of true and faithful servants. But the prosecutor explained the law didn't fit right around the concept of crime and punishment for their deaths. Seemed to the stupidest housemaid like the criminal law was made to protect the spelunceans, not the workmen. There was, hundreds of years ago, another justification of punishment: rehabilitation. This justification died in the last part of the twentieth century, in part because of the Negroes: they were difficult and expensive to rehabilitate and it was pleasurable to punish them. Accordingly, there is no need to consider here whether rehabilitation would be an appropriate reason to punish the speluncean defendants because no jurisdiction, including Newgarth, now recognizes rehabilitation as an appropriate justification. All right, how they end it? 
What is the magical incantation you supposed to put at the conclusion? Oh yeah, here it go: "For the foregoing reasons, the convictions must be reversed." III. The Whole Truth Whee! That was fun! Habit forming, even. The stupidest housemaid start to like the smell of her own shit. But for real, even her own words just a bunch of sound and fury, signifying nothing. Leastways they do not signal a rule of law. Because the stupidest housemaid knows that the rule of law is a myth, something rich white folks made [*1920] up to keep everybody else from taking they stuff. Poor and colored folks sposed to shut up when the law tells them they cain't have what rich people have. They sposed to believe it ain't the rich folks making up shit - it's the rule of law. But the law can often be argued every which way but up. And when a judge decides a hard case all he doing is choosing the argument he like the best. Or sometimes choosing his own argument instead. If he chooses another result, that would suit the law just as well. So in any case it ain't no "neutral" decisionmaking. The judge chooses, not interprets, and he chooses based on the result he wants. And the Supreme Court of Newgarth ain't never gone choose law to favor the poor and colored folks - at least not to the point that the rich white folks' richness and whiteness is threatened. They might, if they feeling expansive, put a stupid housemaid on the Supreme Court. But rich white folks gone handle they business. They gone protect their interests. So that why it works out well for some people that there just ain't no rule of law. But even if folks wanted to follow one rule to get justice in every case, they couldn't. Laws made by human beings ain't that smart. Including the Newgarth murder statute. The stupidest housemaid don't care what All Knowing Bell Curve Topping white man thought them up, thirteen words ain't gone hold the just answer to every case, and nobody can believe that they do. 
For example, soon as the stupidest housemaid read the words, "Whoever shall willfully take the life of another shall be punished by death," she think, "Oh good. Now some of these trigger happy cops riding 'round shooting black and Hispanic folks in the line of duty gone get they just deserts." Then come to find out that ain't what the law means. The stupidest housemaid asks, "ain't that what it say?" "Yeah," rule of law shout back, "but that ain't what it mean." Oh. So how you sposed to know what it mean? That old cracker Justice Foster say even the stupidest housemaid know how to read between the lines. Sometimes Miss Ann say fetch me B when she mean fetch me C. You bring her B, your ass gone get whipped, and what Miss Ann actually said ain't gone make a damn bit of difference. So old man Foster right about one thing: when you the servant on the bottom, you better learn how to read the mind of the master on the top. It's a survival skill. And knowing what the stupidest housemaid know, ain't one police officer who kills in the line of duty ever gone be hanged by the government, even though that what the law call for. 'Cause the law don't mean what its words say it mean. It mean what the judge say it mean. And Hallelujah, Stupidest Housemaid the judge right now! She not the only judge, however. The stupidest housemaid ain't got too much to say about the opinions of the other judges, 'cause, for real, they opinions don't matter any more than hers. Onliest thing [*1921] that matters is they votes. So what we got? Two judges say the government should break necks, and four say the government should not, leastways not no speluncean necks. The non-breakers of necks prevail. It funny though - all these masters of the legal universe and they couldn't agree on whether shit stinks. But they all write so pretty. They all persuade the stupidest housemaid. They all right about the law. They all wrong about it too. 
Justice Kozinski onliest one say follow the words of the statute, 'cause they "clear." See supra, at 1876 (Kozinski, J.). Okay, so after he kill the speluncean, he gone kill the executioner? He gone kill the police officer who shoots in the line of duty? He gone kill the self-defender? 'Cause the law tell him to? He imply he will, but the stupidest housemaid say that's a damn lie. Justice Sunstein say follow the law less the outcome so "peculiar and unjust" it seem "absurd." Supra, at 1884 (Sunstein, J.). Just how you sposed to know what is "peculiar" and "unjust" and "absurd" the good Justice don't directly say. He do say if you kill a terrorist to save the "innocent" that's cool, but if you kill a speluncean to save your ownself you go directly to jail. See id. at 1885, 1888. Ok. But then he add if you kill a speluncean as part of a plan that the speluncean agreed upon, then you don't go to jail. See id. at 1889. Well he say you might not. He say that punishment in that case "conceivably" would be absurd. See id. I guess it depend on what the judge decides. That's cute, but what it got to do with the rule of law? Justice West be making up stuff also. She go on and on 'bout the beauty of the rule of law and how in this case it means those spelunceans should be convicted. See supra, at 1893-95 (West, J.). Then she have the nerve to add, "having rejected the defendants' contentions, it is nevertheless clear" to her that the spelunceans should not be executed. Id. at 1897. She pick and choose the parts of the rule of law she like. So to hang the defendants would be "unjust." Apparently we ain't sposed to measure justice by what the legislature decided - we sposed to have a hearing about "mercy." The stupidest housemaid feels Justice West's pain, but sisterfriend, let's be real: you doing politics and religion here, not law. So take a deep breath and put that rule of law baggage down - it will set you free. 
Justice Easterbrook done discovered some contract the speluncean made to share risk. See supra, at 1916 (Easterbrook, J.). The stupidest housemaid looked all over the Newgarth law books, but she ain't found no contract exception to the murder law. Even so, Easterbrook say killing the spelunceans would be "gratuitously cruel[ ]." Id. at 1917. So I guess he calling his boys Kozinski and Sunstein - who voted to break the spelunceans' necks - "gratuitously cruel." Ironic thing is Easterbrook is the main one claim to be applying science to [*1922] reach his result. So it seem if Easterbrook gone talk about his boys, he should call them stupid, not cruel. But he right. Kozinski and Sunstein ain't dumb - they just mean. And when Easterbrook call them cruel, he simply proves the stupidest housemaid's point and does what all the other justices do: religion, not science. They use words like "absurd" and "unjust" and "cruel" as an excuse to do as they damn well please. The stupidest housemaid could trash her own opinion just as well. She claim she totally opposed to the death penalty but then she cite God's offing the Egyptians to prove that killing ain't necessarily wrong. She claim she don't like the Newgarth punishment for murder, but she also say she tried to get it applied to the people responsible for her brothers' deaths. Stupidest Housemaid re-read her opinion and she think she out to lunch when she wrote that shit. But at least she open about her purpose. She never claimed she was doing anything but politics. IV. Nothing But the Truth So what it all mean? Two things about the law: it can be argued both ways in hard cases; and, in the hands of rich white men, it can be a real bitch. Take the Declaration of Independence and the Constitution of the United States. Please. You want to see a rebuke to the principle of rule of law, just look right there. Declaration of Independence say "all men are created equal," The Declaration of Independence para. 2 (U.S. 
1776), and Constitution say bring in all the niggers you want as slaves until 1808. Then stop and just breed them. See U.S. Const. art. I, § 9, cl. 1. Thomas Jefferson is writing about freedom and liberty and fucking his slave and selling their children. There are schools named after this man where they teach you about the rule of law. The Fourteenth Amendment say every citizen has the right to equal protection of law, see U.S. Const. amend. XIV, § 1, and in McCleskey v. Kemp, 481 U.S. 279 (1987), the Supreme Court say if some citizens receive the death penalty cause they black, what the hell can we do? Shit happens. See id. at 314-19. It scare the stupidest housemaid, but she can look at the Fourteenth Amendment and read Plessy v. Ferguson, 163 U.S. 537 (1896), and think that opinion is rightly decided. It seems correct. The rationale make sense. Hell, Chief Justice Rehnquist said the same thing when he was a law clerk. But then to the relief of the stupidest housemaid, the Brown v. Board of Education, 347 U.S. 483 (1954), opinion make sense too. It seems right also. So much for the rule of law. And that scare her too. Why? Because it is true that it would be useful for the rule of law to exist. It may even be true that the servant needs a rule of law more [*1923] than the master. But the stupidest housemaid knows that her needs and the way the world works are two different things. As necessary as it might be, the rule of law does not exist. Don't take it out on the stupidest housemaid. It ain't her radical assault on truth, it's the truth itself. When Pythagoras announced that the world is round, people fussed at him too. They said the world was easier to navigate if it was flat. The pitifulest thing is that the main ones believing in the rule of law are the ones getting screwed by the myth of it the most. The stupidest housemaid finds those jurors who surrendered their power to this Court might be just a little more stupid than she. 
What this Court know any better than they? Why should its "opinion" be more respected? If you on the bottom, and you get a little bit of power, you ought to have more sense than to give it right back. The stupidest housemaid laughs, considering how the chickens have come home to roost. White folks been sacrificing the lives of people of color for centuries - for the white folks' greater good. First they put them in ships and now they put them in cages. Reservations. Detention Centers. Send them back to Mexico, or the greedy killing fields. But when white folks sacrifice white lives for the greater good, it's a big confusing problem. FOOTNOTES: n1. Unlike several of the writers of the opinions that follow, I cannot resist the temptation to use footnotes (in moderation, of course). The purpose of this footnote is to suggest to the reader that you may well prefer, as I do, to read a foreword, if at all, only after you have read all that follows. This practice not only tends to make the foreword more readable, but also eliminates any chance that the views expressed in the foreword will affect your reactions to what follows. Thus, you are in a better position to assess the merits and defects of the foreword itself. But if you must, read on. n2. Henry M. Hart, Jr., The Power of Congress to Limit the Jurisdiction of Federal Courts: An Exercise in Dialectic, 66 Harv. L. Rev. 1362 (1953). n3. Lon L. Fuller, The Case of the Speluncean Explorers, 62 Harv. L. Rev. 616 (1949). The article is reproduced below, see infra, at 1851-75, and for ease of reference, citations will be to the article as it appears in this issue. n4. This saying is a Shakespearean phrase I learned from Paul Freund and have always treasured because it is so obscure. In olden times, a branch of ivy (a bush) was hung outside a tavern to indicate wine for sale. n5. Hart's Dialogue was reproduced in the third edition of Paul M. Bator, Daniel J. Meltzer, Paul J. Mishkin & David L. 
Shapiro, Hart and Wechsler's The Federal Courts and the Federal System (3d ed. 1988), but was replete with footnotes bringing aspects of the text up to date. See id. at 393-423. Because this burden had become increasingly heavy, we decided, as editors of the fourth edition, to discuss and quote liberally from the article but not to reproduce it. See Richard H. Fallon, Daniel J. Meltzer & David L. Shapiro, Hart and Wechsler's The Federal Courts and the Federal System 366-67 (4th ed. 1996). n6. See Hart, supra note , at 1395. At this point, Hart was complaining that under then-current Supreme Court doctrine, an alien who had entered the country illegally had all the protections afforded by the guarantees of due process, while a resident alien who goes abroad to visit a dying parent and seeks to return with a duly issued passport and visa appeared to have none. Now, we live in an era when - perhaps as a reaction to earlier times - one can be accused of a racial slur and lose one's job (at least for a while) for using "niggardly," a word of Scandinavian origin, see Jonathan Chait, Doubletalk, New Republic, Feb. 22, 1999, at 50, 50, and when "he" is no longer a politically acceptable generic pronoun. We are all caught up in our times. n7. That all the justices are male is evident from the internal references by the justices themselves to their colleagues. Their other characteristics are matters of conjecture, and indeed assumptions about those characteristics can only be based on a guess about how people in the late 1940s thought about the judiciary, and on the failure of any of the justices to make a point about his race, nationality, or background. To quote Professor Eskridge in his 1993 discussion of Fuller's piece, "There is no explicit clue of any sort to the race of any participant. That is, itself, an implicit clue. In the 1940s, it went without saying that you were white if your race was not noted." William N. 
Eskridge, Jr., The Case of the Speluncean Explorers: Twentieth-Century Statutory Interpretation in a Nutshell, 61 Geo. Wash. L. Rev. 1731, 1750 n.111 (1993). (Is this assertion - that a black writer in the 1940s, writing in any context, would always refer to his race - supported by empirical data?) Eskridge goes on to say, "The affluence of the Speluncean world is suggested by the preppy, upper-class context of the hypothetical: the hobby is the rarefied, relatively expensive one of cave-exploring. Moreover, [the case ends] up as a battleground of Newgarth's political elites (the Chief Executive and the Court)." Id. at 1750-51 n.112 (citation omitted). (Was Fuller's move - from the real-life seafaring cases cited below, see infra note , to a case involving explorers - made in order to change the social class of the accused or because the cave situation was more pliable in terms of the facts he wanted to develop? And is the institutional issue he wanted to present - the issue of institutional role in a system of law - properly characterized, in terms of either its significance or the author's purpose, as a "battleground" of "political elites"?) n8. Fuller, infra, at 1859 (Foster, J.). To quote Professor Eskridge again, "The only appearances of nonwealthy people in the case are demeaning... Most revealing is the snide reference by Justice Foster - the 'nice' Justice - to the 'stupidest housemaid.'" Eskridge, supra note 7, at 1751 n.112. This point is made the capstone of Professor Paul Butler's opinion on this issue. See infra, at 1917 (Stupidest Housemaid, J.). n9. See Regina v. Dudley & Stephens, 14 Q.B.D. 273 (1884) (involving defendants, who, after twenty days on a lifeboat, killed and then ate the youngest person on the boat - evidently without any agreed-upon procedure for determining the one to be sacrificed - and who were ultimately convicted of murder but had their death sentences commuted); United States v. Holmes, 26 F. Cas. 360 (C.C.E.D. Pa. 1842) (No. 
15,383) (involving a defendant who was a member of the crew of a ship that sank and who was convicted of manslaughter and sentenced to six months imprisonment for throwing several passengers out of a long-boat so that he and the others in the boat might survive). n10. Some think that to deal with a case fairly and fully, we must be able to explore in depth every aspect of the context in which it arises. Cf., e.g., John T. Noonan, Jr., Persons and Masks of the Law 111-51 (1976) (discussing the context of Palsgraf v. Long Island Railroad, 162 N.E. 99 (N.Y. 1928)). Of course, no hypothetical can meet such a demanding standard, though Fuller has clearly gone beyond the standard A, B, and C of the law school classroom, and made a concerted effort to provide enough information for full debate of the issues he wanted to raise. n11. Fuller, infra, at 1853 (Truepenny, C.J.). n12. See id. at 1853-54. n13. Eskridge suggests that they are, and I agree. See Eskridge, supra note 7, at 1742. n14. Fuller, infra, at 1855 (Foster, J.). n15. See id. at 1858. n16. See id. n17. See Fuller, infra, at 1859-61 (Tatting, J.). n18. See id. at 1863. n19. See Fuller, infra, at 1864 (Keen, J.). n20. See id. at 1868. n21. See Fuller, infra, at 1868, 1870 (Handy, J.). In a delightful passage in which Fuller perhaps gets carried away, Justice Handy dismisses the likelihood of executive clemency on the basis of his knowledge of the Chief Executive's character - knowledge acquired because, as it happens, "my wife's niece is an intimate friend of his secretary." Id. at 1872. n22. See id. at 1870. n23. See Eskridge, supra note , at 1737 n.38 (citing Lon L. Fuller, The Law in Quest of Itself (1940); Lon L. Fuller, American Legal Philosophy at Mid-Century, 6 J. Legal Educ. 457 (1954); and Lon L. Fuller, Reason and Fiat in Case Law, 59 Harv. L. Rev. 376 (1946)). n24. See Henry M. Hart, Jr. & Albert M. Sacks, The Legal Process: Basic Problems in the Making and Application of Law 1111-1380 (William N. 
Eskridge, Jr. & Philip P. Frickey eds., Foundation Press 1994) (1958). n25. Eskridge, supra note . n26. Naomi R. Cahn, John O. Calmore, Mary I. Coombs, Dwight L. Greene, Geoffrey C. Miller, Jeremy Paul & Laura W. Stein, The Case of the Speluncean Explorers: Contemporary Proceedings, 61 Geo. Wash. L. Rev. 1754 (1993) [hereinafter Contemporary Proceedings]. n27. Eskridge, supra note , at 1732. n28. Id. at 1743. n29. Id. at 1750-51. n30. This is my reading of the conclusions reached, but in some instances, the authors might disagree with that reading. n31. See Eskridge, supra note , at 1751-52. n32. See Contemporary Proceedings, supra note , at 1800 ("We have both the right and the responsibility to interpret statutes in such a way as to serve the apparent legislative purpose - indistinct as that may be ....") (opinion of Professor Geoffrey C. Miller). n33. See id. at 1801-07 (opinion of Professor Jeremy Paul). Paul rejects Justice Handy's reliance on the views of "the common man." Id. at 1806. But in reaching his conclusion that it would be "monstrous ... to put these defendants to death for actions we can't even agree constitute a crime," id. at 1807, Paul confesses his "inability to announce an overarching principle that compels reversal," along with his lack of concern that this is so, id. at 1805. n34. Id. at 1763 (opinion of Professor Naomi R. Cahn). n35. See id. at 1785, 1787, 1789 (opinion of Professor Mary I. Coombs). In Fuller's hypothetical, the jury foreman asked that the question of guilt be determined by the court on the facts as found, and we are told that "counsel for the defendants" accepted the procedure. Fuller, infra, at 1853 (Truepenny, C.J.). n36. Contemporary Proceedings, supra note , at 1811 (opinion of Professor Laura W. Stein). 
Chief Justice Truepenny's summary of the facts states only that Whetmore (speaking from inside the cave and just before communications were cut off) "asked if there were among the party [outside the cave] a judge or other official" who would tell them whether it "would be advisable" to cast lots to determine who should be eaten, but "none of those attached to the rescue camp was willing to assume the role of advisor in this matter." Fuller, infra, at 1852 (Truepenny, C.J.). Thus, we are not told whether any judges or other authorities on the law were present at all. n37. Contemporary Proceedings, supra note , at 1766 (opinion of Professor John O. Calmore). n38. Id. at 1790-91 (opinion of Professor Dwight L. Greene). n39. Fuller does not tell us whether Newgarth has a constitutional guarantee of jury trial like ours or whether it has a constitution at all. Indeed, I am sure he wanted his readers to think about these issues free from whatever constraints a constitution might impose. Moreover, Professor Coombs seemed determined to jam a jury trial down the defendants' throats whether they wanted one or not - an idea squarely at odds with our own constitutional precedent, see Patton v. United States, 281 U.S. 276 (1930). n40. My count assumes that Professor Butler would reverse the conviction in its entirety. If he would reverse only on the issue of the appropriate sentence, then the vote to affirm would be 4-2. n41. Infra, at 1878 (Kozinski, J.). Since I first set down my thoughts on the initial drafts of the contributors to this revisiting of Fuller's case, several of those contributors have supplemented their opinions with insightful critiques of the approaches of their colleagues. Perhaps the most complete of these critiques is Kozinski's, whose comments sometimes overlap, sometimes improve on, and sometimes considerably surpass, my own. But having invested the initial effort in collecting my own thoughts, I am unwilling to forgo the opportunity to voice them now. 
n42. See id. n43. Id. at 1879. n44. In this example, Kozinski asks whether it would be appropriate for a court to remedy a "legislative oversight" or fill in what may or may not have been an inadvertent gap in the statute, by applying the law to the killing of a dog. Id. at 1878. n45. See Eskridge, supra note , at 1798 (opinion of Professor Geoffrey C. Miller) ("There are many contexts in which 'another' can mean an animal. True, we naturally read the qualification 'human being' after the word 'another,' but that is only because execution for killing an animal seems excessive."). n46. See, e.g., Cass R. Sunstein, The Supreme Court, 1996 Term - Foreword: Leaving Things Undecided, 110 Harv. L. Rev. 6, 20-21 (1996). n47. See infra, at 1884-85 (Sunstein, J.). n48. See Fuller, infra, at 1860-62 (Tatting, J.), 1864-67 (Keen, J.). n49. See Fuller, infra, at 1861 (Tatting, J.) (noting the impulsive character of resisting an aggressive threat to one's life). n50. See id. at 1862 (questioning whether it would have mattered if Whetmore had refused from the beginning to participate in the plan). n51. See infra, at 1895-97 (West, J.). n52. See id. at 1896-97. n53. See id. at 1899. n54. See id. at 1898. n55. See Fuller, infra, at 1870-73 (Handy, J.). n56. The history of the Supreme Court's struggle with the constitutional problems presented by capital punishment is remarkable, with respect to both the changes in the Court's approach over time and the deep divisions within the Court at any particular time. For several decades, the Court has grappled with a steady series of cases involving the permissible circumstances in which capital punishment may be imposed, as well as the considerations that may, may not, or must be taken into account. See Carol S. Steiker & Jordan M. Steiker, Sober Second Thoughts: Reflections on Two Decades of Constitutional Regulation of Capital Punishment, 109 Harv. L. Rev. 
355 (1995) (describing and critiquing the Supreme Court's treatment of these issues since 1972 and concluding that "the death penalty is, perversely, both over- and under-regulated"). n57. See supra note . n58. Cf. David L. Shapiro, Continuity and Change in Statutory Interpretation, 67 N.Y.U. L. Rev. 921 (1992) (making a similar argument about the role of courts in dealing with statutes). n59. See infra, at 1915 (Easterbrook, J.). n60. See id. at 1915-16. n61. In a previous article, Easterbrook more fully discusses this distinction, emphasizing, inter alia, the difference between a statute that enacts a code of rules, on the one hand, and a statute that delegates a kind of common law interpretive function to the courts on the other. See Frank H. Easterbrook, Statutes' Domains, 50 U. Chi. L. Rev. 533 (1983). n62. Fuller, infra at 1856-57 (Foster, J.). n63. See infra, at 1893 (West, J.). n64. See infra, at 1914 (Easterbrook, J.). n65. Who turns out to be a "gay woman of color." Infra, at 1901 n.3 (De Bunker, J.). n66. See id. at 1899-1900. n67. See id. at 1904-05. n68. See United States v. Holmes, 26 F. Cas. 360, 369 (C.C.E.D. Pa. 1842) (No. 15,383); Regina v. Dudley & Stephens, 14 Q.B.D. 273, 288 (1884). n69. Citing what is surely the more famous of these cases - Regina v. Dudley and Stephens, 14 Q.B.D. 273 (1884) - in support of his argument, Dershowitz notes that the Dudley court was divided, that the result was followed by executive clemency, and that in any event, "the vast majority of comparable cases - both before and after that decision - resulted in acquittal or decisions not to prosecute ...." Infra, at 1904 (De Bunker, J.). The first two of these points strike me as furnishing little support for Dershowitz's argument. Few controversial decisions are unanimous; what is critical is that neither the British nor the Newgarth legislature opted to reject the result. 
And as for the subsequent commutation, it resembles what Chief Justice Truepenny urged in voting to affirm; such extraordinary cases, he contended, are not appropriate for rules promulgated by courts without any legislative authorization, but rather, they call for the case-by-case exercise of executive discretion focused on the particular circumstances. See infra, at 1853-54 (Truepenny, C.J.). Finally, I am puzzled by the reference to "the vast majority of comparable cases." There is no citation of supporting authority, and I did not know that the practice of cannibalism in these circumstances is so common that it is possible to speak of the cases in terms of a vast majority. (Perhaps my notion of what cases are comparable is a less expansive one.) There may be a large iceberg under the few appellate cases on the subject, but I am unaware of any empirical studies to support its existence. n70. Kozinski's examples in support of this point, see infra, at 1880 (Kozinski, J.), are a delight. n71. See infra, at 1905-09 (De Bunker, J.). n72. Id. at 1909. n73. See Paul Butler, Racially Based Jury Nullification: Black Power in the Criminal Justice System, 105 Yale L.J. 677 (1995). n74. Fuller, infra, at 1859 (Foster, J.). n75. Infra, at 1918 (Stupidest Housemaid, J.). n76. Id. at 1919. n77. See id. at 1918. n78. See id. n79. Id. at 1920. n80. Id. at 1923. n81. See Fuller, infra, at 1862 (Tatting, J.). n82. "The majestic equality of the law ... forbids the rich as well as the poor to sleep under bridges, to beg in the streets, and to steal bread." Anatole France, Le Lys Rouge 111-23 (1894), quoted in The Oxford Dictionary of Quotations 292 (Angela Partington ed., 4th ed. 1992). n83. I must admit that in spite of myself, I couldn't help smiling at Butler's Handy-like trashing of the opinions of the other new justices, as well as of his own, and at the allusion (advertent, I'm sure) to Henny Youngman's most famous one-liner. n84. See Fuller, infra, at 1870 (Handy, J.). n1. 
The presumption in favor of plain meaning and the void-for-vagueness doctrine are cousins because both are designed to promote rule of law values and, in particular, to give the legislature an incentive to speak clearly. n1. It is to be regretted perhaps, though understood, that many atheists remained wedded to prior tribal groups. There were Jewish atheists, Catholic atheists, Protestant atheists, Muslim atheists, and other smaller groupings, arguing vigorously over which God not to believe in. Even prior to the great apocalypse, many thoughtful people understood that their religious "beliefs" and practices were based on myths similar to those of their polytheistic predecessors. But they also saw that religion was important to the lives of many of their friends and that it produced much good - like a placebo taken by one who believes it to be a potent medicine. They were content to regard religion as a pious and harmless fraud. But the great apocalypse demonstrated how dangerous such myths had become, and most citizens began to demand that religion be treated like other irrational belief systems such as astrology, tarot cards, and voodoo. Soon it became as unfashionable to believe in the supernatural doctrines of formal religion as it was to believe that the earth was flat. Even prior to the Great Fundamentalist Wars of the third millennium, some courageous intellectuals began to challenge monotheistic dogma, but they had considerable difficulties in persuading the masses. Part of the reason for their hardship was that certain evil totalitarian regimes had forced atheism on their citizens, thereby associating disbelief in God with tyranny. It became voguish for prudent intellectuals to argue that science (empirical truth) and faith (belief) must be kept separate and that matters of faith should not be judged by scientific criteria. 
This, too, however, was a myth because many of the claims of faith - for example, that Moses parted the Red Sea, that Jesus walked on water, and that Mohammed ascended to heaven on a horse - are empirical and historical: they either happened or they were made up. Following the wars, more people began to insist on proof of such claims and concluded that they were fictional. n2. Contemporary historians still cannot solve the intellectual puzzle of why, for more than 2,000 years, so many people regarded belief in one supernatural being (monotheism) as an "advance" over belief in many supernatural beings (polytheism). n3. To illustrate the point that principles of "natural law" can cut in different directions, consider the principle that every human life is of equal value. Justice West employs a variation on that principle to demand conviction in this case. Yet the American Law Institute cites precisely the same principle to justify the killing of one innocent person to save the lives of many: "The life of every individual must be taken in such a case to be of equal value and the numerical preponderance in the lives saved compared to those sacrificed surely should establish legal justification for the act." Model Penal Code § 3.02 commentary at 14-15 (1962). As a gay woman of color, I am particularly skeptical of deriving moral laws from the nature of human beings because history has shown that most such laws have been derived from the purported nature of "man" - in the past, usually a white, heterosexual man of the dominant group. I am also skeptical of inalienable rights because, for centuries, such rights did not include those of women, gays, or racial minorities. Today, of course, whites are the racial minority in most nations, including our own. The principle, however, remains the same. 
Of course, positive laws - such as those enacted in Nazi Germany in the second millennium - have been used to subordinate (and worse) many human beings, but natural law has been likewise abused. These are all powerful arguments for why we should prefer laws that entrench certain basic rights, such as equality, freedom of conscience and expression, due process, and other protections against the tyranny of positive, natural, or other kinds of law and lawlessness. I also prefer a system that assures both religious freedom for those few dissidents who continue to insist that there is a god - who gave Moses the Torah, is Jesus's father, and inspired Mohammed - and the freedom to believe in and practice other irrational superstitions, so long as such practices do not interfere with the rights of the vast majority of rational people to base our lives on principles of human reason. Efforts to impose atheism by law have failed, as have efforts to impose religion by law. The marketplace of ideas and beliefs has proved to be the better option. n4. I, too, believe that certain rights should be accepted by agreement as inalienable, or at least as not subject to abrogation by a simple majority. This is my preference, and I hope to persuade others to agree with it. n5. As the ancient Talmud rhetorically asked: "Who knows that your blood is redder?" Sanhedrin 74a in The Babylonian Talmud 503 (I. Epstein ed. & H. Freedman trans., 1935). n6. In the old days, the prospect of punishment in the afterlife - eternity in hell - could be threatened. Today, of course, few believe in such irrational "ghost stories." Even in the past ages of religions it is doubtful whether many people actually believed in heaven and hell because so many sins were committed by "believers." The threat of eternal punishment and reward did not dispense with the need for earthly punishments to deter crimes that were also sins. n7. 
There may, of course, be moral objections if the penalties necessary to deter the conduct are too harsh or fall too heavily on innocent third parties. See, e.g., supra, at 1897-99 (West, J.) (appearing to make such an argument in her rejection of the death penalty as a punishment for the defendants, although she does believe they are guilty under the statute). n8. It could be argued that elite philosophers or jurists are better suited because of their intellect and education to make such decisions. Many millennia ago, a Greek philosopher named Plato proposed such an elitist theory of decisionmaking. Most democracies have rejected it, concluding instead that representative decisionmaking is preferable. Choosing who should decide the law, too, is ultimately a matter of preference and persuasion. However, the advocates of representative decisionmaking have generally prevailed over time. n9. See, e.g., David Daube, Collaboration with Tyranny in Rabbinic Law (1965); Marilyn Finkelman, Self-defense and Defense of Others in Jewish Law: The Rodef Defense, 33 Wayne L. Rev. 1257 (1987). Among the cases - some actual, others hypothetical - considered in the Talmud are the following: an enemy general surrounds a walled city and threatens to kill all of its inhabitants unless they turn over one individual for execution; two people are dying of thirst in the desert with enough water between them to save one but not both; a child, below the age of legal responsibility and thus deemed innocent, threatens the life of another innocent person and can be prevented from killing only by being killed (the filmmaker Alfred Hitchcock presented a variation on this theme in an episode from his television program); and a fetus endangers the life of a pregnant mother who can be saved only by killing the fetus (a variation is that during delivery, the baby endangers the life of the mother who can be saved only by killing the partially delivered baby). n10. 
Johann Christoph Friedrich von Schiller, Wallenstein's Camp, sc. 4 (1798), quoted in Bartlett's Familiar Quotations 365 (John Bartlett & Justin Kaplan eds., 16th ed. 1992). n11. The killing was also premeditated, as are all judicial executions. The official death certificate in a famous death penalty case during the last century of the second millennium - the Sacco and Vanzetti case, Commonwealth v. Sacco, 151 N.E. 839 (Mass. 1926) - listed the cause of death of the defendants as "electric shock judicial homicide." Certificate of Death of Bartolomeo Vanzetti (1927) (on file with the Harvard Law School Library). n12. One of my judicial colleagues, whom I will not name, is sometimes referred to as "Necessity," because he too "knows no law." n13. See Sanford H. Kadish & Stephen J. Schulhofer, Criminal Law and its Processes: Cases and Materials 860-80 (6th ed. 1995). Surely the death of several people is a greater harm than the death of one person. But see Nezikin 5, in The Babylonian Talmud (I. Epstein ed. & H. Freedman trans., 1935) ("Whosoever preserves a single soul of Israel [it is] as though he had preserved a complete world."). n14. Perhaps this decision would be influenced by the tragic reality that so many of those who created the dilemma - the Nazi murderers - got away with it. n15. The necessity defense has been "anciently woven into the fabric of our culture." J. Hall, General Principles of Criminal Law 416 (2d ed. 1960), cited in Laura J. Schulkind, Note, Applying the Necessity Defense to Civil Disobedience Cases, 64 N.Y.U. L. Rev. 79, 83 n.20 (1989). It can be found in caselaw dating as far back as 1551 in Reniger v. Fogossa, 75 Eng. Rep. 1 (K.B. 1551). Arguing that a captain who docked his ship to avoid a storm would not have to forfeit his goods as the statute would have required, the Court concluded: [A] man may break the words of the law, and yet not break the law itself ... And therefore the words of the law ... 
will yield and give way to some acts and things done against the words of the same laws, and that is, where the words of them are broken to avoid greater inconvenience, or through necessity ... Id. at 29. The Reniger court reached even further back to the New Testament example in Matthew 12:3-4 of eating sacred bread or taking another's corn through necessity of hunger. See id. at 29-30; see also Edward B. Arnolds & Norman F. Garland, The Defense of Necessity in Criminal Law: The Right to Choose the Lesser Evil, 65 J. Crim. L. & Criminology 289, 291 n.27 (1974) (citing Reniger). Arnolds and Garland enumerate many other older, see Arnolds & Garland, supra, at 291 nn.29-34, and modern, see id. at 291-92 nn.35-37, English cases that "recognize the general principle of necessity," id. at 291, as well as both federal, see id. at 292 nn.38-44, and state, see id. at 292 nn.45-50, cases in the United States. The court system's recognition of the necessity defense is also acknowledged in casebooks. See, e.g., Sanford H. Kadish & Stephen J. Schulhofer, Criminal Law and Its Processes 860-80 (6th ed. 1995). n16. The necessity defense is part of the Model Penal Code, see Model Penal Code § 3.02, and has been incorporated into many state criminal codes, see Lawrence P. Tiffany & Carl A. Anderson, Legislating the Necessity Defense in Criminal Law, 52 Denv. L.J. 839 (1975) (examining how many states included the necessity defense when they recodified their criminal statutes). n17. See, e.g., Ky. Rev. Stat. Ann. § 503.030 (Michie 1985) (stating that "no justification can exist ... for an intentional homicide"); Mo. Rev. Stat. § 563.026 (1994) (stating that "conduct which would otherwise constitute any crime other than a class A felony or murder is justifiable and not criminal when it is necessary as an emergency measure to avoid an imminent public or private injury"); Wis. Stat. Ann. § 939.47 (West 1997-98) (stating that necessity "is a defense to a prosecution ... 
except that if the prosecution is for first-degree intentional homicide, the degree of the crime is reduced to 2nd-degree intentional homicide"); Regina v. Pommell, 2 Crim. App. 607, 608 (1995) (stating that the necessity defense does not apply to murder and attempted murder), cited in Alan Reed, Duress and Provocation as Excuses to Murder: Salutary Lessons from Recent Anglo-American Jurisprudence, 6 J. Transnat'l L. & Pol'y 51, 68 n.20 (1996). Those jurisdictions that limit the necessity defense to crimes other than killing face the following conundrum: A person who was provoked into killing by seeing his wife in bed with another man can have the charges reduced from murder to manslaughter if he is deemed to have acted as a reasonable man would have acted under a similar provocation. But a man who kills one person to save multiple lives faces conviction for first-degree murder. Such cases and statutes also contradict the general principle found in the Model Penal Code commentaries that "the defense is available [when] a person intentionally kills one person in order to save two or more." 1 Wayne R. LaFave & Austin W. Scott, Jr., Substantive Criminal Law § 5.4, at 632 (1986). n18. As Tiffany and Anderson conclude: The common law rejection [in Dudley] of the defense when the intentional killing of an innocent person was involved, appears now to be almost universally rejected itself. The most common statutory approach is to provide, merely, that if the other conditions of the defense are all satisfied, the actor's "conduct" is justified. Tiffany & Anderson, supra, at 860 (footnotes omitted). n19. The American Law Institute continued: For, recognizing that the sanctity of life has a supreme place in the hierarchy of values, it is nonetheless true that conduct that results in taking life may promote the very value sought to be protected by the law of homicide. 
Suppose, for example, that the actor makes a breach in a dike, knowing that this will inundate a farm, but taking the only course available to save a whole town. If he is charged with homicide of the inhabitants of the farm house, he can rightly point out that the object of the law of homicide is to save life, and that by his conduct he has effected a net saving of innocent lives. The life of every individual must be taken in such a case to be of equal value and the numerical preponderance in the lives saved compared to those sacrificed surely should establish legal justification for the act. So too, a mountaineer, roped to a companion who has fallen over a precipice, who holds on as long as possible but eventually cuts the rope, must certainly be granted the defense that he accelerated one death slightly but avoided the only alternative, the certain death of both. Although the view is not universally held that it is ethically preferable to take one innocent life than to have many lives lost, most persons probably think a net saving of lives is ethically warranted if the choice among lives to be saved is not unfair. Certainly the law should permit such a choice. Kadish & Schulhofer, supra, at 877-78 (quoting Model Penal Code § 3.02 commentary at 14-15 (1985)). n20. See also United States v. Lanier, 520 U.S. 259, 266 (1997) ("The canon of strict construction of criminal statutes, or rule of lenity, ensures fair warning by so resolving ambiguity in a criminal statute as to apply it only to conduct clearly covered."); Staples v. United States, 511 U.S. 600, 619 (1994) (noting that under the rule of lenity, an "ambiguous criminal statute" should be "construed in favor of the accused"). n21. Indeed, it is fair to say that few lawyers get through law school without discussing this conundrum and its numerous variations. Most law students read Dudley and Stephens and United States v. Holmes, 26 F. Cas. 360 (C.C.E.D. Pa. 1842) (No. 15,383). 
Many also study the writings of the great twentieth-century philosopher Robert Nozick, who, in 1974, constructed the following prescient hypotheticals: If someone picks up a third party and throws him at you down at the bottom of a deep well, the third party is innocent and a threat; had he chosen to launch himself at you in that trajectory he would be an aggressor. Even though the falling person would survive his fall onto you, may you use your ray gun to disintegrate the falling body before it crushes and kills you? Libertarian prohibitions are usually formulated so as to forbid using violence on innocent persons. But innocent threats, I think, are another matter to which different principles must apply. Thus, a full theory in this area also must formulate the different constraints on response to innocent threats. Further complications concern innocent shields of threats, those innocent persons who themselves are nonthreats but who are so situated that they will be damaged by the only means available for stopping the threat. Innocent persons strapped onto the front of the tanks of aggressors so that the tanks cannot be hit without also hitting them are innocent shields of threats. (Some uses of force on people to get at an aggressor do not act upon innocent shields of threats; for example, an aggressor's innocent child who is tortured in order to get the aggressor to stop wasn't shielding the parent.) May one knowingly injure innocent shields? If one may attack an aggressor and injure an innocent shield, may the innocent shield fight back in self-defense (supposing that he cannot move against or fight the aggressor)? Do we get two persons battling each other in self-defense? Similarly, if you use force against an innocent threat to you, do you thereby become an innocent threat to him, so that he may now justifiably use additional force against you (supposing that he can do this, yet cannot prevent his original threateningness)? 
Robert Nozick, Anarchy, State, and Utopia 34-35 (1974). Students have also debated the following hypothetical case: A doctor is experimenting with a deadly virus; the virus begins to spread (through no fault of the doctor); the only way to prevent the spread of the virus is to seal the room from which the doctor is trying to flee, thus dooming him. n22. Justice Easterbrook premises his decision largely on the assumption that these defendants implicitly consented to the decision ultimately taken and the conclusion that "society should recognize that agreement." See infra, at 1916 (Easterbrook, J.). The problem is that consent, even when explicit, has not always been accepted as a defense to willful killing, as evidenced by the ancient case of People v. Kevorkian, 527 N.W.2d 714 (Mich. 1994). n23. Indeed, under governing case law, his homicide was even premeditated because premeditation can occur in an instant. n24. There are, however, some who justify using organs of prisoners condemned to death, despite the reality that this might result in more executions for the sole purpose of using the prisoner's organs to save others' lives. n25. As Justice West states: There are currently a sizable number of citizens in this country awaiting organ donations, bone marrow replacements, and blood transfusions. The profound scarcity of such organs, bone marrow, and non-contaminated rare blood types is the sad reality that all such patients (as well as those of us who may at any point become such a patient) are forced to endure. That scarcity prompts incomparable anguish among the needy donees, and tortured decisions by medical personnel. Supra, at 1896 (West, J.). n26. Another important indicium that our legislature did not intend to include the type of necessity killing under the general prohibition against murder is that it failed to specify an appropriate punishment for this kind of tragic-choice killing. 
Surely it would be wrong for a judge to be empowered to punish our defendants as severely as a defendant who killed for profit, thrill, or hatred. From checker at panix.com Wed May 18 22:51:51 2005 From: checker at panix.com (Premise Checker) Date: Wed, 18 May 2005 18:51:51 -0400 (EDT) Subject: [Paleopsych] Nick Allen: From Mountains to Mythologies Message-ID: Nick Allen: From Mountains to Mythologies Key Informants on the History of Anthropology Ethnos, vol. 68:2, June 2003 (pp. 271-284) Oxford University, UK How did I become an anthropologist? The remoter beginnings must lie in family history. Several of my forebears served in India, most but not all of them in the army. My father was a Celtic numismatist, who started his career in the British Museum and, after many years as a civil servant, ended up administering the British Academy, first as Secretary (succeeding Mortimer Wheeler) and then as Treasurer (Turner 1976). He surely transmitted to me a love of research, albeit more by example than by words. Schooling In 1947 the family followed my father when he was sent to Hong Kong by the Ministry of Shipping, but it is difficult to say whether those two childhood years in the colony significantly influenced my eventual career choice. I assimilated little about Chinese culture but recall being impressed by van Loon's Story of Mankind, a present for my ninth birthday. Back in England, at prep school, I enjoyed climbing trees and being a Boy Scout. I then went to Rugby, where Rugby football was invented. The stress on physical fitness and outdoor activities continued--we were required to take exercise at least three times a week; and in addition to the team games I often played squash and rackets. Combined with the influence of my mother, who had serious climbing experience in the Alps, this laid the basis for my own climbing, which in turn was to lead me to the Himalayas. Looking back, I note the extraordinarily narrow and intense focus on Latin and Greek. 
A few O levels were rapidly disposed of in the first year at Rugby, and thereafter eyes were firmly fixed on an Oxbridge classical scholarship. To that end, year after year, we read the ancient texts and translated Macaulay or Churchill into Ciceronian Latin, Shakespeare or Metaphysical poets into Greek iambics or Ovidian elegiac couplets.1 We did not bother with A levels, and for five terms in the top form (the Upper Bench), I was taught very much like students beginning the Oxford course in Lit. Hum.: outside class we each spent an hour a week solo with the senior classics master in his study. He also took a group of us around Italy, and became a friend as well as a teacher. What a strange education it was! Elitist to a degree, single-sex (and thus lived in an atmosphere tinged with homosexuality), extraordinarily apolitical (I mean of course overtly), and so traditional that my only formal exposure to science might well have been one term on the amoeba and similar beings. If boarding school domesticity was oppressive, the Upper Bench represented spiritual liberation; and in our teacher's study, ranging beyond the great writers of prose and poetry, we listened to Beethoven string quartets and looked at art books.2 I was encouraged, and flourished. However, I came to feel that I wanted a wider perspective on life. I do not know how it happened. Relevant factors probably included my mother's brother (father of the anthropologist Alfred Gell) who, when studying at Cambridge, had changed from Moral Sciences to Medicine and ended up as Professor of Immunology and Fellow of the Royal Society (Silverstein n.d.); the embellished reminiscences in The Story of San Michele by the Swedish doctor, Axel Munthe; and rebellion against a prediction I heard that I was surely destined to become a lawyer or diplomat. 
Certainly, like many teenagers, I wondered how I could make the world a better place, and although I was not sure that doctors could do so, I saw that medicine would allow me to enlarge my intellectual range, and that, whether or not it was really to be my vocation, tactically it would be easy enough to make a case for the transfer. So after taking a scholarship to New College,3 I spent my last two terms at school studying elementary science with the thirteen-year-olds. Medicine in Britain did not at that time demand particularly high entrance qualifications, and after my first term at Oxford (1957), I was regarded as having caught up. I was thinking vaguely of some sort of medical research, and after an undistinguished performance in Finals I was lucky to be taken on for a year's research for a BSc in neurophysiology. This was both instructive and chastening. I loved thinking and reading about the brain and nervous system, but I did not enjoy doing experiments, surely because I was not good at it. Everyone assumed that I would proceed to the clinical course, and although I was less and less sure where I was heading, I duly transferred to St. Mary's Hospital in Paddington. After Oxford, the London environment was sordid and depressing, but I cannot simply say that I detested medical school; it was rather that I felt out of place. It was interesting talking to patients--sometimes heart-warming, and I liked gaining elementary acquaintance with vast new fields of knowledge and experience such as obstetrics and gynaecology (at last women were entering my interests - I had lacked both sisters and girl friends), meeting death at close quarters, and so on. But what was I doing? Ideas came and went. 
Perhaps I could become a dermatologist so that, having no medical emergencies, I could devote evenings and weekends to something more congenial (but what?); or an expedition doctor, so that I could travel (but to what end?); or, recalling my teenage idealist angst, a family planning expert, so that I could help save the world from over-population (but would I really want to look back on a life spent distributing pills and condoms?). I really had no idea what to aim for. I went for a long climbing season in the Alps, and did my first six-month job as junior hospital doctor. Anthropological Apprenticeship It was on the shelves of the uncle mentioned above that I happened across the History of Anthropology by Haddon (of the Torres Straits Expedition). Though I had once heard some anatomy lectures by the biological anthropologist Le Gros Clark, I knew nothing of anthropology as a discipline. But here I found a subject that somehow straddled the arts-science divide, that gave opportunities to indulge a love of languages and travel, that could lead to an academic career, above all perhaps (though I hardly realised this straightaway) allowed one the freedom to range widely across the intellectual world, across space, time and topic. At the least, one could hope in fieldwork to record what no one else had recorded or would record (this salvage aspect of ethnography remains important to me); and at best one could hope to make discoveries about humanity, or human nature, or the human mind - anyway something grand and inspiring. I wrote to my former moral tutor at New College, Geoffrey de Ste Croix, the Marxist classical historian, and he advised me to apply to the newly founded Linacre College. I am enormously grateful for that advice, if only because--to jump forward to 1967--it was there that I met my wife. But first I went to talk with Godfrey Lienhardt, who must have been in charge of admissions. 
I remember him reassuring me that most Institute students who wanted jobs in the subject seemed to get them. In my naivety, I was just slightly jolted: in that period of full employment, with the fairly easy life I had led, it had simply not occurred to me that I might fail to get the sort of job I wanted. I also lacked the social awareness to feel guilty at leaving behind studies that must have been well subsidised by others, and since I liked learning new things and could now if necessary fund myself by locum medical jobs, I was little worried by the charge of being a perpetual student. Around that time I found myself--still a hitch-hiker--sitting at a New College Gaudy next to a contemporary who was a bank manager with a Jaguar car and two children. I was no hippie, but I did feel that the life I was aiming for would be more interesting than his.4 So in 1965-66 I studied for the Diploma, the ancestor of the present-day MSc, being supervised throughout by Rodney Needham. The subject lived up to my hopes, and it was an exciting year, both intellectually and socially. Even now, 35 years on, I find it hard to present a balanced account of it, evaluating the personalities and ideas I encountered or failed to encounter. Needham himself, a curious and difficult person, was in some ways inspiring. He had a genuine respect for erudition and boldness of thought, whether expressed in English, French, German or Dutch, whether shown by anthropologists or orientalists or philosophers, and many of the enthusiasms he transmitted have remained with me. Equally important perhaps, many of the topics he disparaged or ignored remain among my limitations and weaknesses. One must also remember the period: no Marxism, unless one turned to Gluckman, no Said, no Gender Studies or Medical Anthropology as such, precious little Applied Anthropology, nil on China or Japan, and one could go on. The transition from medicine was not easy. 
Perhaps one day I shall dare to look back on my old essays as historical documents. They were always short, often unfinished, surely pretty awful, and my exam results were again disappointing. What of my other teachers? When Needham asked early on where I might like to do fieldwork, the first place that came to mind was the Himalayas, on which I had recently read a mountaineering book by Fosco Maraini. Having himself served in the Gurkhas, Needham welcomed the idea, and sent me to David Pocock for the optional paper on Indian Sociology. Pocock (from whom I first heard mention of the Beatles) seemed to me a somewhat peppery figure, one who did not suffer fools gladly. For Evans-Pritchard too I had slightly mixed feelings, though of a different kind. His lectures, published posthumously in his History of Anthropological Thought, were somewhat dry to listen to, and I knew too little intellectual history to appreciate their significance;5 moreover, I was never at ease when invited to join the drinking circle at the pub. On the other hand 'E-P', as he was called, could be extremely kind and encouraging. He teased me over the copper bangle he wore, which he claimed would do more for his arthritis than any medicines we quacks could prescribe, and he called me his 'Benjamin', affectionately, I supposed. John Beattie was sound and sensible, though I was left in little doubt by Needham that I should regard him as a boring old functionalist. Francis Huxley was stimulating and intriguing, with his psychoanalytic curiosity, but no doubt at the time I found Edwin Ardener too difficult - it was only later that I came to penetrate the flowery language and appreciate the insights beneath it. Peter Riviere, Wendy James and Bob Barnes had yet to join the Institute teaching staff, and the Lienhardts left little impression on me until they became colleagues. 
After the Diploma and another Alpine season, I returned to hospitals for a year to complete the requirements for medical registration and to make some money. A job in surgery and casualty at Banbury was a six-month ordeal, and confirmed, if confirmation was needed, that medicine was not my calling. On the other hand, six months in psychiatry at Edinburgh were truly enriching, even if I always felt that psychiatrists were working in the dark. I wrote to Needham proposing that I write a BLitt on the anthropology of the body, a topic that had scarcely yet become recognised, but he advised me, no doubt wisely, to concentrate on Himalayan ethnography in preparation for fieldwork. Soon after, E-P exerted himself to obtain for me a grant from the Nuffield Foundation, and although I was by now determined to proceed in one way or another, this was a great help. The pre-fieldwork thesis took me most of eighteen months. Apart from immersing myself in the ethnography and history of Nepal and neighbouring areas, I studied Nepali at SOAS, together with Alan Macfarlane and Pat Caplan. We were all to have money from a project organised by Fürer-Haimendorf, that urbane Austrian aristocrat with his roots in Vienna diffusionism, who in spite of a chair at SOAS energetically continued his fieldwork trips up to and beyond retirement. As for the writing, I had been interested in semantic change at least since reading Ullmann's Semantics in 1964, and the high point of my library research came when I found a Dravidian-type kinship terminology (Lall 1911) reported from near the north-west tip of Nepal but ignored in subsequent literature. This set me thinking about the history of kinship terminologies within the framework of the Tibeto-Burman language group, but it took me some years to realise that the hypothesis proposed in my BLitt thesis was wrong, being too cautious and too respectful of previous authority. 
Before leaving for fieldwork in 1969 I visited Paris and met among others Dumont, Pocock's predecessor at Oxford, and Sandy Macdonald, the philologically sophisticated anthropologist/Himalayanist from whom I was to learn a lot. I also dropped in, as one still can, to the Collège de France, where I heard Dumézil lecturing to an audience of three (I did not meet him personally until the 1980s). I was still climbing a good deal, and before starting the fieldwork was able to join some Outward Bound instructors on a climbing expedition in the Western Himalayas. Like many in that hippie period, we travelled overland to India, and we then managed the first ascent of an unclimbed 20,000 foot summit--of course by Himalayan standards a mere pimple. We were a joint Indo-British expedition, so my first experience of India was of a successful collaborative undertaking. After the climbing I moved on to Nepal by myself. I have written elsewhere (2000a) about the twenty months of fieldwork that followed, so I shall be brief. I chose a Tibeto-Burman group from the Middle Hills, who were essentially unstudied, and settled in a peasant household. I was supposed to be focusing on social change, which I interpreted as meaning that I must learn all I could about the old traditions that were disappearing--the salvage orientation again. This meant using my Nepali to study the unwritten tribal language, and using that in turn to study the tribal mythology and ritual that were being absorbed into the Nepalese style of Hinduism (Allen 1997). Having already digested the regional literature, and being fairly hardened to feeling myself an outsider, I experienced little culture shock. The details that come to mind are trivial: the first time I ate with my fingers, the difficulty of sitting cross-legged for long periods, the lack of distinction between the morning and evening meals - if breakfast was rice and dal, so was dinner. 
More deep-lying was the embarrassment I felt when invited for tea to the house of a Blacksmith untouchable. I could not compromise the purity of my kindly higher-caste landlord, but hated having to refuse. My medical training had included no experience of general practice and, in the absence of nurses and pathology laboratories, turned out to be of little help. I was haunted for many months, as many other students must have been, by the sense that nothing I was learning was of sufficient academic interest to add up to a DPhil, and indeed progress during the first year was unsatisfactory. It was only then that a trip to Kathmandu gave me access to work by the Summer Institute of Linguistics on the phonology of other unwritten languages in Nepal, and showed me how I ought to have been tackling Thulung. Apart from pointing me towards mythology and fourfold social structures, the Thulung reinforced my awareness of fieldwork as a dip in the river of social, cultural and environmental history. Thus I was never tempted to essay a synchronic account of their life, much of which I inwardly dismissed as 'boring' Hills Hinduism. Returning to Oxford was academically harder than leaving it had been, and writing up was at first desperately, almost paralytically slow. My first paper, on 'vertical classification', responded to Needham's teaching; at Canterbury I gave a paper on anthropology and medicine, together with others on ritual, but although it would have been an obvious path to follow, I could not enthuse about medical anthropology. In those days we were mercifully free from the contemporary pressure to finish doctorates in four years, and I took my full seven. Meanwhile I got married, and was interviewed for an anthropology teaching job at Newcastle. Fortunately, I was not offered it, and soon after a better one came up at Durham (following an initiative by Lucy Mair). 
It was at Durham that I wrote an account of the Thulung language and my doctoral thesis on Thulung mythology, as well as a few papers, including a couple on kinship terminologies which derived from my BLitt. I was already seeing myself less as a descriptive ethnographer than as a Tibeto-Burman cultural comparativist, and as a teacher I enjoyed initiating a course on Anthropology and Language. Quarter Century at the Oxford Institute I could happily have stayed on in Durham, but in 1975 my father died, just around the time when Ravindra Jain resigned to return to India. Ravi was Pocock's successor as India specialist at the Oxford Institute, and the post was advertised. I was quite doubtful about applying, and had a long heart-to-heart talk with David Brooks, another Institute student who was teaching anthropology at Durham. On the one hand I loved Oxford, with its easy bicycling and old masonry, and it would be good to be near my widowed mother--as it has been and still is. Nor did I underrate the privilege of interacting with Oxford students or of being in such an international centre of research. On the other hand, I foresaw difficulties in crossing the gap from student to colleague. In particular, I had puzzled a good deal over my supervisor's Belief, Language and Experience and concluded that it was open to severe criticism; but Needham did not like criticism from anyone--let alone a former student. Eventually I applied, but my hesitation at first seemed justified. During my first term Needham was appointed Professor, and the next academic year was tense. I shall not analyse the conflicts that divided us, but as junior member of staff I had to write the minutes of one particularly strained staff meeting. The minutes were sent to the relevant authorities, the Proctors, who wrote back noting that the University Statutes did not oblige the Professor of Social Anthropology to be Head of Department. 
The Professor packed his books, moved off to All Souls, and minimised dealings with colleagues. Edwin Ardener became Chairman of the Department, and the position devolved on the rest of us according to seniority, reaching me in the last year or two before the arrival of John Davis in 1990. After the initial difficulties Oxford has proved a good environment in which to develop in one's own way. In my research I continued for a while as a Himalayanist, studying Tibetan with Michael Aris, reviewing books, publishing bits and pieces, mostly on myth. Perhaps the most interesting paper (1981) explored the connection between landscape and bodies: again and again in this region we find tracts of territory across which particular shrines or areas are linked one-for-one with the body parts of supernatural beings. Meanwhile, when sabbatical leave at last allowed it, I planned a new spell of fieldwork, which I hoped would take me to Tibeto-Burman speakers in the eastern Himalayas, but ended up being in the west, some eight hours from Simla. This too I wrote about in 2000a. By the mid 1980s I was becoming over-extended, spreading myself too thinly. My general reading in anthropology lagged far behind what I felt to be incumbent on a teacher. Apart from Institute administration, I was Vicegerent at Wolfson College for two years, and my research interests were moving in several directions at once. Ethnographically, I now had behind me two spells of fieldwork, neither adequately published, which interlinked much less than I had hoped. Theoretically, I had retained my interest in the formal study of kinship, but I now wanted to transcend the Tibeto-Burmans or Sino-Tibetans and work at the level of world history, returning to questions that Morgan posed but answered wrongly. Moreover, ever since Needham had recommended me to read Mauss's Sociologie et Anthropologie, I had been fascinated by the workings of that outstanding anthropological mind. 
I had bought his Oeuvres in the 1970s and felt obscurely that far more could be made of them than had been made so far: this eventually led to Carrithers et al. (1985), and later, under the stimulus of Bill Pickering and David Parkin, to James & Allen (1998) and Allen (2000b). My post at Oxford was tied to India, not the Himalayas, and following the example of Dumont, and of Mauss before him, I wanted to learn Sanskrit. Acquiring Coulson's Teach Yourself volume, I worked through the exercises twice, in different summer vacations, much helped by knowing some Nepali and Hindi. But this opened up not only Indology--itself vast enough--but the even vaster field of Indo-European studies. Here too Needham had sowed seeds by encouraging me to read Dumézil, and although during the Diploma year I found the subject matter far too difficult, I sensed the grandeur of Dumézil's vision and the quality of his erudition, and started buying his books (I now have more than twenty). I tried to carry on all these interests simultaneously, but something had to give, and it was the Himalayan work (though I still intend to publish my doctoral thesis). The three other types of research interlink at least to some extent, but the West Himalayan experience stands apart and fell by the wayside. Regrettably too, though I reviewed a fair number of current publications on South Asia, my general anthropology remained patchy--some might say lamentably so, and I relied heavily on what I heard in seminars or came across in other contexts. Without any conscious planning, my interests developed roughly as had those of Lévi-Strauss--kinship, then modes of thought (classification etc.), then mythology, which seem to range in that order along the science-arts continuum. First, then, some remarks on kinship. 
When Lévi-Strauss saw cross-cousin marriage as expressing a relationship between entities that exchange women, he was focusing on the intragenerational dimension; but the intergenerational dimension can be treated similarly: thorough assimilation of alternate generations expresses a relationship between entities that exchange children. Indeed, to generate what are truly elementary structures of kinship, one must combine these two exchange relationships in their simplest (binary) form. The result is a model kinship system of maximal logical simplicity, one from which any attested structure can be generated by transformations such as are arguably both historically and semantically plausible (Allen 1998a).6 It was only well after I first proposed the theory that I saw its possible bearing on ritual: if weddings dramatise intragenerational exchange, did not initiation originally dramatise intergenerational exchange? In any case, 'tetradic theory', as I called it, is hopefully a contribution to understanding human origins. Although I do not think it has been refuted, rather few colleagues have referred to it (exceptions include Per Hage, Wendy James and Irina Kozhanovskaya), but this may be because the sort of question it asks is (still?) deeply unfashionable within Anglophone social anthropology, and my attempts to publicise it elsewhere have been inadequate. No doubt the theory ought to be presented in the light of the substantial literature on kinship and social origins, but perhaps that task is best left to others, less committed to my particular point of view and less impatient with what seem to me the cross-purposes and confusions of the past.7 Guided by Dumézil and Biardeau, I saw fairly early on that an Indo-European8 approach to the Hindu world would need to take off from the Mahābhārata, not the Vedas, but it was only in the later 1980s that my own take-off began. 
The starting point was a comparison which Dumézil in his mature work had regarded as unpromising, namely that with ancient Greece. One part of the career of Arjuna, the central hero of the Sanskrit epic, turned out to parallel one part of the career of Homer's Odysseus, in such a detailed and well structured way that the two narratives must have a common origin. Some years later a family bicycling holiday in Ireland prompted me to explore early Irish narratives, and I recall my joy at finding the same narrative pattern in the career of Cúchulainn (Allen 2000c). This might seem merely a matter of literary history but, quite apart from the extra pleasure and understanding it can give to a reading of the texts, it has many culture-historical and sociological ramifications. For instance, it can help one try to rethink the significance of the Vedas, the interplay of Aryan and non-Aryan, the history of yoga. Moreover, Dumézil had already seen in effect that the proto-Indo-European speakers had a form of primitive classification, in the sense of Durkheim and Mauss, but the form he proposed was too compressed.9 Once the compression is corrected, many topics take on new aspects, including Indian theories of kingship, space, substance and society. Dumont's binary structuralist interpretation of caste turns out to be a partial view of a structure that is larger and more complex (Allen 1999). More generally, one sees that, again and again, independent invention has been given the credit that in fact belongs to common origin. I shall just mention three other areas that I have touched on within the field of Indo-European cultural comparativism (1998b, 2000d, 2002). Firstly, if the Zuñi and the Chinese included colours within their forms of the primitive classification, what of the early Indo-European speakers? 
The question led me not only to the variously coloured four horsemen of the Apocalypse, but also to what I see as a survival of the old pattern in an Arthurian narrative told in twelfth-century French by Chrétien de Troyes. This was all great fun.10 Secondly, the subfield of Indo-Iranian comparativism has usually focused on the literate religions--Zoroastrianism in Iran, Vedic Hinduism in India--but the mountain peoples of north-east Afghanistan, not Islamised until the 1890s, have interesting data to contribute, bearing for instance on the articulation of cosmic time. Soon afterwards I found, virtually by accident, and entirely to my surprise, that the Buddha's life-story partly follows the same pattern as the epic heroes with whom I was familiar.

Retirement

It will probably be obvious by now why in 2001 I retired early--though not very early, since I was 62. I miss the active day-to-day involvement with the young minds both of students and new colleagues, but the break is not total; valued attachments continue with both Wolfson College and the Institute. Fundamentally, I have to look at my life as a whole. Although I have published around sixty papers (and eighty reviews) and, with various degrees of justification, find my name on the covers of a number of academic works, the gap between what I have published and what I would like to publish has not been declining, but rather growing alarmingly, and I was impatient to tackle the gap while I still had the health and energy (my father was 65 when he died). Also, though I may be flattering myself here, perhaps writing now offers my best chance of repaying the debts I have contracted to society (that into which I was socialised), and to that subsociety which is the University. Immediately on retirement, my wife and I left for a peaceful and refreshing year in India. 
Half was spent in the semirural university founded by Rabindranath Tagore in West Bengal, half in the crowded and polluted city of Pune, probably the leading Indian centre for Sanskrit studies. I returned with yet more drafts, a somewhat better knowledge of Sanskrit, new friends, and a broader knowledge of the country than when I taught about it. So, recalling my 1972 paper, I see retirement less in terms of stepping 'down' than of stepping up--into an indefinite sabbatical. Here then is a list of topics that I hope to explore. It is not complete: for instance, it omits certain topics related to Scandinavia and mediaeval France, on which I have only the germs of ideas; and one can still hope for a few new ideas, as yet unforeseen.

1. The Mahābhārata battle compared with the Trojan War.
2. Mahābhārata-Odyssey comparisons.
3. Biography of the Buddha.
4. Roman pseudo-history.11
5. Early Irish literature in the light of Indo-European comparison.
6. Plato (whose thinking is far more 'Indo-European' than Aristotle's).

Given the record so far, my chances of covering all six topics at book length are slim; still, in all cases I can build on either published articles or unpublished drafts, so the aspirations may not be totally unrealistic. We shall see. Obviously I am no Dumézil, either as regards command of the primary and secondary literature, or as regards fluency as a writer, but it is encouraging to reflect that, after retiring aged seventy, that astonishing scholar published more than a dozen books. I hope that this work on myth and epic, despite its philological aspect, will be regarded as anthropology; but it may eventually find its place within some other discipline, perhaps even one as yet unrecognised. 
In any case, even if in a way I have returned to the Latin and Greek from which I started, it is certainly to anthropology that I owe the theoretical insights, the comparative range and, above all, the freedom of thought that the narrower philologies themselves would hardly have afforded. The emphasis of the last few pages has deliberately been on ideas, abstracted--awkwardly perhaps, to anthropologists--from the professional and social context in which they were elaborated. What of the students, colleagues, friends and family, both in the UK and elsewhere, who contributed either to the elaboration or to the reasonably tranquil state of mind that made it possible? Lists of acknowledgements, however heartfelt, can be dry as well as invidious, so I close by expressing a hope directed at the students, of whatever discipline, who will carry on the long tradition of scholarly curiosity about humanity: may you not be too much trammelled by short-term pressures. My Sanskrit primer is inscribed 'summer 1977', but it took me sixteen years to publish a paper that made use of the language, and I have still to publish drafts that use it to greater effect.

Notes

1. In preparation for exams where one handed in only twelve lines of verse in a classical language.
2. A post-retirement book (Saunders 1967) gives a good idea of our teacher's taste but not of the freshness with which he conveyed it to a seventeen-year-old. Sociology is absent, and so is the Orient, save via Fitzgerald's translation of the Rubáiyát of Omar Khayyám.
3. I was runner-up to Edward Hussey, now at All Souls College, Oxford, an expert in the Presocratics.
4. The prolonged period as a student accustomed me to a relatively thrifty lifestyle, but it scarcely politicised me. The single day I committed to an Aldermaston (for UK nuclear disarmament) March indicates the direction of my sympathies, then and now, but also my reluctance to spend much time on such murky issues. 
In general, any efforts I made to synthesise a coherent personal ideology were scrappy, spasmodic and provisional; it was easier to luxuriate in internal multi-vocality.
5. Paine describes the lectures as 'disconcertingly "flat"' (1998:136), and Beidelman is no more enthusiastic (1998:282). I was not conscious then, as I am now, of disagreeing with E-P's views on Durkheim and the anthropology of religion.
6. I was at first disconcerted that this model has no place for the familiar genealogical levels or generations, but those egocentric categories, whose membership ages and dies off, are here subsumed by the sociocentric generation moieties, which are constantly replenished.
7. The most enjoyable spin-offs from the kinship work were the opportunities to meet academics in Russia for the first time (1992) and to serve on the jury for the magnificent doctorat d'état by François Héran (1993).
8. The Mahābhārata contains a short version of the other Sanskrit epic, the Rāmāyana.
9. The need to expand Dumézil's schema was recognised by the Rees brothers (1961, a book which my father passed on to me later in that decade), but their argument was neglected.
10. Serious fun, of course, even adventure--as research can be, perhaps should be. Like a rock climber, a comparativist must take some risks.
11. Dumézil showed how much Roman 'history', from Romulus down to at least Camillus in the fourth century BC, is historicised Indo-European myth, but he did not exhaust the topic (Allen n.d.).

References

Allen, Nicholas J. 1972. The Vertical Dimension in Thulung Classification. Journal of the Anthropological Society of Oxford, 3(2):81-94.
--. 1981. The Thulung Myth of the Bhume Sites and Some Indo-Tibetan Comparisons. In Asian Highland Societies: In Anthropological Perspective, edited by Christoph von Fürer-Haimendorf, pp. 168-182. New Delhi: Sterling.
--. 1997. Hinduization: the Experience of the Thulung Rai. 
In Nationalism and Ethnicity in a Hindu Kingdom: The Politics of Culture in Contemporary Nepal, edited by David N. Gellner, Joanna Pfaff-Czarnecka & John Whelpton, pp. 303-323. Amsterdam: Harwood.
--. 1998a. The Prehistory of Dravidian-type Terminologies. In Transformations of Kinship, edited by Maurice Godelier, Thomas R. Trautmann & Franklin E. Tjon Sie Fat, pp. 314-331. Washington: Smithsonian Institution.
--. 1998b. Varnas, Colours and Functions: Expanding Dumézil's Schema. Zeitschrift für Religionswissenschaft, 6:163-177.
--. 1999. Hinduism, Structuralism and Dumézil. In Miscellanea Indo-Europea, edited by Edgar C. Polomé, pp. 241-260 [Journal of Indo-European Studies Monograph No. 33]. Washington: Institute for the Study of Man.
--. 2000a. The Field and the Desk: Choices and Linkages. In Anthropologists in a Wider World: Essays on Field Research, edited by Paul Dresch, Wendy James & David Parkin, pp. 243-257. Oxford: Berghahn.
--. 2000b. Categories and Classifications: Maussian Reflections on the Social. Oxford: Berghahn.
--. 2000c. Cúchulainn's Women and some Indo-European Comparisons. Emania, 18:57-64.
--. 2000d. Imra, Pentads and Catastrophes. Ollodagos, 14:278-308.
--. 2002. The Stockmen and the Disciples. Journal of Indo-European Studies, 30:27-40.
--. n.d. The Indra-Tullus comparison. To appear in Indo-European Language and Culture: Essays in Memory of Edgar C. Polomé [special issue of General Linguistics 40], edited by Bridget Drinka & Joseph Salmons.
Beidelman, T.O. 1998. Marking Time: Becoming an Anthropologist. Ethnos, 63:273-296.
Carrithers, Michael, Steven Collins, & Steven Lukes (eds). 1985. The Category of the Person: Anthropology, Philosophy, History. Cambridge: University Press.
Coulson, Michael. 1976. Sanskrit: An Introduction to the Classical Language. [Teach Yourself Books]. London: Hodder & Stoughton.
Evans-Pritchard, Edward E. 1981. A History of Anthropological Thought, edited by André Singer. London: Faber.
Haddon, Alfred C. 1934. 
History of Anthropology. London: Watts.
Héran, François. 1993. Figures et Légendes de la Parenté. Paris: Institut National des Etudes Démographiques.
James, Wendy & Nicholas J. Allen (eds). 1998. Marcel Mauss: A Centenary Tribute. Oxford: Berghahn.
Lall, Panna. 1911. An Enquiry into the Birth and Marriage Customs of the Khasias and the Bhotias of Almora District, UP. Indian Antiquary, 40:190-198.
Maraini, Fosco (trans.) 1961. Karakoram: The Ascent of Gasherbrum IV. London: Hutchinson.
Mauss, Marcel. 1973 (1950). Anthropologie et sociologie. Paris: Presses Universitaires.
--. 1968-69. Oeuvres (tomes i-iii). Paris: Minuit.
Munthe, Axel. 1927. The Story of San Michele. London: Murray.
Needham, Rodney. 1972. Belief, Language and Experience. Oxford: Blackwell.
--. (ed.). 1973. Right and Left: Essays on Dual Symbolic Classification. Chicago: University Press.
Paine, Robert. 1998. By Chance, by Choice: A Personal Memoir. Ethnos, 63(1):134-154.
Rees, Alwyn & Brinley Rees. 1961. Celtic Heritage. London: Thames and Hudson.
Saunders, A. Norman W. 1967. Imagination All Compact: Understanding the Arts. London: Methuen.
Silverstein, Arthur M. n.d. Philip George Houthem Gell, 1914-2001. To appear in Biographical Memoirs of the Royal Society.
Turner, Eric G. 1976. Derek Fortrose Allen, 1910-1975. Proceedings of the British Academy, 62:435-457.
Ullmann, Stephen. 1962. Semantics: An Introduction to the Science of Meaning. Oxford: Blackwell.
van Loon, Hendrik W. 1948 [1922]. The Story of Mankind. London: Harrap.

From checker at panix.com Wed May 18 22:53:30 2005
From: checker at panix.com (Premise Checker)
Date: Wed, 18 May 2005 18:53:30 -0400 (EDT)
Subject: [Paleopsych] R.A. Sharpe: Philosophical Pluralism
Message-ID:

Philosophical Pluralism*
Review Discussion
R.A. Sharpe
University of Wales, Lampeter
Inquiry 42: 129-41 (prob. 
1989)

* Avrum Stroll, Sketches of Landscapes: Philosophy by Example (Cambridge, MA/London: Bradford Books, MIT Press, 1998), xiv + 282 pp., ISBN 0-262-19391-4, £29.50.

Avrum Stroll's new book is pluralist. He repudiates overarching theory or the single conceptual model in favour of the description of what he sees as a varied, un-unified landscape of concepts, and in his perambulations through this countryside he touches on topics as varied as scepticism, reference in fiction and reference to natural kinds, Platonism and more, discussions unified more by a common outlook than by any single theme. The common perspective is that he repudiates the 'master-key' approach. He is struck by the diversity of philosophical problems and the diversity of the answers to them, and in this his approach is somewhat Wittgensteinian. He calls it 'piecemeal'. The opening discussion is a fairly lengthy examination of scepticism. Professor Stroll applies the diallelus or 'wheel' argument widely. The gist of the diallelus is this. If we are to know how things really are as opposed to how they appear to us, we need a criterion to distinguish the apparent from the real. But of any criterion which may be offered we can ask 'Does this criterion succeed in distinguishing the apparent from the real?' To know that it succeeds is already to be able to distinguish, and this assumes we are in prior possession of a criterion for distinguishing seeming from reality. The search for criteria looks to be viciously circular. Stroll extends this argument to beliefs as well as to appearances; then the same objection applies: any criterion for distinguishing true beliefs from false assumes that we already have a means of detecting which beliefs are true and which not, so we need to be already in possession of the criterion we seek. Stroll links the discussion of criteria for beliefs with the use of criteria in grading. 
The suggestion that we might be able to grade beliefs as we grade, say, fruit goes back at least as far as Descartes's introduction, in his objections and replies on Meditation One, of the apple barrel metaphor. Remember that the idea was that we should sort through beliefs, casting out those which did not come up to scratch. Stroll does not challenge the analogy. However, there are reasons for supposing the apple barrel an ill-judged metaphor. First, what sense does it make to speak of beliefs as good or bad? Stroll seems to think there are no problems, but it is clear that we do not ordinarily speak this way. The very idea of sorting through beliefs, dividing them into good and bad beliefs, is bizarre. For to identify a belief as a bad belief (assuming this is an epistemological matter) is already to make it impossible to believe it. If I see that it lacks evidential support etc., then I will not believe it (ceteris paribus). (If the badness of a belief is a moral matter rather than an epistemological one, then, I suppose, we may think a belief to have morally objectionable implications but still believe it because we think it true; a racist might think another race both intellectually and morally inferior, recognize that this view has regrettable consequences as far as their treatment is concerned but, with a sigh, register it as a sad but unavoidable fact. In such a case the proposition might be both taken to be bad and believable.) The second reason why this model is unhelpful is that the holism of beliefs makes it impossible to weed out beliefs one at a time. Of course, in one respect, the apple barrel metaphor is not so inapposite; it suggests that one bad belief infects the others; and it is true that if I have one wrong belief I will have others, and that the removal of this false belief has knock-on effects. (Though in as much as the metaphor suggests this is a temporal process it is misleading.) 
If I come to believe that the computer model for the mind is mistaken, then all sorts of things follow. I do not just relinquish this belief; I will also cease to believe in the claims of 'cognitive science' about many other matters. Still the analogy fails in as much as I do not scrutinize these connected beliefs separately. (It is doubtful whether Descartes thought of the matter in this way; wrongly, he thought of beliefs as items which can be separately scrutinized, which suggests he had some other view of the infecting process.) So, given that the criterion for relinquishing a belief is that you now either have reason to believe that it is false or you see that there is no reason to believe it true, the analogy with apples does not run through. For the content of the belief is intimately connected with the change. In the first, stronger case, you see that the propositional content of the belief is false. So the grounds for rejecting it are necessarily connected with your belief. Once you see that it is false, necessarily you cannot believe it (barring irrationality and double-think etc.). It ceases to be a proposition you believe. It is, then, no longer one of your beliefs. What you believe changes from it to its contrary or contradictory. As we see, in the central sense of 'belief', beliefs are no longer in existence if they are not believed. But the criteria for goodness in an apple--firmness, texture, and taste--are independent of the fact that it is an apple. If it fails these criteria it does not cease to be an apple; it is merely a bad apple, and it might, unfortunately, be your apple. The dissimilarities go further, of course. I have spoken of 'my beliefs', with the implication that a belief ceases to be one of my beliefs once I see it to be false or groundless. But others might continue to believe this proposition--even where I try to dissuade them. The purchase of an apple precludes its purchase by somebody else. 
Why should the analogy with apples have seemed a good parallel to anybody? Perhaps the fact that 'belief' is sometimes taken to be equivalent to 'proposition' lends Descartes's argument an unmerited degree of plausibility. For propositions continue to exist when they are not being entertained and, if beliefs are taken in that sense, the objection that, normally, negative grading produces extinction will not apply. To return to the diallelus, the assumption is that criteria need to be justified. But there seems no more reason to believe this than to assume that justifications in general have themselves to be always justified--a move which is evidently regressive. As Wittgenstein famously observed, 'justifications have to stop somewhere' (the precise reverse of an equally famous dictum of Charles Sanders Peirce). There is reason for supposing Wittgenstein right; criteria do not generally invite the regressive move; primarily there is the observation that if the criterion of a good apple is that it is crisp and has a strong slightly acidic flavour, we seem under no pressure to answer the question, 'But how do you know that this criterion is a right one or a good one?'1 It just is the one commonly used--or at the very least, the one used by the speaker. If pushed, I would invite people to taste certain apples, but then, if they did not agree, I might conclude that they had defective taste buds or whatever. The moves are familiar in aesthetics. In as much as the judgment of an apple ultimately depends upon preference, there is no case for supposing it potentially regressive in the way that epistemic notions are, rightly or wrongly, imagined to be. A point to notice is that there are people who are more or less competent in grading. A sommelier might reject a wine for reasons not obvious to me. But it would be odd, and perhaps implausible, for me to question the criteria used. 
Doubts about his judgment would not usually present themselves as invitations to him to 'justify his criteria'. For what more could be required than the recognition that he is qualified in this area. David Hume memorably summarized the requirements for a good judge in his 'Of the Standard of Taste': 'strong sense, united to delicate sentiment, improved by practice, perfected by comparison and cleared of all prejudice.' This does not, of course, preclude the assessor making a bad judgment, say in making a judgment which is rejected by all his peers, and a number of misjudgments will lead us to conclude that he is not an expert. Let's return to the analogy between apples and beliefs. There are experts in grading apples. Are there also experts in settling beliefs? Part of the history of epistemology has been premised on the assumption that people could improve their capacities in this in a general sort of way, mainly by a training in logic. I am somewhat sceptical. The fact is that the imprudent acquiescence in a belief is usually a result of unfamiliarity with the area and not a general failing. It is not usually an inability to employ modus ponens that is the problem. A man who has a symptom which he knows to be one of the symptoms of something serious like cancer may jump to the conclusion that he has that disease, where an expert, aware of the fact that any one symptom can indicate any one of a number of conditions or nothing very serious at all, will be far more cautious. If my car fails to start I may leap to the conclusion that it must be the coil because it was that which caused it to fail to start previously; but a mechanic, aware of the various factors which prevent a car from starting, might carry out further tests. It would be consistent with Stroll's general position to acknowledge this. The dissimilarity between grading apples and grading beliefs is this. There are experts in grading apples but there are no experts in grading beliefs. 
There are experts in grading beliefs according to their implausibility or plausibility in various areas. There are experts on oncology, car mechanics, the history of music, performance practice, the history of philosophy etc., but there are no experts on beliefs per se, and the idea that philosophy might produce these is absurd. For to be good in any of these areas is to be aware of the range of pros and cons. If a philosophy teacher marks an essay on his special field, he can do so because he is aware of the arguments which are available against even the most plausible-seeming position, the thesis which seems innocuous to the student. But in this respect he does not differ from the historian of the industrial revolution who also knows that what looks obvious has been questioned. Those of us who teach a subject like aesthetics know how ready teachers of English are to propound a theory unaware that the theory itself has a history of defence and refutation. Stroll is not, I think, either Wittgensteinian enough or pluralist enough both here and, as I shall argue, elsewhere. Neither does Stroll seem to have considered that a justification is essentially context-bound. We ask for justifications in situations where the claims made seem controvertible. This connects with a more basic and perhaps more significant mistake. The error is to think of 'criteria for belief' as though this is a matter to be settled at a general level. It is not even clear that such a notion makes any sense. Of course, one might present the anodyne 'Believe what you have good evidence for' as a maxim, but this manages to be both boring and dangerous. Not only does it not register the way beliefs connect with loyalties and commitments, but observing it, were it possible, would be destructive. Do I trust my friends only as far as the evidence supports that trust? 
In such cases the interconnection between my belief that my friend is reliable and my moral commitment to him makes the matter of my assessment of any evidence that he is untrustworthy a much more complex matter than merely looking at evidence. Evidence that I would take at face value if he were somebody else, I may not accept because he is my friend. I may look for other and more generous explanations of his behaviour. Stroll is well aware of cases where no justification for a belief claim is required. His examples of what we might know but cannot justify are cases like somebody with perfect pitch identifying a note, or an idiot savant coming up with the square root of 1,973. Consider some other cases. I know what my name is, straight off; admittedly, there might be contexts in which I might be called upon to produce evidence; I might be called on to produce evidence if fraud is a possibility, as when I am getting a new passport or opening a bank account. A hospital doctor might ask me for evidence that I am who I am if I am being admitted to casualty because of a bang on the head. Evidence would not normally be called for when introducing myself at a party. The requirement to produce evidence there suggests that your host is a maniac or a joker, or a philosophy undergraduate, or that this is a party where unwelcome gate-crashers are expected. The point is, surely, that context rules, and I am not sure that Stroll makes enough of it. I might be required to produce evidence that the earth has existed for millions of years in the Bible Belt, whereas this is not a challenge I would normally expect. This connects with the discussion of Wittgenstein and Moore on certainty. Stroll considers G. E. Moore's attack on scepticism. Unexpectedly, he takes Moore to be asserting what he calls a 'private statement' in claiming that he knows that the world existed long before he was born. The claim is initially puzzling. 
Does Stroll have in mind a case like a sense-datum judgment, where privileged access means that I, and only I, am the authority for the claim? This looks the obvious model for a 'private statement' should one exist. But this is quite unlike the examples offered by Moore. Stroll goes on to say that rather than a justification being 'silent', there seems no place for one at all in this case. (This he takes to be the gist of Wittgenstein's remarks at On Certainty 84.) A silent or private justification is impossible because of the publicity of the notion of justification (p. 21); on the basis of this he goes on to argue that there cannot be relevant evidence for the judgment if it is private. Stroll certainly misunderstands Wittgenstein here, because Wittgenstein would not have levelled that objection at Moore, as Stroll seems to think. The context, I think, makes it clear that, when Wittgenstein says that it is of no interest what particular things Moore knows but how these are known, it is not the question of justification to which Wittgenstein alludes so much as the fact, as he goes on to stress, that these are things we all know but cannot say how. The issue is a general one. In one of those rhetorical questions which needs to be taken as an assertion, Wittgenstein goes on to say that this is a truism which is never called into question (OC 87); it is part of our 'frame of reference' (OC 83). The presentation suggests that Stroll thinks that this has something to do with the private-language argument (p. 21) but, if he does, he is wrong. Note, as well, that this example is different from the sorts of example which Stroll himself uses, that I see straight off that one shade is darker than another. Stroll is correct in what he says about this, but it is not the sort of example Wittgenstein discusses in this passage. 
Wittgenstein is concerned with judgments which form a sort of bedrock, such as that the earth is round or that it has existed for a very long time, but these beliefs might not be shared by other cultures. The second part of the discussion of scepticism attempts to unpin the Cartesian dream argument. Stroll begins with the observation that Descartes's presentation assumes that he knows, on occasion, that he is not asleep (p. 28); Stroll thinks this undermines the argument. For the argument proceeds from the premise that sometimes I have been asleep and dreamt that I was awake to the conclusion that, at any time, I might be dreaming, unbeknown to myself.2 Stroll's is what Curley called the 'procedural argument'--that if the conclusion is true Descartes could not assert its premise--and it is a very familiar objection to the Cartesian argument from dreaming. But, as Curley argues, it surely is enough cause for concern that the conclusion of the dream argument proceeds validly from premises we believed on reflection to be true. Descartes is concerned with the possibility that he thinks that he is awake but is in fact asleep (something he later discovers). At any point he may think he is awake but not be. The sceptical thrust is unblunted and it preserves its point in the face of Stroll's argument that 'dwelling carefully', which we are called on to do, is not something which can intelligibly be done whilst asleep (p. 30). Certainly if I do this, as opposed to dreaming that I do it, I am not asleep, but the possibility that I dream that I dwell carefully is not ruled out. This is a close relative of the familiar point that I cannot assert whilst I am asleep, which does not, of course, rule out the possibility that I dream that I assert. So Bernard Williams's objection that Descartes seems to think that in dreaming3 we make rational judgments but on the wrong basis identifies a mistake in Descartes's presentation, but not one which has any purchase on the sceptical force of it. If I dream that I make rational judgments rather than make rational judgments on the basis of what I dream, the sceptical argument can still carry through. The problem with Stroll's presentation is that he seems not to grasp the sceptical method Descartes uses; it is, like so many forms of scepticism, a matter of beginning with the assumption that we know and then showing, by argument, that our confidence is misplaced. It is an assemblage of familiar considerations which leads to scepticism. The argument is a reductio. I know that I am awake but, since I have sometimes been mistaken about this, I have to allow the possibility that I am now asleep and that the judgments I now make are not judgments or assertions at all--rather that I dream that I judge that I am awake. What mistake Descartes makes is concealed in the thesis that there are no certain signs that I am awake rather than dreaming. For the situation is not symmetrical. Awake, I know I am not dreaming and there are tests I can apply, such as pinching myself. Kenny said that in a lecture J. L. Austin claimed there were fifty such indications. One of these is described by Descartes himself: the continuity of waking life as opposed to the discontinuity of dreaming. There are, then, tests which establish that I am awake, providing one does not demand, as Descartes presumably would, unreasonably high standards for 'certain'. There are no tests which I can apply which establish that I am asleep for, in sleep, I can make no judgments. There follows a pair of essays critical of Putnam and Kripke's well-known account of the meaning of natural-kind terms. Stroll's counters are so ingenuous that it is hard to understand why they have not been considered before. (Perhaps they have, but Stroll does not cite any previous sources and the criticisms are certainly new to me.) The thesis that water is identical with H2O and that 'water' 
both means H2O and meant H2O before that fact was discovered is open to the following objections. 1. First of all, water is not identical with H2O. There are various isotopes of water. Pure water is a mixture of molecules other than H2O, such as D2O, HDO and T2O, which have different boiling points and masses. One of them, 'heavy water', is familiar by name to the general public. 2. By the transitivity of identity, if water = H2O and H2O = ice then water = ice, which is false. He concludes that neither identity is true. (These mistakes are not confined to Putnam and Kripke; they are also found in J. J. C. Smart.) 3. There are substances known as isomers whose constituents are the same but, since the arrangements of atoms and molecules are different, the substances differ. Ethyl alcohol (C2H5OH) and methyl ether (CH3OCH3) are each composed of two carbon, six hydrogen and one oxygen atom, but the structure in each case differs. Well, we can all have horse laughs at the expense of Putnam and Kripke who, for all their kowtowing to the natural sciences, were apparently ignorant of some very elementary high-school chemistry and apparently did not bother to check. (In my case, I feel embarrassed at repeating these errors to generations of students, and puzzled why none of them told me.) But what does all this show? Putnam and Kripke will simply replace their initial statement of identity with a more complex form which might be disjunctive. Still, a few knockabout arguments are fun and the discomfiture of the mighty is always agreeable. Stroll's positive thesis, that the phenomenal character of water is the basis for its meaning and takes precedence over the microphysical, is independent of these particular swipes. Children, he says, could not otherwise learn the meaning of 'water' since they cannot perceive the microstructure (pp. 56-57, 60). At this point Stroll, exhilarated by the chase, will pick up anything to hurl at the retreating figures of Putnam and Kripke.
No, the child cannot see the microstructure. But it is no skin off Putnam and Kripke's noses to allow that children learn the use of 'water' through its phenomenal properties and are then corrected if the sample of water fails to have the usual microstructural properties. This happens with a distinction like that between gold and fool's gold. I cannot tell the difference any more than I can between 18 and 9 carat gold, or between grades of diamond. Of course, even if we need instruments to check, the distinction is ultimately phenomenal in that we can, eventually, learn how to use the right instruments to check and see for ourselves; for all I know, there may be simple tests to be done on the spot which show the difference between gold and iron pyrites. This is only an extension of something familiar. I am not very good at picking out different sorts of trees or birds where other people are very good at it. There is a role for the expert or the knowledgeable here. I know this is a bird but don't know that it is a willow warbler, though I do know that a willow warbler is a bird. (My books on ornithology make it clear that it is very hard to tell members of the warbler family apart. For example, the Marsh Warbler is described as 'very like the Reed Warbler in appearance but less common'.) Stroll neglects the intuitively correct and important notion of the linguistic division of labour. The important question is surely whether or not we speak of the meaning of 'water' having changed as more precise criteria become available. It certainly strikes me as implausible to suppose that medieval man, unbeknown to himself, operated with a concept of water that was microstructural before chemical analysis was practised or even thought about. Linguistic competence cannot be like that, surely. The point ought to be that knowing what 'water' means is not easily distinguished from knowing the facts about water, and some of these facts may be more or less arcane.
The fact-language gap is surely a philosophical miscalculation, and we owe to Quine the realization that the matter of what a word means shades imperceptibly into general facts about those things to which it applies. In the next chapter, Stroll introduces a functional element in the definition of some natural kind words such as 'jade' or 'water'; this plays at least some role in the definition, though it is less prominent than in the case of words for artefacts like 'table' or 'knife'. Such natural kind words have been labelled 'rigid designators'. If 'water' is a rigid designator, Stroll reasons, then it follows that if water vanished from the face of the earth, we would lose the meaning of 'water'. We would not; hence 'water' is not a rigid designator. But the theory of rigid designation does not carry this consequence. It applies also to proper names, where it at least succeeds in accounting for the fact that we refer to Homer without even knowing whether the one thing traditionally attributed to him, the authorship of the Iliad, is true. And Homer is no longer with us. In the event of water disappearing, 'water' would act more like a proper name. Naturally enough, the discussion progresses to the matter of fictional statements via some consideration of Kripke's notion of rigid designators, or 'tagging' as we should perhaps call it in deference to Ruth Barcan Marcus. What do we do about the names of fictional characters and objects? Go for a fictional operator, allowing that fictive objects 'exist in fiction' and therefore can be tagged, presumably, in another possible world? Abandon the idea that proper names are rigid designators because here nothing designated exists? Abandon the reference theory for fictional objects in favour of the Russellian account for fictional names, on which in fiction a proper name may equal a description? Hold that fictional names are secondary or derivative?
Stroll is convinced that the intuitive view is that these are proper names and that this common-sense view is right. He argues for this via a discussion of Strawson's 'On Referring', whose contribution to the topic he believes is unduly neglected. Strawson denies that fictional sentences are either false or nonsensical. Rather, these sentences are used to make statements that are neither true nor false (their truth or falsity 'does not arise'). Strawson is obviously wrong here and Stroll obviously right. Thus it is true to say that Lady Dedlock was the mother of Esther Summerson and false to say that Mrs Jellyby was childless. The principle that you cannot refer to what is non-existent is either trivial or false. On the ordinary understanding of 'refer' we regularly refer to fictional objects. To make the trivial point that fictional people and places do not exist by means of the observation 'Fictional names do not refer' merely adjusts the sense of 'refer' in a tendentious way. That wonderful film by Truffaut, Fahrenheit 451, crystallizes the role played in our lives by fiction. To say that we cannot so refer trivializes the question by an arbitrary redefinition of 'refer'. So far Stroll seems to me to have hit the bull's-eye; but his general remarks on fictional statements are less well judged. The strength of Strawson's analysis, he thinks, is the suggestion that works of fiction deal with 'make-believe' (p. 74) and that, in making this move, Strawson shifted the centre of interest from the question of ontology to the question of the nature of fictional discourse. (He does not mention Kendall Walton in this connection, the philosopher most closely associated with the idea of 'make-believe'.) What work does 'make-believe' do here? My absorption in the narrative of a story is not preceded by any act of making-believe on my part. There is no such intention, and 'make-believe', unlike being deceived, is something consciously done. I find that, once I 'get into'
a story, I simply follow it. When I say that I cannot do this, what I mean is that the story does not grip, it seems artificial and preposterous, but this is pretty rare. I follow even a bad film. So 'make-believe', as an account of how we follow fictional narratives and as a basis for fictional reference, is either trivial or false for precisely the same reasons as the original thesis that we cannot refer to fictional objects. On the ordinary understanding of 'make-believe' this is not what we do when we watch a film or a play. On a revised definition, we have something made trivially true through the definition. In an original discussion of natural languages (English, of course), Stroll touches on a topic which has broader resonances still, for it links with the programme of analysis which has dominated philosophy for so much of this century. In his handling of participation and mimesis, the two concepts which articulate different versions of the theory of Forms, Plato assumes that universals are not identical with particulars because they are of a higher type (pp. 66-67). In a lengthy and somewhat Austinian taxonomic discussion, Stroll distinguishes three major groupings in natural languages: 'clusters', 'chains', and 'rings'. Imagine a word association procedure motivated by linguistic interests in synonymy and partial synonymy rather than by psycho-analytic theories and you will get the picture. In some cases you will be led back to the word with which you began or a synonym of it. This is what he calls a 'chain'. In some cases the two terms, though differing in sound, are interdefinable. These constitute a 'ring'. Chains have rules which determine membership. The clarification of these ideas considers concepts like 'example', 'instance', 'model', 'copy', 'reproduction', etc., and leads Stroll to conclude that Plato conflates a number of different usages (p. 141).
To give just one or two of his examples: a copy is a copy of some particular thing and a copy is made by man; it follows that the only particulars which could be copies of Forms would be artefacts and, furthermore, that the Forms themselves, if copied, must be particulars. Now there are some objections to this general thesis which Stroll does not consider. The upshot is that usage is even more variegated than Stroll's particularism seems to comprehend. For the thesis carries the implication that if I copy Beethoven's 'Les Adieux' sonata then I copy this printed or manuscript token but not the sonata itself, which seems to me counter-intuitive; at the very least, I find myself under no pressure to say the one rather than the other. I am equally comfortable saying I copied the sonata or copied this particular token. Again, Stroll suggests that it cannot follow that a copy must be of a 'different order' from the original. Quite the reverse, he thinks; a duplicate key is also a key. But here again, the arts provide counter-examples; amongst aestheticians Stuart Hampshire's point is often recapitulated: in general, a copy of a work of art is not itself a work of art. Copy a work of art and you do not have something which is, in the important respect, of the same order as the original. Secondly, Stroll argues that samples and examples differ, not in ontology but in the uses to which they are put. (Stroll says 'logic' but 'ontology' better captures the thought.) Thus a piece of cloth can be an example of cloth (if the person to whom it is being shown does not know what cloth is) or a sample (if he wants to see a swatch of this particular weave). Stroll draws the powerfully anti-Platonist conclusion that if you do not know what x is, an example will show you; you will not be helped by being referred to a universal which cannot be inspected. This might look like a mere restatement of empiricism, but empiricism is placed in a new light by this approach.
However, we might be troubled by two thoughts. First, why should we think Plato would be concerned with departures from the logic of ordinary talk? Certainly he ought to be fazed by the last point, but I am not sure he would be. His reaction would almost certainly be 'so much the worse for ordinary distinctions'; secondly, he would presumably argue that something like intellectual intuition accounts for mathematical knowledge. Other forms of knowledge are inferior and ought not to be privileged in philosophical enterprises. He could certainly have adopted a revisionist approach and would have done so had the objection been put to him. Finally, to assume that such conceptual entanglements carry through from English into Greek is probably already to concede some sort of realism, because it can only be on the basis that facts dictate our conceptual ordering that this is likely to be so. Otherwise, if they do correspond, it is a mere accident that they correspond. Lastly, why should we trust these classifications into chains, rings, etc.? They are based, as Stroll says, on the use of a decent dictionary like Webster's. But what authority has Webster's that it should be used in this way? The classification of words is, after all, not like the classification of natural kinds. That, too, can go drastically wrong, but there are principled ways of correcting it through our grasp of natural kinds and our understanding of the causal order which puts whales in a different clade from fish. (Which is not to disregard the fact that the taxonomy of living species is the occasion of more ill-tempered controversies than most; puzzling when you consider that these disagreements seem in principle settleable.) But there are no parallel principled ways in which we can be reasonably confident that a cluster or a ring contains all the relevant words. As I remarked at the beginning, Stroll favours a piecemeal approach to concepts.
That a particularist descriptive approach to philosophy is the correct one suggests that our concepts may not, generally, be systematic. By this I mean that there will, for most of our concepts, be no clear boundaries within which they apply and beyond which they do not. This is particularly so with concepts like knowledge, belief, justification, human identity, art, right action: indeed pretty well all the concepts in which philosophers are especially interested. Borderline cases usually require a decision and, though there may be grounds for that decision, those grounds may also conflict and the decision may be arbitrary. Stroll has no theory to account for his piecemeal, Wittgensteinish, example-based approach. Should he? Well, of course, in one way even a higher-level picture of why we have our concepts of reference, knowledge, universals, etc. runs counter to his avowedly descriptivist approach, and he may shy away from any justification of it. But I think something can be said, though not by an echt-Wittgensteinian. The dedicated followers of Wittgenstein eschew any attempt at generalization at all, and in making some tentative suggestions I adulterate the pure milk of the doctrine. I am going to suggest that a justification for Stroll's approach can be found in the theory of action. Austin pointed the way towards making the theory of action primitive in philosophical explanation by making it basic to what this century has regarded as the fundamental philosophical discipline, the philosophy of language. (There is an interesting book to be written on the way that different branches of philosophy have been, in turn, privileged: philosophy of religion, epistemology, philosophy of language, philosophy of mind, perhaps aesthetics for a short period in German romanticism.) For Austin, through the idea of speech-acts, elaborated the Wittgensteinian idea that 'in the beginning is the deed'.
Such a picture does not, of itself, imply that our concepts outrun our capacity to determine them in a formal way, but it is easy to see why such a theory should suggest itself. A fundamental distinction in the philosophy of action is between what we consciously intend and what we do, either foreseeing it and not intending it, or doing it as the unforeseen consequence of our actions. The words we use determine the concepts we possess in a straightforward way. Our concept of a planet consists in a number of features of planets which trivially follow from the use of the word, together with some general truths which might or might not be part of the meaning. (There is no way of clearly distinguishing questions of fact and language where the matters of fact are pretty well universally known.) What is the domain of a concept like the concept of a planet? In this case it seems likely to be determinate. But there are many concepts whose domain is notoriously indeterminate, such as that of a tort or, paradigmatically, a work of art. If a previously unencountered artefact is declared a work of art, it might of course be unequivocally, and with universal consent, declared thus even though nobody has considered the case before. The case is unforeseen, but it happens that the lines of the concept have been drawn by implication from previous usages. But then again there will be cases where previous usage is no guide to what we should say about this case. Then we make a decision (or some one individual decides); there may be a preponderance of reasons in favour of counting this an instance of the concept in question or there may not. The degree of fiat in the decision will vary. My point is that there is no reason to suppose our concepts to be more articulated than we currently require, nor that they have any more consistency than is required for us to acquire the language which determines these concepts.
(And the amount of time wasted in philosophical analysis over the last half century in an endeavour to show that this case counts as knowledge and that case counts as mere belief, to take just one example, shows how much the Platonism which underlies the programme of philosophical analysis has corrupted the discipline.) I ought not to conclude without saying that this well-produced book is a pleasure to read. Stroll writes in an elegant and civilized manner. It is a pleasing fact about contemporary philosophy that the books which one enjoys and to which one returns are often written by thinkers who are not usually ranked amongst the so-called superstars. Stroll is consistently thought-provoking. The standard of argument varies but there is a freshness and individuality about his approach which is very attractive. Stroll's heart is in the right place even if too often it beats to the rhythm of his opponents.

NOTES
1 See J. O. Urmson's classic paper 'On Grading', in A. G. N. Flew (ed.), Logic and Language, second series (Oxford: Blackwell, 1959).
2 E. M. Curley, Descartes against the Sceptics (Oxford: Blackwell, 1978), p. 48.
3 Bernard Williams, Descartes: The Project of Pure Enquiry (Harmondsworth: Pelican Books, 1978), App.

Received 30 October 1998
R. A. Sharpe, Department of Philosophy, University of Wales Lampeter, Lampeter, Ceredigion SA48 7ED, UK. E-mail: R.A.Sharpe at Lamp.ac.UK

From checker at panix.com Wed May 18 22:53:55 2005
From: checker at panix.com (Premise Checker)
Date: Wed, 18 May 2005 18:53:55 -0400 (EDT)
Subject: [Paleopsych] Seifudein Adem Hussien: On the End of History and the Clash of Civilization: A Dissenter's View
Message-ID:

Seifudein Adem Hussien: On the End of History and the Clash of Civilization: A Dissenter's View
Journal of Muslim Minority Affairs, Vol. 21, No. 1, 2001.

Introduction

At the end of the Cold War, many leading analysts of international politics began in earnest the task of 'theorizing' where we were headed.
Outstanding among such endeavors, especially in relation to attempts to develop a new and more comprehensive understanding of the future of world affairs, are two well-known works, Francis Fukuyama's 'The End of History?'1 and Samuel Huntington's 'The Clash of Civilizations?'.2 It is the attention these analysts attracted, as well as the grandiose nature of the subject they tackle, that motivated us to take a closer look at the epistemological foundation of their theses, their general implications and the extent to which they stood the test of time. Taken together, these two views are in a sense mutually contradictory in their prophecy of what was lying ahead in the post-Cold War era. For Fukuyama, world politics becomes less anarchic, whereas Huntington believes inter-civilizational conflicts would replace the traditional inter-state conflicts, engendering a new and more dangerous type of international anarchy. Both Fukuyama and Huntington raise a number of interesting and thought-provoking issues and deserve credit for their contribution to the scholarship in this respect. In a sense intended neither to disparage them nor belittle their contribution, our task in this paper is to put their ideas under a brief theoretical, philosophical as well as empirical scrutiny and point out what appear to be some of the most outstanding weaknesses in their theses. We shall start with a brief assessment of the whole idea of the end of history. Similarly, an appraisal will then be made of Samuel Huntington's 'clash of civilizations'. The essay employs a critical method to single out the flaws in the analyses of the aforementioned scholars. Toward the end of the essay, we shall attempt to offer an alternative view and informed speculations. And yet our main concern will be to clear up errors, confusion and false assumptions in relation to the two theses.
It is worthwhile noting from the outset that the clash of civilizations and the end of history theses represent hypotheses that are poles apart in spite of their tacit commitment to a dualist and objectivist epistemology and 'realistic' ontology. The question that may hence arise is whether it is justifiable to deal with such mutually deviating hypotheses in a single research note.3 Indeed, this essay does not attempt to strictly compare the two, but not so much because they are not comparable. Instead, the reason is simply because our preferred focus is different. In theory, the fact that two theories are logically incompatible does not make them ipso facto incomparable.4

The End of History

Francis Fukuyama's main thesis was that the collapse of communism affirms 'the unabashed victory of economic and political liberalism'.5 Fukuyama did qualify his assertion by saying: '[t]his is not to say that there will no longer be events to fill the pages of Foreign Affairs' yearly summaries of international relations, for the victory of liberalism has occurred primarily in the realm of ideas or consciousness and is as yet incomplete in the real or material world'.6 Our major interest here is in the main issues that radiate from the aforementioned proposition. True, communism has collapsed. Fukuyama was on less solid ground, however, when he assertively implied that, just because of the collapse of communism, liberalism has proved its superiority over the other ideologies, and that with the collapse of communism the world is increasingly moving toward the ideology of economic and political liberalism. He was implying the superiority of liberal values when he wrote:

What we may be witnessing is the end point of mankind's ideological evolution and the universalization of Western liberal democracy as the final form of human government.7

Before advancing further, it is important to note that one can identify at least two major flaws with the aforementioned conjecture and line of reasoning: one analytic and the other empirical. Not only does Fukuyama allude to the 'superiority' of liberal values but he also seemed to have taken the truth of this hypothesis as self-evident. Needless to say, not everybody would accept this without sufficient clarification, substantiation and qualification. Similarly, it is open to question if, since the end of the Cold War, more and more people are embracing (or have embraced) liberal democracy, understood and defined by Fukuyama as popular sovereignty along with a formal guarantee and protection of individual rights.8 A few years ago L. J. Diamond9 argued that it is essential to differentiate between what he labeled 'electoral democracy' and 'liberal democracy'. According to him, these are the two visibly divergent trends following the collapse of communism in Eastern Europe and Africa: the spread of electoral rights, and the continued disrespect for the liberties that are supposed to be a precondition for a meaningful exercise of them. Diamond put together an array of empirical data to demonstrate that even the spread of 'democracy' cannot be equated with the spread of 'liberalism'. Mass participation in the political process can also at times challenge certain liberal values, as recently demonstrated in Algeria and Turkey. James Rosenau and Mary Durfee were perhaps closer to the mark than was Fukuyama when they wrote: '[t]he world's peoples are not so much converging around the same values as they are sharing a greater ability to recognize and articulate their values'.10 A caveat is in order here. Rosenau and Durfee had the benefit of hindsight, whereas what Fukuyama was engaged in six years earlier was a predictive, and by implication prescriptive, endeavor that naturally allowed relatively little latitude. To the aforementioned questionability of the empirical validity of Fukuyama's argument, we can also add a critique of the logic of his analysis, that is, in respect to the fact that his arguments proceed from an unwarranted assumption to a foregone conclusion. His assumption is that communism was defeated by liberal democracy. His conclusion is, as mentioned above, that liberal democracy is superior to all other ideologies. It can be argued that communism's defeat was due more to its inadequacy to sustain itself and achieve its ideals than to its exhaustion subsequent to putting up a good fight. As we all know, communism, while opposing liberalism, strove to perfect it. No wonder, then, that some viewed the Cold War as 'a civil war within the Western ideology'.11 It does not, therefore, follow that one ideology is superior to the other. The assumption, as well as the conclusion that presumably follows from it, is also problematic for, notwithstanding their familiarity, they have not undergone rigorous tests. Implicitly as well as explicitly, for Fukuyama, consciousness takes primacy over matter. This is what one would be led to believe after reading his contention that '...the victory of liberalism has occurred primarily in the realm of ideas or consciousness and is as yet incomplete in the real material world.'12 It is in the same sense that one would understand him when he writes: 'one unfortunate legacy of Marxism is our tendency to retreat into materialist or utilitarian explanation of political or historical phenomena, and our disinclination to believe in the autonomous power of ideas'.13 Yet Fukuyama appears to contradict himself on this issue when he refers on the first page of his essay to

...the ineluctable spread of consumerist Western culture in such diverse contexts as the peasants'
markets and color television sets now omnipresent throughout China, the cooperative restaurants and clothing stores opened in the past year in Moscow, the Beethoven piped into Japanese department stores, and the rock music enjoyed alike in Prague, Rangoon and Tehran.14

These are presumably both the catalyst and the manifestation of an unabashed victory of economic and political liberalism. While he openly favors the Hegelian conception of the relationship between matter and consciousness, his practical examples therefore seem to suggest the truth of Marxism in this regard. Fukuyama's attempt also to underscore the primacy of the ideal over the material, especially in reference to the reform movements in Russia, Eastern Europe and China, is less than convincing and lacks coherence. He writes, for instance: '[t]hat changes were in no way made inevitable by the material conditions in which either country found itself on the eve of the reform, but instead came about as the result of the victory of one idea over another'.15 The question that arises is whether it might not be the case that reformist ideas (consciousness) were conceived in the first place because of the bad state of the economy (matter). Take also the concrete examples he mentions with respect to the effect material factors can have on ideas, with reference to Burma vs. Singapore16 and China vs. Taiwan.17 In general, it appears that the relationship between matter and consciousness is circular, rather than linear as Fukuyama's analysis seems to suggest. In another book, Trust: The Social Virtues and the Creation of Wealth, which was published after The End of History and the Last Man became a bestseller, Fukuyama advanced another 'theory' that seemed to be deeply flawed logically in a similar way.
The central focus of Trust was the presumed relationship between culture and development:

...the most important lesson we can learn from an examination of economic life is that a nation's well-being, as well as its ability to compete, is conditioned by a single, pervasive characteristic: the level of trust in one society.18

A causal linkage between culture and development thus formed the fundamental premise of the entire book. Fukuyama's conclusion was that there is a one-way, direct and positive relationship between the two variables. However, he did not adequately address rival hypotheses that are the reverse of his thesis, such as whether or not the level of wealth affects the level of trust in a society, rather than the other way around; and, even if the relationship between the two variables holds true, how could we ascertain that it is linear? At least in theory, it is possible that wealth may positively affect the level of trust but only up to a certain point, beyond which it would begin to yield a diminishing return. In any event, one ought to remember that linear thinking such as Fukuyama's view of the relationship between matter and consciousness precludes the theoretical possibility that a given effect may be the result of many causes, and in turn produces still further effects, one cause reinforcing another. Again, to place Fukuyama's arguments in a proper philosophical context, we shall digress for a moment and briefly compare them with those of Immanuel Kant. For Francis Fukuyama, humankind's historical advancement is progressively linear until it reaches the end point: the end of history. Kant also had a roughly similar idea with respect to the direction of human progress: '...nature follows a regular course in leading our species gradually upwards from the lower level of animality to the highest level of humanity'.19 The very idea of progress as a directional change for the better is itself not unsusceptible to counter-argument.
It is true, one may say for instance, that humankind has witnessed significant 'progress', especially in the materialistic sense of the term. At the same time, humanity has also borne witness to events, or chains of events, that can unequivocally be regarded as historical 'retrogression'. The birth and consolidation of Fascism and Nazism during the first half of the last century and the recent revival of tribalism in different parts of the world do not seem to provide a ringing endorsement of the idea of 'a regular progression of history', let alone 'the end of history' itself. A strong disagreement with the notion of progress also comes from an increasingly large number of ecologists or environmentalists. Kant thus elaborates his idea of historical progress in more concrete terms:

...the earlier generations seem to perform their laborious tasks only for the sake of the latter ones, so as to prepare for them a further stage from which they can raise still higher the structure intended by nature...20

In this sense, and in light of what transpired in the last century, humanity's history can be viewed as either retrogressive or, at best, a progressive history replete with aberration. For both Kant and Fukuyama, historical progress is the result of the (natural) law of negation. The law of dialectics governs the transition from a lower to a higher stage of human development. In Kant's own words: '[t]he means which nature employs to bring about the development of innate capacities is that of antagonism within society, in so far as this antagonism becomes in the long run the cause of a law-governed social order'.21 Despite the similarities in this respect, Kant and Fukuyama diverge on a number of important issues. For instance, for Kant it is the formation of a confederation or union of states, not the 'triumph' of one ideology in a group of countries, which would ultimately herald 'the end of history'.
This stage could only be reached by

...abandoning a lawless state of savagery and entering a federation of peoples in which every state, even the smallest, could expect to derive its security and rights not from its own power or its own legal judgment, but solely from this great federation...from a united power and the law-governed decisions of a united will.22

As for its achievability, Kant was cautiously optimistic: "...this cycle of events seems to take so long a time to complete, that the small part of it traversed by mankind up till now does not allow us to determine with certainty the shape of the whole cycle, and the relation of its parts to the whole",23 but after the transforming effects of many revolutions "...the highest purpose of nature, a universal cosmopolitan existence, will at last be realized as the matrix within which all the original capacities of the human race may develop".24 In short, Fukuyama's "end of history" thesis appears to have been anchored in shaky logical and philosophical ground. What theory and empirical evidence seem to bear out with regard to the future of global affairs will be treated more fully in the concluding section of the essay. It would suffice here to say that we now know that the triumphant claim of the end of history was at best premature.

The Clash of Civilizations

Like Francis Fukuyama's "The End of History?", Samuel Huntington's "The Clash of Civilizations?" is an essay that purported to predict where we were headed. For the same reason it sparked much debate and discussion. By way of preface to this sub-section, let us try to delineate the philosophical boundaries of Huntington's ideas in comparison, again, with Immanuel Kant's writings on the subject. Clearly, it is difficult to compare Kant's ideas with Huntington's, since their premises as well as their conclusions are poles apart. The following contrastive points can, however, be made with respect to the philosophical foundations of the two.
Kant's views are universalistic in that their point of reference is the "human species", in contrast to Huntington's "distinct civilizations". Huntington's philosophy is particularistic, for he not only believes that "Western civilization is a superior form of civilization",25 but also prescribes ways in which this superiority can be preserved vis-à-vis the "other" civilizations. On civilization, Kant wrote: "[w]e are civilized to the point of excess in all kinds of social courtesies and proprieties. But we are still a long way from the point where we could consider ourselves morally mature".26 It appears therefore that for Kant moral maturity constitutes an important dimension of a civilization. Let us now briefly consider the logical and empirical problems associated with Huntington's idea of "the clash of civilizations". "The fundamental source of conflict in the new world will not be primarily ideological or primarily economic. The clash of civilizations will dominate global politics",27 thus hypothesized Huntington. After elaborating and refining his hypothesis, he asked the following key question: why do civilizations clash? His answer is short and simple and can be logically re-ordered as follows: (1) there are fundamental differences between civilizations (which he classified into seven or eight);28 (2) as a result of globalization there will be more interaction between them, and this will lead to increased civilization consciousness; and (3) therefore they would clash. One fallacy in this line of argument is the absence of historical or logical evidence to support the view that increased consciousness of one's civilizational identity would in itself automatically lead to civilizational conflict. If the preceding statement is correct, one may justifiably wonder whether the same phenomenon, i.e. increased interaction among civilizations, might not lead to mutual respect rather than confrontation between civilizations.
Samuel Huntington goes on to state that modernization and social change weaken the nation-state as a source of identity.29 Even though this statement is not central to Huntington's thesis of the clash of civilizations, it has profound and wider theoretical implications, the discussion of which we shall defer to the concluding section. It needs to be mentioned here, however, that the role of the nation-state as a source of identity is perhaps one of the most resilient aspects of the functions of the state, one that promises to outlive even the challenges of globalization.30 To use an ordinary example, when two individuals meet for the first time, one of the initial questions they exchange is not which religion, culture or civilization one belongs to, but where one is from. Of course, when a person introduces himself as X from country Y, what he or she means, or intends to mean, more often goes beyond the mere labeling of oneself. It also includes the desire to ensure one's "ontological" security (Alexander Wendt's term)31 by implying the relative place of each in relation to the other. Essentially, this is the effect that the speaker intends to produce in the hearer. In other words, when X introduces himself or herself to Y as coming from country Z, the introduction serves two functions. The first is the simple function of identification, useful merely for ease of communication. The second and subtler function relates to the speaker's intention to produce a more complex meaning. To speak of oneself as being from country Z in this case is to let the other know how the speaker wants to be treated by virtue of his country of origin. This function also serves the speaker in ruling out, or at least minimizing, the cognitive dissonance that would likely arise as a result of not knowing exactly "where" the other is from. Huntington also touches upon the dual role that the West plays in enhancing civilization consciousness.32 The view that civilization consciousness has increased is difficult to disagree with. The principal problem instead pertains to the related idea that this would automatically lead to increased violence and conflict among civilizations. Huntington notes that de-Westernization and indigenization of the elite are occurring in many non-Western countries at the same time that Western, usually American, cultures, styles and habits become more popular among the masses.33 Contrary to what Huntington might like us to believe, this seems to support the argument that increased civilization consciousness does not necessarily lead to civilizational seclusion and eventual clash. And many historians of civilization do agree that cultures, styles and habits constitute the core elements of any civilization. It can thus be argued, perhaps more convincingly, that increased interaction between civilizations would lead to co-option rather than collision. With a view to giving his idea a scientific flavor, Huntington then mentions the proportions by which intra-regional trade rose between 1980 and 1989.34 It is worth noting, however, that the time frame of the data is altogether irrelevant, in fact so irrelevant as to be misleading. If what he was trying to do is show how "the Velvet curtain of culture has replaced the Iron curtain of ideology",35 why use data for the period between 1980 and 1989? Similarly, Huntington surmises that culture and religion form the basis of economic cooperation, and mentions the case of 10 non-Arab Muslim countries.36 On the contrary, the reason why these countries would come together may have more to do with common economic interests than cultural similarity. True, there may be no way of figuring out what exactly was in the minds of these elites in signing a mutual economic cooperation treaty. Despite this fact, or because of it, Huntington's interpretation can only be considered just that: his own interpretation. With the Cold War over, Huntington goes on to write, the underlying (civilizational?)
differences between China and the United States have reasserted themselves.37 That may have been so for the few years before Huntington's article was published, but over the past few years China and the US have had, if anything, their most cordial relations in decades. Of course, it remains to be seen whether this is a short-term trend or a long-term pattern. With respect to Japan and the US, Huntington also has this to say: "[p]eople on each side allege racism on the other, but at least on the American side the antipathies are not racial but cultural".38 Here there is a fact that Huntington either fails to see or chooses to ignore: that, acknowledged or not, cultural bias reinforces racial bias or racism, and the difference between the two is academic rather than practical. With regard to the occasional trade disputes between Japan and the US, the way the disputes are perceived and handled is undoubtedly a function of the culture and history of each country. But does this give culture primacy over economics, as Huntington suggests? Huntington's idea of a "kin-country syndrome", which he tries to substantiate by taking the case of the Gulf War, is similarly flawed.39 Had civilizational fault lines been the major lines along which post-Cold War battles were to be fought, as Huntington's main hypothesis hints, it would be inconceivable for a Sunni Muslim Iraq to invade a fellow Sunni Muslim Kuwait in the first place. The Iraqi invasion of Kuwait was unjustifiable. And yet the conflict and the acrimonious relationship between the two represented a quarrel between two Arab states or, to put it even more precisely, a family quarrel within one "nation": an Arab nation. One should also note, in this regard, that the very idea of the "state" as a fixed political border was alien to Islamic thought. From a strictly Islamic point of view, nationhood is a self-contained and indivisible legal and sociopolitical entity.
Iraq's invasion of Kuwait at another level points to the primacy of economics rather than culture or civilization. In this sense, the Gulf War is at best a double-edged sword and at worst refutes Huntington's conjecture and interpretation, putting into question his idea of the "kin-country" syndrome. By the same token, during the war in the former Yugoslavia, US policy was not as one-sidedly pro-Serbian as Huntington describes.40 To say otherwise is to discredit the US and, in general, the West's effort to halt and punish the atrocities committed by Bosnian Serb leaders. Another case that makes the cogency of the "kin-country" theory dubious is Turkey. In one of his recent writings Huntington discusses "Turkey's rejection of Mecca only to be rejected, in turn, by Brussels".41 If civilizational identity were the major factor defining the orientation as well as the behavior of states, would it not be logical to expect Turkey, a successor state to the most recent Islamic empire, to turn its face around, embrace Mecca and reject Brussels? How can one explain such an anomaly from a clash of civilizations perspective, and what is the implication of this for its explanatory potential? For Huntington, countries with large numbers of peoples of different civilizations are future candidates for dismemberment.42 Our view is that it is not civilizational diversity in a country per se, but how that diversity is handled, or mishandled, which influences the dismemberment of multi-civilizational states. In other words, it is when the crisis of legitimacy and citizenship reaches an acute level that such states become candidates for dismemberment. Toward the end of his essay, Huntington declared: "[a] Confucian-Islamic military connection has thus come into being...and the flow of weapons and weapons technology is generally from East Asia to the Middle East".43 Huntington does not give us facts to substantiate his judgment.
If the Confucian-Islamic connection has indeed come into being, as Huntington claimed, it would undermine his kin-country argument mentioned above. Similarly, in terms of the value of weapons, the West by far surpasses East Asia as a major source of arms to Islamic countries.44 In either case, one of the central propositions of the Huntingtonian idea of the clash of civilizations would be seriously undermined. It may also be argued that neither the presence of a Confucian-Islamic military connection nor its absence need be philosophized, in terms of civilization or otherwise. The occasional groupings and affinities, or lack thereof, could merely reflect either the convergence or divergence of interests among states over a short or long duration of time. While empirical evidence does not support the claim that the Islamic-Confucian connection has come into being, civilizational logic does not point to the possibility of that happening even in the future. In fact, should civilizations become the defining factor of transnational links, an Islamic-Christian or Islamic-Jewish connection is more likely to emerge than an Islamic-Confucian one. Here we may quote Erich Weede's observation to illustrate our point:

From a Muslim perspective, for example, the clash between Islam and the rest of the world is not equally serious on all fronts. The prophet himself reserved a special place for the peoples of the book, i.e. for the adherents of monotheistic religions, for Jews and Christians. At the level of pure doctrine--not least the practice of the prophet himself--Muslim toleration of Jews and Christians looks much easier than Muslim toleration of essentially agnostic or secular Confucianism or polytheistic Hinduism with its relativistic conception of transcendental truth.45

It ought also to be pointed out that Huntington's approach lacks objectivity in that it is openly anti-Islamic.
We do think that, no matter what his personal values, had he dissociated himself from his preferences and presented his observations and findings in a neutral fashion, the contribution of his "theory" to our knowledge, whether or not it is true, would be greatly enhanced. When he writes, "Islam has bloody borders",46 for instance, the way the sentence is formulated itself reveals a lack of objectivity in his approach. The matter here is not just the morphology of the language. Why not write, for instance, "the borders between Islam and other civilizations are bloody"? After all, when we talk about a border our points of reference are two or more phenomena--the one within the border and the one without, or the one on this side and the other on the other side. Comparing the two statements, one will be left with the impression, after reading the first, that Islamic civilization is inherently violence-prone. The message Huntington's statement seeks to convey seems to be just that. In the case of the second statement, one will not be led to the prima facie assumption that one civilization is, or will be, more violent than the other. Underlying the normative foundation of his narrative is his belief that other "civilizations" are inferior or inimical to that of the West and should be kept in check by any means. To this end, in fact, he offers a piece of Machiavellian advice to his compatriots: "exploit differences and conflicts among Confucian and Islamic states".47 In short, Huntington's arguments as to what the 21st century would look like are based on reasoning from too few examples, some of which even undermine rather than support his argument. In addition, he seriously poisons his method by mixing science with politics.

An Alternative View

In the preceding pages we have tried to demonstrate that not only do "the end of history" and "the clash of civilizations"
theses have serious philosophical and logical defects from their inception, but also the facts on the ground do not unequivocally bear them out. In this section, we will first place "the end of history" and "the clash of civilizations" in a broader theoretical context. We will then attempt to offer an alternative understanding of the future of world affairs in light of what has transpired over the last 10 years. Emphases and shades of meaning may vary, but virtually all theories of international relations share the assumption of anarchy in world politics.48 Political realism asserts that international politics is anarchic. Neoliberalism also concedes that it is so. Even social constructivism, arguably the most radical and progressive school of thought, acknowledges, perhaps regretfully, that the "international system is not a very social place".49 The three most important elements embodied in the concept of international anarchy are: (1) the absence of world government, (2) the sovereignty of states, and (3) the egoistic and self-serving nature of state interests. Taken together, we are told, these constituent elements engender a potential for inter-state violence, thereby making the international system incurably anarchic. Where do the "end of history" and the "clash of civilizations" fit in this theoretical spectrum? Certainly Fukuyama's thesis is an indirect attack on the assumption of international anarchy, since what he foresaw was a peaceful and prosperous era of liberal democracy in which inter-state wars would become obsolete, or at least unnecessary. Like other mainstream theories in the field, however, the "end of history" does not challenge the role of the state as the primary actor in world politics. Ironically, "the clash of civilizations", despite its implicit endorsement of the notion of anarchy, seriously challenges one of the core assumptions of political realism--namely, the state as a unitary and primary actor in international politics.
There is also an irony in the fact that the Huntingtonian thesis at the same time shares with realism its aggressiveness and belligerency.50 However, realism does not buy into the idea of intra-civilizational solidarity or inter-civilizational clash since, as we indicated above, from its vantage point states are by definition self-serving and egoistic, and genuine, long-term cooperation among them is difficult, if not totally impossible. Developments over the past 10 years seem to indicate significantly reduced inter-state and inter-civilizational conflict, contrary to what has been suggested by neorealists like Kenneth Waltz and "clash of civilizations" proponents like Samuel Huntington.51 One explanation for this perhaps lies in the flawed assumption of anarchy in international relations shared by both schools of thought. The absence of world government is taken as an empirical equivalent of the "reign" of anarchy, as if the prevalence of "more" order necessarily presupposed the existence of a central government. We propose to argue that over the years world politics has, despite the absence of a government, progressively become more orderly than its domestic counterpart. A partial reason for this is that hierarchy rather than anarchy characterizes contemporary world politics. And this hierarchy is based on inter-subjective understanding among states rather than on enforcement by any external body. But why has contemporary world politics become hierarchical and orderly, and why progressively so? We can look again at Francis Fukuyama's thesis in our endeavor to disentangle the issues revolving around this question. Whereas it is true that history has not quite ended as Fukuyama had claimed, there are, as indicated above, more states in the contemporary international system that are "democratic" than in any period in human history.
This appears to provide a congenial atmosphere for the enhancement of the trend towards more elaborate hierarchy and orderliness in the international system and, at the same time, a more fertile ground for more domestic disorder. Let us look at each of these formidable propositions one at a time. International hierarchy is in part an extension of an innate human predisposition. Human beings naturally tend to rank and order events, peoples, states and other collectivities, however more or less systematic the process may be. This in turn may be due to the human proclivity for stability in interaction, a notion not totally unrelated to "ontological security"--a concept described earlier in the discussion. In any case, there is ample empirical evidence that human perception operates in a context of hierarchy--imagined or real. It could thus make sense for Dumont to argue that we should refer to ourselves as "Homo Hierarchicus".52 Even though we have argued that hierarchy rather than anarchy characterizes international politics, it would be wrong to assume that there is one universally agreed-upon hierarchy of states. Different sets of hierarchies have existed in different issue areas and at different times. Hierarchy also emanates from what international relations scholars call regimes. The main function of a regime is "the creation of a pattern within sets of issue areas which approximate legal liability whereby states conform to agreed rules due to converging expectations and due to the enhancement of coordinated sanctions against defectors".53 Racial, geographic, economic and cultural indicators, as well as so-called national character, have also been used to bestow upon, or withhold from, a political entity a status in the international system.
One set of such indicators, namely the quality of health, education and welfare, constitutes the "developmental hierarchy of nations".54 There are also less explicit sources of hierarchy in the contemporary international system. What we should like to stress here is that there are relatively durable hierarchies in virtually all areas of potential conflict and cooperation among states. Hierarchies are established through "voluntary agreements" or through "tests of will and strength" among rivals. Sometimes the place a state occupies could simply be bestowed upon it, and the status thus attained or assigned could be more or less attuned to what a particular state would wish it to be. Alexander Wendt has recently argued that widespread compliance by states with international rules and norms is attributable to coercion, self-interest and legitimacy.55 These same factors influence adherence by states to international hierarchy. True, there are, and will always be, instances where a given hierarchy is contested, sometimes forcefully, for it is "shared knowledge" rather than an "external body" that regulates and restrains interstate behavior. But, in the final analysis, it is true that most states do indeed follow most international laws most of the time.56 Thus far we have attempted to substantiate our argument in favor of the view that world politics is marked by a feature that is closer to hierarchy than to anarchy. We shall now turn to the related question of why domestic politics is becoming relatively more anarchic than world politics. It is important to note, first, that in domestic politics there is an alarming lack of "shared knowledge" as to one's place. All "citizens", regardless of their economic, ethnic and political standing, seriously regard themselves as equals, while the sad fact is that, unfortunately, they are not. Some are richer than others, or more educated than others, and so on.
This distinction also carries with it broad-ranging consequences, both for the social status and privileges of individuals and for conflict and anarchy. To say that such inequality of "citizens" in the face of legally "guaranteed" equality is crucial, however, is only to state the obvious, even if the obvious is often ignored. In contrast, despite the principle of the "sovereign equality" of states enshrined in the UN Charter, no state seriously considers itself equal to others. Each state fully realizes that the principle of sovereign equality does not work outside the General Assembly Hall of the UN. Hence a greater potential for anarchy in domestic politics. Instructive empirical evidence also suggests that domestic politics is progressively becoming more anarchic. It may indeed be an extreme case, but according to a recent nationwide poll, 70% of Colombians said that they are afraid of going out at night because they feel insecure.57 One wonders if there is a single state, weak or strong, that worries in these terms for its survival. In addition, individuals are currently undergoing what James Rosenau has called a "skill revolution", as a result of which they are now capable of competently assessing "where they fit in international affairs and how their behavior can be aggregated into significant collective outcomes".58 But "although citizens now have greater awareness of their circumstances and their rights, there is nothing inherent in the skill revolution that leads people more in a democratic direction".59 A related issue pertains to a state's legitimacy, understood here simply as "the ability to evoke compliance short of coercion".60 Two interconnected avenues exist for ascertaining whether or not a government is legitimate. One is by considering how the government came into being. To this end, we ask whether the leaders who hold or held office came to power through legal and constitutional means or otherwise.
Legitimacy could also be judged on the basis of the policy outputs of those who govern. In this case, the regime or the leaders provide the stimuli, first in the form of policies improving citizens' welfare and later in the form of symbolic materials which function as secondary reinforcements, and the followers provide the responses, in the form of favorable attitudes towards the stimulators.61 When we refer to policy outputs, or political outputs, as James N. Danziger puts it, we therefore mean the issues pertaining to what values will be allocated and who will benefit from, and who will be burdened by, the particular configuration of value allocation.62 In this sense, the notion of popular reaction to stimulators does not contradict the aforementioned idea of the "skill revolution". Historically, authority structures have been founded on traditional criteria of legitimacy derived from constitutional and legal sources. The sources of authority have now shifted from traditional to performance criteria of legitimacy. Again, partly as a result of "the skill revolution", and the resultant marked change in the analytical capacity of individuals, future challenges to the legitimacy of the state are likely to be significantly different from those of the past in that they will be more concerted and more powerful. D. Rothchild and A. J. Groth's observation clarifies the factors behind this transformation:

With changes in communications ensuring a ready flow of news across state boundaries, ideas on national self-determination, racial equality, inter-group conflict, and political liberalization are readily diffused to an international audience. Such a diffusion (or contagion) effect spreads information on domestic political demands and conflict relations to an international audience by example rather than by deliberate action, initiating an international learning process with enormous potential for conflict creation.63

Conclusion

We may conclude that democratization makes it easier to assess the intentions, and therefore predict the behavior, of states than of individuals. Indeed, the structure of corporate "minds" is typically written down in organizational charts that specify the functions and goals of their constituent elements, and their "thoughts" can often be heard or seen in the public debates and statements of decision-makers.64 Intriguingly enough, democratization of political systems appears at the same time to engender more anarchy domestically while enhancing order in the realm of inter-state relations. This is also consistent with the result of a recent study, which found that autocracies are much less vulnerable to state failure than are partial democracies. In sub-Saharan Africa, for instance, the study concluded, other things being equal, partial democracies were on average 11 times more likely to fail than autocracies.65 Our tentative conclusions are that history has not yet ended, as Fukuyama had claimed, and that the end of history, should it somehow happen, would be a bane for domestic politics and a boon for world politics. As regards the clash of civilizations, our conclusion is that such a clash does not appear imminent for, among other things, states rather than civilizations continue to provide individuals with a badge of identity. In the preceding pages we have also called into question both the logical and empirical validity of the assumption of anarchy in international relations. Since this assumption constitutes the bedrock of contemporary international relations theories and raises wider questions in relation to the end of history and the clash of civilizations, it may be profitable for both the theoreticians of international relations and its practitioners to analyze it adequately from a variety of approaches.

NOTES

1. Francis Fukuyama, "The End of History?", National Interest, Summer 1989, pp. 3-19.
For a brief account of the historical evolution of the idea of the end of history, see E. H. Carr, What is History?, London: Penguin Books, 1990, especially pp. 110-119.
2. Samuel Huntington, "The Clash of Civilizations?", Foreign Affairs, Summer 1993, pp. 22-49.
3. In fairness to Fukuyama and Huntington, it should be pointed out that our assessment of the two theses is based on their short essays and not on their more elaborate and expanded ideas later published in book form. Here objections are likely to arise from those who deny the propriety of our approach. To such an objection our answer is that, while the books are certainly richer in empirical and theoretical detail, the central arguments and their logic reflect essentially the same line of reasoning, and the truncated versions are therefore more useful for our purpose.
4. In fact, two hypotheses that, logically and philosophically, are different from one another could both be true. See Bertrand Russell, The Art of Philosophizing and Other Essays, Totowa: Littlefield, 1974, p. 58. A good example is a theory in psychology known as the alpha, beta, gamma hypothesis, according to which three different hypotheses relating to learning had each been supported under different experimental circumstances. The alpha hypothesis states that the frequency with which a behavior is performed enhances learning. The beta hypothesis states that repetition frequency has no effect on learning. The gamma hypothesis states that repetition frequency hinders learning. See Jennifer Bothamley, Dictionary of Theories, London: Gale Research International, 1993, p. 20.
5. Fukuyama, "The End of History?", op. cit., p. 1.
6. Ibid., p. 4.
7. Ibid.
8. This is not to deny the empirical fact that more than half of the world's population today lives under "democratic" governments. In Robert Dahl's count, for example, there were 86 "democratic" countries in 1997 as compared with only eight in 1900.
See Robert Dahl, "The Shifting Boundaries of Democratic Governments", Social Research, Vol. 66, 1999, pp. 921-923.
9. L. J. Diamond, "Is the Third Wave Over?", Journal of Democracy, Vol. 7, No. 3, 1996, pp. 20-37.
10. James N. Rosenau and M. Durfee, Thinking Theory Thoroughly: Coherent Approaches to an Incoherent World, Boulder: Westview Press, 1995, p. 36.
11. Chris Brown, "History Ends, Worlds Collide", Review of International Studies, Vol. 25, 1999, p. 52. The renowned African political scientist Ali A. Mazrui made a similar observation nearly a decade before the end of the Cold War: "Marxism...is a child of the West. Karl Marx and Friedrich Engels were themselves Westerners and their theories and ideas emerged out of Western intellectual and economic history. In that sense, the confrontation between Marxism and Western civilization is between a parent and the offspring; it is an intergenerational conflict in the realm of ideas and values" (italics original). See Ali A. Mazrui, The Moving Cultural Frontier of World Order: From Monotheism to North-South Relations, World Order Models Project, Working Paper No. 18, New York: Institute for World Order, 1982, p. 18. See also Ali A. Mazrui in this issue (Editor).
12. Fukuyama, "The End of History?", op. cit., p. 4.
13. Ibid., p. 6.
14. Ibid., p. 3.
15. Ibid., pp. 7-8.
16. Ibid., p. 11.
17. Ibid., pp. 11-12.
18. Francis Fukuyama, Trust: The Social Virtues and the Creation of Prosperity, London: Hamish Hamilton, 1995, p. 7.
19. Immanuel Kant, Political Writings, 2nd edn, ed. Hans Reiss, Cambridge: Cambridge University Press, 1991, p. 48.
20. Ibid., p. 44.
21. Ibid.
22. Ibid., p. 47.
23. Ibid., p. 50.
24. Ibid., p. 51.
25. The tendency both to classify civilizations as distinct and to refer to one as superior to another is very problematic. The reason is that different "civilizations" have been intermingling and borrowing ideas from one another so much that it becomes hard to talk about their distinctiveness.
Robert W. Cox, for instance, reminds us in regard to the relationship that had existed between the Islamic and Western civilizations in these terms: ?It was through contact with the higher culture of Islam that the Christian West recovered knowledge of Greek philosophy ?. See Robert W. Cox, ?Towards a Post?hegemonic Conceptualization of World Order: Re.ections on the Relevancy of Ibn Khaldun?, in Governance Without Government: Order and Change in World Politics, eds James N. Rosenau and Ernst?Otto Czempiel, New York: Cambridge University Press, 1992, p. 151. 26. Kant, Political Writings, op. cit., p. 49. 27. Huntington, ?The Clash of Civilizations? ?, op. cit, p. 22. 28. Ibid., p. 25. 29. Ibid., p. 26. 30. Haruhiro Fukui, ?The Changing State in the Changing World?, International Political Economy (Kokusai Seiji Keizaigaku Kenkyu), Vol. 1, March 1998, pp. 1?10. 31. Wendt de.nes ontological security as ?the human predisposition for a relatively stable expectations about the world around them ...along with the need for physical security, this pushes human beings in a conservative homeostatic direction, andto seek out recognition of their standing from their society?. See Alexander Wendt, Social Theory of International Politics, Cambridge: Cambridge University Press, 1999, p. 31. 32. Huntington, ?The Clash of Civilizations? ?, op. cit., p. 26. 33. Ibid., p. 27. 34. Ibid. 35. Ibid., p. 31. 36. Ibid., p. 28. 37. Ibid., p. 34. 38. Ibid. 39. Ibid., pp. 35?36. 40. Ibid., p. 37. 41. Huntington, ?The West: Unique Not Universal?, Foreign Affairs, Vol. 75, 1996, pp. 28?46. 42. Huntington, ?The Clash of Civilizations? ?, op. cit., p. 42. 43. Ibid., p. 47. 44. Between 1994 and 1998, the top four suppliers of conventional weapons to Egypt, Iran, Kuwait, Oman, Qatar, Saudi Arabia and UAE were, in descending order, USA, Russia, France, UK and Germany. See SIPIRI Yearbook. Armaments, Disarm ament and International Security, New York: Oxford University Press, 1999, p. 426. 45. 
Erich Weede, ?Islam and the West: How Likely Is the Clash of Civilizations ??, International Review of Sociolog y, Vol. 8, No. 2, 1998, pp. 185?186. 46. Huntington, ?Clash of Civiliz ations??, op. cit., p. 35. 47. Ibid., p. 49. 48. It is perhaps considerations such as this that prompted some analysts to declare that the assumptio n of anarchy sets international relations from other disciplines (rather than settingmerely one brand of international relations theory from an other). See Hans Mouritzen, ?Kenneth Waltz: A Critical Rationalis t between International Politics and Foreign Policy?, in The Future of International Relation s. Masters in the Making? eds Iver B. Neumann and Ole Waever, London and New York: Columbia University Press, 1997, p. 79. 49. Wendt, ?Social Theory?, op. cit., p. 2. 50. Here our reference is especially to what Gidon Rose has recently called ?aggressive realists?. See Gideon Rose, ?Neoclass ical Realism and Theories of Foreign Policy?, World Politics, Vol. 51, 1998, p. 146. 51. In 1998 out of the 27 major con.icts in the world all but two were domestic. See SIPIRI, 1999, p. 7. In fact, there has been a steady decline in the number of inter?state wars in the international system since 1648. See Kal J. Holsti, Peace and War: Armed Con. icts and International Order, Cambridge: Cambridge University Press, 1991; and Kal J. Holsti, The State, War and the State of War, Cambridge: Cambridge University Press, 1996. 52. L. Dumont, Homo Herarchicus, Chicago: University of Chicago Press, 1980. 53. Attributed to Robert Keohane, in M. Suhr, ?Robert O. Leohane: A Contemporary Classic?, in The Future of International Relations. Masters in the Making? eds Iver B. Neumann and Ole Waeve r, London and New York: Routledge, 1998, p. 98. 54. This taxonomy is from H. Barbera, Rich Nations and Poor in Peace and War. Continuity and Change in the Development Hierarchy of Seventy Nations from 1913 through 1952, Lexington, Toronto and London: Lexington Books, 1973, p. 
From checker at panix.com Wed May 18 22:54:19 2005
From: checker at panix.com (Premise Checker)
Date: Wed, 18 May 2005 18:54:19 -0400 (EDT)
Subject: [Paleopsych] Esteban Buch: Ein deutsches Requiem: Between Borges and Furtwängler
Message-ID: 

Esteban Buch: Ein deutsches Requiem: Between Borges and Furtwängler
Journal of Latin American Cultural Studies, Vol. 11, No. 1 (2002)

'Many things will have to be destroyed so that we can construct the New Order: we know now that Germany will be one of them'.1 So says Otto Dietrich zur Linde in Deutsches Requiem, the story that Borges published in Sur in February 1946, just as the Nuremberg Trials were being held. The title of Brahms's opus 45, Ein deutsches Requiem, but now with the indefinite article amputated, was transferred to a fiction about the defeat of Nazism. The dimension of mourning of the original thus found a contemporary inscription with the imminent execution of the story's narrator amidst the ruins of Germany. The double shadow of this historical experience, where the destinies of individual and nation coincide, is also projected on to the postwar world: 'Tomorrow I will die, but I am a symbol of the generations to come' (p. 174). Borges's title, which remains enigmatic given that the author does not reference its source, creates multiple echoes from the association, itself enigmatic, between death and what is German. A German requiem, a requiem in German, a requiem for a German, a requiem for Germany.... These are all possible meanings for the service for the dead, originally a Catholic sung mass, which Brahms transposes into the Lutheran ethos. The requiem asks for eternal rest for the dead and in doing so promises earthly peace for the people or things that survive. Death, immortal without hope of consolation, can escape into the regime of requiem aeternam. But what lives on here is violence, the only truth of Nazi doctrine: 'What matters is that violence reigns, not this servile Christian timidity. If victory, injustice and happiness are not for Germany, then let them belong to other nations. Let heaven exist, even though our dwelling place is in hell' (p. 179). So injustice is compatible with happiness. Deutsches Requiem treats Nazism as a moral problem, as an 'act of morality' as the narrator says (p.
176), and this would include the apparent paradox that so disturbed Thomas Mann and George Steiner, of the coexistence of absolute evil with high culture. In the Borges story, this tension is embodied in the figure of zur Linde himself, the sub-commandant of a concentration camp. He is crippled, impotent and cruel, and at the same time is devoted to Schopenhauer, Shakespeare and Brahms. 'He who pauses in wonder, moved by tenderness and gratitude, before any aspect of the works of these auspicious creators, let him know that I also paused there, I the abominable' (p. 174). The speaker is a man 'repelled by violence' (p. 174), but one who, with the austere renunciation and commitment of an executioner engaged in redemptive sacrifice, carries out the most brutal acts on a daily basis. This image is infinitely more terrible than the cliché of the Wagnerian Nazi, who reads out passages of the Anti-Christ to his victims. The narrator's preference for the 'pure music' of the Germanic tradition moves the moral question onto an abstract plane. Is it a call to think about the common humanity shared by ordinary people and someone seen as 'abominable'? Or is it a condemnation of the inhuman that is always lying in wait for a normal human being capable of 'wonder'? Or is it perhaps an allusion to what is abominable in the work of the 'auspicious'? In fact, the passion for high culture is not the reverse side of the monster but a constitutive element of his moral personality. Brahms is not the residue of morality in the immoral man, not even a neutral site for the suspension of practical judgement, but a vector in the perversion of that judgement. The character's autobiographical remarks reveal that the very people who led him away from Christianity and towards his belief in the primacy of violence were precisely his three culture heroes: 'Schopenhauer, with direct arguments; Shakespeare and Brahms for the infinite variety of their worlds' (p. 174).
It is not coincidental that this last sentence carries an echo of Borges's own reading of Schopenhauer: 'The infinite number of possible melodies correspond to the infinite variety of individuals, faces and lives that Nature produces'.2 The definition of Nazism as a cult of violence dates back at least to a 1940 article that Borges wrote attacking Argentine 'Germanophiles' and in which Schopenhauer's name was invoked in an argument against barbarism (Brahms's name, by contrast, was absent).3 Deutsches Requiem is a fiction, but the obligation to read it in moral or political terms lies in a complex mesh of displacements and reconfigurations, which include other writings by Borges as well as different traces of historical and literary reality.4 There is an inter-textual work represented inside the story itself, in the counterpoint between the narrator's monologue and the footnotes of an unknown editor, who not only points out the narrator's omissions but also leaves out, or censors, the description of zur Linde's torture of the Jewish poet David Jerusalem, thereby acting as a sort of moral voice. Revealing some of the strands of this fabric might throw light on the historical inscription of the text. For example, there really was a man called Otto zur Linde,5 who was an 'almost forgotten' Expressionist poet. In May 1933, a party was given to celebrate his sixtieth birthday by a group of young National Socialists. This was, according to the Nazi newspaper, the Völkischer Beobachter, because his work was 'dedicated to the people and the Fatherland' and 'was prophetic in announcing the birth of a new human race'.6 Whether Borges knew about this particular episode or not, the skewed morality that takes shape in his story seems already inscribed in the trajectory of this obscure real-life figure.
The reconstitution of such elements, be they intentional or not, means that items of historical information go beyond their relevance to the sources of the fiction to become data for a contemporary interpretation. In the story, Otto zur Linde carries out his tasks in Tarnowitz concentration camp. The place never existed, but its name evokes the Polish village of Tarnow, where a local uprising was crushed by the Nazis in 1943.7 Likewise, the first editorial footnote points out the omission in the narrator's ancestry of the 'theologian and Hebrew scholar' Johannes Forkel, without mentioning that this imaginary name recalls that of Johann Nikolaus Forkel, who was Johann Sebastian Bach's first biographer. Perhaps this would be insignificant for a political history of literature, but from a musicological perspective it links the mention of Brahms to the whole canon of Germanic music. But, more importantly, the editor indicates in another of his notes that the apocryphal poet David Jerusalem is not so much a 'made-up character' but 'a symbol of various individuals', adding that 'many Jewish intellectuals, among them the pianist Emma Rosenzweig' were tortured at Tarnowitz on zur Linde's orders. Thus, the editor attributes to zur Linde what is in fact Borges's own literary strategy, the latter, for example, giving to his imaginary pianist the name of the real writer Franz Rosenzweig, the author of Der Stern der Erlösung (The Star of Redemption, a book which stands in complex relation to the work of Martin Heidegger), and that of Arthur Rosenzweig, the head of the Jewish Council in the Krakow Ghetto, murdered by the Nazis in May 1941.
Numerous pianists died in the camps, among them Renee Gartner-Geiringer, who was deported from Vienna to Theresienstadt, where she would give concerts of Brahms's music, before being later murdered in Auschwitz in 1944.8 The surname Geiringer also belongs to one of Brahms's chief biographers, a librarian at Vienna's Gesellschaft der Musikfreunde, who went into exile after the Anschluss. There is another echo of Schopenhauer in the description of the pianist as intellectual, since the philosopher had resolved the polarity of the narrator's emotions, 'music and metaphysics', by attributing a metaphysical meaning to music, long before the nationalist musicologists of the Third Reich came to celebrate music as 'the most German of the arts'.9 In taking account of a 'Jewish intellectual' annihilated twice over, first by torture and then by the omission of her name, the text confirms its critical distance from the Nazi project to make Brahms and Theresienstadt, or 'Brahms' and 'Tarnowitz', exist side by side. Only the title itself and the epigraph from the Book of Job -- 'Though he slay me, yet I will trust in him' -- stand outside this dialogical game between narrator and editor. The status of these two elements is uncertain. The quotation from the Book of Job in particular suggests a specular game between the character's faith and the Judeo-Christian tradition, condemning Nazi doctrine as no more than an inverted reflection of its enemies' beliefs. Years later, Borges would explain that 'the protagonist zur Linde, is a kind of saint, evil and mad, a saint whose mission is abhorrent'.10 But how did Borges get to the German Requiem? Why this particular title? There are no indications that Borges had a chance to hear it in Buenos Aires during that period; had he done so, the circumstances might have given the gesture a certain public relevance.
Nor did the work have a private relevance, then or at any other time: 'I was aware that Brahms had written a work with that title, but I didn't know it. I chose the title simply because it seemed to fit'.11 So the writer exploited the emblematic potential of the piece never having heard it, and thus having no aesthetic experience of the music. The work becomes a political symbol at the cost of its insignificance, or its silence. Can one ask: if Borges had known the work, would his tale have been any different, would it have been any better? The question is banal since there is no way of answering it, but it gives rise to the possibility that, in all ignorance, Borges might have echoed something which was present in the non-fictional reception of Brahms's work -- not in Buenos Aires, but in Germany. Might he not have reproduced a gesture already present in the actual musical practices of that key moment in the collapse of the Third Reich, Year Zero of democratic Germany?

* * *

The polysemy in Borges's title is directly related to Brahms's project. To begin with, the words Ein deutsches Requiem nach Worten der heiligen Schrift, für Soli, Chor und Orchester (Orgel ad lib.) point up the originality of a mass for the dead in German, based on texts of the Lutheran Bible chosen by the composer. The work is inscribed in a Catholic tradition, defined liturgically by the Latin text and legitimized aesthetically by the canonical place awarded to the Mozart Requiem, but constitutes a development of that form, becoming an original, one might say, almost modern gesture, yet at the same time one that is close to the German tradition of works for the dead, such as those of Heinrich Schütz (Teutsche Begräbnissmissa) or Johann Sebastian Bach (Actus Tragicus BWV 106). However, the range of meanings of the term 'deutsch' in the Germany of 1868 is irreducible to this descriptive dimension. Brahms was not only conscious of this fact but saw it as a source of misgivings.
This concern appears in a letter written the previous year to Carl Reinthaler, who was then preparing the work for its first performance in Bremen. 'As far as the text is concerned, I confess I would be happy to leave out the word "Deutsch", and replace it by the word "Menschen" (Human)'.12 Extracted from the letter that was only published in 1908, this sentence would become crucial to the work's reception, and in general it would provide a justification for its anti-nationalist interpretation, a gesture that would be extended to the stylistic plane by John Eliot Gardiner, to warrant a 'de-Wagnerized' version.13 But it is only with some difficulty that Brahms's misgivings can be explained by his critical distance from nationalism. The opposition between cultural nationalism and a universalist humanism, so characteristic of the decades after the Second World War, makes for anachronistic judgements when projected back to Brahms's own period. As Daniel Beller-McKenna has demonstrated,14 at the time of the Requiem's composition the adjective 'German' might have had unwanted connotations. For example, it might have indicated a preference for Prussia, something quite contrary to Brahms's own feelings as a native of Hamburg who was attracted by Vienna, even after Austria's defeat in the war with Prussia in 1866. Or it might have suggested a Protestantism far too militant for a man like Brahms who was receptive to less dogmatic versions of religion. Nevertheless, the risk of language, politics and religion interfering with the interpretation of the work in a potentially uncontrollable way did not dissuade Brahms from keeping the word 'deutsch' in its title, though one symptom of Brahms's continuing unease is his referring to the piece as 'my so-called [sogenannte] German Requiem'. The work remains open, too, from a generic and stylistic point of view.
As a major composition by a 'young maestro' (Brahms was then 35 years old) it succeeded in combining some of the principal opposing tendencies of the period: a knowledge of counterpoint with an audacious sense of harmony; a command of form with subjective expression; tradition and modernity. It is certainly true that the work is lacking in an instrumental dimension that would make it a symbol of 'absolute music'. It also lacks the legitimization that Wagner brought to vocal music when he transposed the secular rite of opera into tragic terms. On the other hand, as a major composition for soloists, chorus and orchestra it takes its place in the oratorio tradition, which, from Handel, Haydn and Mendelssohn onwards, had made it possible for the professional and amateur musical worlds to come together. In other words, it brought together in an altogether decisive manner the field of sacred music and the space of the profane concert. This is well illustrated in an analysis of one of its most characteristic moments: the second movement, Denn alles Fleisch, es ist wie Gras ('For all Flesh is as Grass'). This begins with an instrumental funeral march, repetitive, almost obsessive, over which the choir intones a melody, which in its impressive simplicity immediately recalls a chorale. In fact, the composer was actually inspired by a chorale, though in a quite different way than Mendelssohn had been when making his backward-looking gesture in the oratorios. The alliance between the vocal form, characteristic of the Protestant service of communal singing within the closed space of the church, and the instrumental passages, which mark the moment of collective mourning in the open spaces of the city, illustrates the way Brahms articulates the semantics of musical genres. He does not simply quote, nor does he negate tradition, but, rather, he produces a synthesis and a reconfiguration of their elements.
This is confirmed, moreover, by the abrupt passage from the circular logic of the first part of the movement, in B flat minor, to the contrapuntal joy of the second part in D major. This dramatization of the polarity of the Christian discourse on death (pain on earth, glory in the beyond) shows that cultivated technique does not lead to a sacrifice of intelligibility of expression. At the same time, the delicate modulation of the melody of the chorale (a change of mode in the descending section of the phrase) reveals a desire to subordinate the evocation of traditional material to an enrichment of the harmonic vocabulary and thus reveal the originality of the new maestro. This is the modern dimension, necessary for the composer to be included in the line that begins with Bach and Beethoven. Schumann had already announced Brahms's place in 1853, inflecting his prophecy with a comment about genre: 'If his magic wand can summon from the abyss forces that contain the potential of the masses and harness them in choir and orchestra, then we can expect marvellous new visions of the mysterious world of the spirit'.15 The programme was accomplished: Ein deutsches Requiem would ensure Brahms a rapid entry into the national pantheon as a great composer, who would figure in Hans von Bülow's famous triad Bach-Beethoven-Brahms. This establishment of Brahms as a great national composer was connected to the Franco-Prussian War, and more specifically to the memorial ceremonies for the soldiers who had died during the conflict. Ein deutsches Requiem had indeed had a partial premiere in Vienna in 1867, and then a full premiere in Bremen in 1868, and had then been performed in various cities in Germany during 1869. But there then followed a year's silence, until 10 November 1870, when the work was heard in Cologne, and for the first time as 'für in die Kriege Gefallenen'.
Other commemorations followed, including, in September 1871, a partial version of the Requiem in Plauen for the Sedanfeier.16 So, in these concerts in memory of the war dead, the 'German Requiem' echoes as a 'Requiem for the Germans' -- a nationality defined here in relation to the outside, to the foreign, to the enemy, as befits the context of a war that was also seen as an ideological confrontation. But not, on the other hand, a 'Requiem for Germany': not only had Prussia won the war, but its victory had opened the gates to the foundation of the German Empire. To perform the Requiem for the fallen brought it closer to the dialectic of sacrifice of the soldier-martyr: they died so that the Fatherland might live. Of course, the nationalist resonance did not completely obliterate the relative autonomy of musical practice. But a heteronomous hearing of this work would be further reinforced by a new piece by Brahms that would openly aspire to the category of political music. The Triumphlied, opus 55, with its original dedication to 'The victory of German Arms', and then to the foundation of the Reich and the Emperor Wilhelm I, would lead the 'composer of the German Requiem' to be awarded the status of 'the most important German composer alive today', at least by the anti-Wagnerians.17 This ranking would be preserved beyond the conjuncture of 1871 and the fleeting transformation of Ein deutsches Requiem into a political symbol. Subsequently, Brahms's opus 45 would become an important work in the repertoire, but without its political interpretations being in any way coherent. And if this were true in Germany, there was even more reason for it to be so in countries like England and France.
The work was first performed in London in 1871 (in an arrangement for two pianos, known as the 'London Version') and was absorbed into the tradition of Handelian oratorio and the choral works of Johann Sebastian Bach, and any political symbolism in the work eluded its English audience, never much inclined to search for one in the first instance. In Paris, the work was conducted for the first time by Jules Pasdeloup in 1876. Until the First World War, the work's presence in the repertoire continuously provoked negative comment, which was consonant with the low estimation of all of Brahms's work in France at the time. The composer was caught in the crossfire between those who wished to reduce him to the figure of an anti-Wagnerian conservative, and those who wanted to see him as the vector of an aggressive Germanism. Visible in the external politics of the Reich, this was also uncovered in the musical works that Paris was regularly exposed to, in part thanks to the frequent visits of conductors of German orchestras. But there were very few who saw the Requiem as more than just one example amongst many of the 'heaviness' of recent German music. Only the critic of the Mercure de France, Jean Marnold, seems to have metaphorically exploited the fact that the work was a requiem, linking it in an article he wrote in 1905 to 'the learned mentality of a people who would appear incapable of leaving their dream of the sentimental "lied" without falling into an indigestible didacticism, which is either empty or banal, and whose spirit, burned out after so long as the epoch's shining sun, can now only manage to light candles in amphitheatres or mausolea'.18 After the First World War, the reception of the Requiem was in line with its composer's loss of prestige as an anti-romantic reaction took hold.
Even the attempts of Schoenberg and Adorno to rehabilitate Brahms during the centenary of his birth in 1933 failed to arrest his waning reputation.19 But at the same time his work provided the compositional model -- and anti-model -- in two works that in their own ways represented the polarities of German musical life prior to the Second World War: Kurt Weill's Berliner Requiem (1929), settings of poems by Brecht and a modernist critique of official memory of the First World War, and Gottfried Müller's Deutsches Heldenrequiem (1934), which set a text of Klaus Niedner, and was a neo-Romantic apology for the Nazi doctrine of heroic sacrifice, projected backwards onto those who fell in the War of 1914-18. During the Third Reich itself, Johannes Brahms preserved his place at the heart of the German cultural Pantheon, as a representative of a musical patrimony that was regularly invoked as an antidote to critical or modernist pretensions. But in the face of the Wagner cult and the enthronement of his historical rival, the great symphonist Bruckner, Brahms, the friend of the Jew Joseph Joachim, could expect no special honours. And military events would not inspire the Nazi authorities to evoke a Requiem that was meant to inspire resignation in the face of death rather than martial values or a willingness for self-sacrifice.

* * *

Nor was the work used in any systematic way after the war to give a ritual form to the work of mourning or to offer a critical revision of history, of the sort that Borges's character would attempt. However, we can link the fiction to certain episodes, so as to yield, if not a coherent totality, then at least some significant relations. On 30 March 1945, the very day on which the Red Army entered Austrian territory and two weeks before the capital was taken, the Vienna Philharmonic under Clemens Krauss gave a performance of Ein deutsches Requiem in the Gesellschaft der Musikfreunde.
A few days after the cancellation of an 'Anschluss concert' because of the shelling of the Staatsoper, this would be the last concert that the Philharmonic gave during the Nazi period, in its incarnation as a 'German' orchestra seriously committed to the Hitler regime. The historian Clemens Hellsberg gives an account of this episode, based on a 'war record' that the orchestra's management put together, and describes it as 'without doubt particularly emotional'.20 But beyond this, there is no other testimony from the period that would allow one to give a political interpretation other than the very fact of historical inscription. Things were different in the Bremen performances of 19, 20 and 21 November 1945. The concerts marked the resumption of musical life under the auspices of the North American authorities, and a well-known music teacher from the city commented in the local paper: 'This solemn mass for the dead occupies a particular place in the hearts of Bremen music-lovers, since it was here that it had its first performance on 10 April 1868.... And it was the cathedral choir that once again gave a performance of the highest artistic calibre, to commemorate the 75th anniversary of that notable event, on 10 April 1945. Because of this, the choir has made every effort to give several performances of this work, so resonant of the desire for peace.'21 Despite his nodding to 'the desire for peace' resonant in the work, the writer of the article was much more interested in underlining a continuity with the past -- from the foundation of the Second Reich to the culmination of the Third -- than with making the recent catastrophe an occasion for rupture and renovation.
Despite its apparent modesty, such a project seemed to use Brahms's work as the basis for a veritable memorial programme for Year Zero, something which Otto Dietrich zur Linde would surely have countenanced, given his own desire to associate his ancestry with the whole of German military history since the eighteenth century, including the Franco-Prussian War of 1870. More enigmatic considerations are suggested by the careers of two figures, who, each in his own way, crystallize the moral problem posed by the role of music in the Third Reich. On 20 August 1947, Ein deutsches Requiem was performed at the Lucerne Festival: Elisabeth Schwarzkopf and Hans Hotter sang the solo roles, and the performance was conducted by Wilhelm Furtwängler. Two months later, the same soloists sang on a recording of the work by the Vienna Philharmonic, conducted by Herbert von Karajan. The Lucerne performance was one of the first concerts given by Furtwängler after his exoneration by two Denazification Hearings in Vienna and Berlin. The Karajan recording, on the other hand, preceded his definitive exculpation, thanks to the skills of the producer Walter Legge in convincing the Occupation authorities that the ban on Karajan giving concerts did not prevent him from making records. Neither Furtwängler nor Karajan seems to have left any account of his feelings, reflections or intentions in these truly historical circumstances. And if there were commentators who remarked on the exceptional nature of these occasions, none of them did so in a political key.22 Denial or simple indifference? In any case, the real political meaning of the musical performances is a matter of speculation. For Karajan's biographer, Richard Osborne, the 1947 Requiem is 'sincere and moving to the point where it becomes unendurable'23 and reveals that his hero had 'acutely perceived the events of the time'.
But Osborne does not explain the basis for his particular account of the work, and on the previous page denies that the recordings that Karajan made of Johann Strauss's works during the Third Reich contain any audible trace of the conductor's understanding with the regime -- the understanding of a man who had joined the Nazi party in 1933, but whose brilliant career had been checked after his marriage in 1942 to a woman of Jewish origin. The question of what Karajan really felt at that moment remains open. The same is true of the meaning of the performance of the Requiem that Furtwängler conducted in Lucerne, despite the fact that the characteristic features of his style are perfectly identifiable: the slow construction of the initial crescendo, the impressive presence of the timpani in the second movement, the interminable pause at the finale of the third movement, the dramatic explosion in the sixth movement. But perhaps the most striking political dimension of this version is its incomplete and imperfect recording, which, as the recent reconstruction by the Paris Furtwängler Society has revealed, bears the ambiguous splendour of ruins in sound -- ruined sound, a technological trace of a Europe that had itself become a huge field of ruins as a consequence of the 'violence and faith in the sword' proclaimed by Otto Dietrich zur Linde. The composer Paul Hindemith saw that this moment represented a rupture in the history of the German nation. His response was the 1946 work written in the United States, whose original title, An American Requiem, was a direct allusion to the Brahms Requiem. With settings of poems by Walt Whitman, When lilacs last in the dooryard bloomed, A Requiem 'For those we love' has a Jewish melody in its eighth movement which could be interpreted as a subtle reference to the Holocaust. One can recognize something of the composer's own historical experience in this passage from the withered grass of the Old Testament to the New World's flower-covered fields.
Thus, after the catastrophe the political legacy of Brahms's work would sound from the other side of the Atlantic, through the mouth of one of the principal representatives of the 'German music' that the Nazi regime had condemned as 'degenerate'. It was this same Hindemith whom even Furtwängler's intervention with Josef Goebbels failed to save from disgrace in 1934, an episode that must surely rank, however, as one of the most dignified moments in the tortuous and tortured career of the most famous conductor of the Third Reich. But it was on the ruins of Hitler's Germany that the future, the world of postwar Europe, would be constructed. 'The world believed that after defeating the demon that was Hitler's Germany good would triumph and order would be re-established. But it was only the first incarnation of a demon that still survives, ever more angry...' Furtwängler writes in his notebook in 1946.[24] At exactly the same moment, Borges has his character say: 'An implacable era is hovering over the world. We forged it, we who are already its victims. What does it matter if England is the hammer and we are the anvil?' (p. 179). Of course, the torturer praises what the musician laments. However, the moral difference does not eliminate the fact that the two ideas converge: both the fictional and the historical figures believe that from now on violence will reign rather than 'servile, Christian timidity'. Although the conductor was not talking about music here, it is as if Brahms's Ein deutsches Requiem echoes in his words, with all the emotional charge of Borges's Deutsches Requiem. It is as though, for all his ignorance and his deafness, the Argentine writer was like a sleepwalker and, in a work that he did not even know, divined the historical opening it represented, in all its tragedy and chaos.
Borges's lack of interest in music was sufficiently blatant to discourage any thought of making him engage in dialogue with the history of music, and with classical music in particular. For all his pleasure in listening to Brahms during his meetings with Bioy Casares, Borges never seemed to have let the music stop him working: Brahms, as opposed to Debussy, was just background noise. On the other hand, music as an art had a conceptual place in his universe; at least it did so after he had read The World as Will and Representation. Thus one can read in 'History of the Tango': 'Schopenhauer has written that music is no less immediate than the world itself: without the world, without a common fund of memories that can be evoked by language, there would certainly be no literature, but music can do without the world: there could be music and no world'.[25] The quotation continues: 'Music is will, passion: the tango, as music, would directly transmit that warlike joy whose verbal expression was the aim of Greek and Germanic rhapsodies in remote times'.[26] We can leave on one side for the moment to what extent the tango was an expression of will or passion for Borges. As for classical music, it is clear that he saw it not as an immediate experience but, in an eminently non-Schopenhauerian way, as a 'common fund of memories that can be evoked by language'. In other words, as literature, or as history. This is something that might create legitimate doubts and even a sense of disapproval in music lovers, but it is not so far distant from the way in which works of music and their titles go on living in the shared space of the public world. They are reproduced as parts of canonical formulae, become part of the historical memory of nations, and register collective horrors. They are transformed into symbols.
An intuition that Borges, although describing himself as an 'intruder' in the domain of music, would confirm in 1976, in the poem To Johannes Brahms, the musician who, according to the text, knew how to lavish his 'gardens' on 'the plural memory of the future'.[27]

Translated by Philip Derbyshire

Notes

1. Jorge Luis Borges, 'Deutsches Requiem', Sur, No. 136 (February 1946), pp. 7-14; reprinted in El Aleph, in Obras completas I (Buenos Aires: Emecé, 1996), pp. 576-581. Translated by Julian Palley in Labyrinths, ed. by Donald A. Yates and James E. Irby, with a preface by André Maurois (Harmondsworth: Penguin Books, 1970), pp. 173-179. (Translation amended.)
2. Arthur Schopenhauer, The World as Will and Representation, trans. by E.F.J. Payne (New York: Dover Publications, 1966).
3. Jorge Luis Borges, 'Definición de un germanófilo', El Hogar (Buenos Aires, 13 December 1940).
4. See Annick Louis, 'Besando a Judas. Notas alrededor de "Deutsches Requiem"' (Kissing Judas: Notes on 'Deutsches Requiem'), in Jorge Luis Borges: Intervenciones sobre pensamiento y literatura, ed. by C. Canaparo, W. Rowe and A. Louis (Buenos Aires: Paidós, 2000), pp. 61-67.
5. Thanks to Annick Louis for pointing out Otto zur Linde's Expressionist connection.
6. Völkischer Beobachter, Berlin (10 May 1933).
7. Aharon Weiss, 'Tarnow', in Encyclopedia of the Holocaust, ed. by I. Gutman (New York and London: Macmillan, 1990), vol. 4, pp. 1451-1454.
8. Joza Karas, La musique à Terezin, 1941-1945 (Paris: Gallimard, 1993), p. 147.
9. See Pamela M. Potter, Most German of Arts: Musicology and Society from the Weimar Republic to the End of Hitler's Reich (New Haven, London: Yale University Press, 1998).
10. Interview with James E. Irby [January 1960], in Jorge Luis Borges (Paris: Cahiers de l'Herne, 1964), p. 395.
11. Ibid., p. 396.
12. Letter of 6 October 1867, Johannes Brahms im Briefwechsel mit Carl Reinthaler (Berlin, 1908), pp.
7-12, quoted in Daniel Beller-McKenna, 'How deutsch a Requiem? Absolute Music, Universality, and the Reception of Brahms's Ein deutsches Requiem, op. 45', 19th-Century Music XXII/1 (Summer 1998), University of California Press, p. 5.
13. John Eliot Gardiner, 'Brahms and the "Human" Requiem', Gramophone, 68/815 (April 1991), pp. 1809-1810. Quoted in ibid., p. 4.
14. Article cited.
15. Robert Schumann, 'Voies nouvelles' [1853], in Sur les musiciens (Paris: Stock, 1979), p. 285.
16. See Max Kalbeck, Johannes Brahms, Band II, 1862-1873 (Berlin: Deutsche Brahms-Gesellschaft, 1921; reprint Tutzing: Hans Schneider), pp. 284-286; and Klaus Blum, Hundert Jahre Ein deutsches Requiem von Johannes Brahms (Tutzing, 1971), pp. 109-112.
17. Franz Gehring, Allgemeine musikalische Zeitung, No. 26 (26 June 1872).
18. Jean Marnold, Mercure de France (15 June 1906), p. 610.
19. Arnold Schoenberg, 'Vortrag, zu halten in Frankfurt am Main, 12.II.1933', in Verteidigung des musikalischen Fortschritts. Brahms und Schönberg, ed. by A. Dümling (Hamburg: Argument, 1990), pp. 162-170; Theodor W. Adorno, 'Brahms aktuell' [1934], in Gesammelte Schriften 18, Musikalische Schriften V (Frankfurt: Suhrkamp, 1984), pp. 200-223.
20. Clemens Hellsberg, Les grandes heures du Philharmonique de Vienne (Paris: Éditions du May, 1992), p. 369.
21. Ernest Kretschmer, 'Ein deutsches Requiem', Weser Kurier, Year 1, No. 2 (22 September 1945).
22. See for example R. de C., 'Semaines musicales de Lucerne', Gazette de Lausanne (6 September 1947).
23. Herbert von Karajan. Une vie pour la musique. Entretiens avec Richard Osborne (Paris: Archipel, 1999), pp. 25-26.
24. Wilhelm Furtwängler, Carnets 1924-1954. Écrits fragmentaires (Geneva: Georg Éditeur, 1995 [1946]), p. 84.
25. Jorge Luis Borges, Evaristo Carriego, in Obras completas I, op. cit., p. 161.
26. Ibid.
27.
Jorge Luis Borges, 'A Johannes Brahms', La Moneda de Hierro (The Iron Coin), in Obras completas III (Buenos Aires: Emecé, 1996), p. 139.

From checker at panix.com Wed May 18 22:56:13 2005
From: checker at panix.com (Premise Checker)
Date: Wed, 18 May 2005 18:56:13 -0400 (EDT)
Subject: [Paleopsych] Satoshi Kanazawa: Why productivity fades with age: The crime-genius connection
Message-ID: 

Satoshi Kanazawa: Why productivity fades with age: The crime-genius connection
Journal of Research in Personality 37 (2003) 257-272
Department of Psychology, University of Canterbury, Private Bag 4800, Christchurch, Canterbury, New Zealand

Abstract

The biographies of 280 scientists indicate that the distribution of their age at the time of their greatest scientific contributions in their careers (age-genius curve) is similar to the age distribution of criminals (age-crime curve). The age-genius curves among jazz musicians, painters and authors are also similar to the age-crime curve. Further, marriage has a strong desistance effect on both crime and genius. I argue that this is because both crime and genius stem from men's evolved psychological mechanism which compels them to be highly competitive in early adulthood but "turns off" when they get married and have children. Fluctuating levels of testosterone, which decrease when men get married and have children, can provide the biochemical microfoundation for this psychological mechanism. If crime and genius have the same underlying cause, then it is unlikely that social control theory (or any other theory specific to criminal behavior) can explain why men commit crimes and why they desist.

1. Introduction

A person who has not made his great contribution to science before the age of thirty will never do so. Albert Einstein (Brodetsky, 1942, p. 699)

* Fax: +64-3-364-2181. E-mail address: Satoshi.Kanazawa at canterbury.ac.nz.
Anecdotal evidence abounds that artistic genius or productivity fades with age. Paul McCartney has not written a hit song in years, and now spends his time painting. J.D. Salinger now lives as a total recluse and has not published anything in more than three decades. Orson Welles was a mere 26 when he wrote, produced, directed and starred in Citizen Kane, which many consider to be the greatest movie ever made. The relationship between age and genius appears to be the same in science. It is often said that physics and mathematics are young men's games, and physicists and mathematicians tend to think they are over the hill at age 25 (Mukerjee, 1996). John von Neumann, putatively the most brilliant scientist who ever lived, used to assert brashly when he was young that mathematical powers decline after the age of 26, and only the benefits of experience conceal the decline, for a time anyway. (As von Neumann himself aged, however, he raised this limiting age.) (Poundstone, 1992, p. 16). James D. Watson made the greatest discovery in biology in the 20th century at the age of 25, winning the Nobel prize for it, but has not made any other significant scientific contribution for the rest of his career. This paper addresses two questions. Does productivity truly fade with age? If so, what explains this phenomenon? While the question of why productivity fades with age may in itself be of trivial scientific importance, I will argue that the study of the age trajectories of scientists and other geniuses illuminates a very important question in behavioral science: why men commit crimes and why they desist. I will note that the relationship between age and genius, not only among scientists but among musicians, painters, and authors as well, is very similar to the relationship between age and criminality, and suggest that this is because the same mechanism produces the expressions of both genius and criminality.
I will further note that marriage has the same negative effect on both genius and criminality, and thus any criminological theory that explains the desistance effect of marriage purely in terms of social control is not sufficient (because scientists, unlike criminals, are not subject to social control, and because scientific work is not illegal or deviant in any way).

2. Does productivity really fade with age?

In order to examine the relationship between age and scientific productivity, I study a random sample of the biographies of 280 scientists (mathematicians, physicists, chemists, and biologists) from The Biographical Dictionary of Scientists (Porter, 1994). There are a few scientists from the 16th and 17th centuries, but the overwhelming majority comes from the 18th century to the present. The biography of each scientist in this dictionary follows the same format. The first, brief paragraph lists the scientist's full name, years of birth and death, his nationality and field of research, and the most significant scientific contribution in his entire career. (97.8% of the scientists in my sample are male.) For most Nobel laureates, this is the discovery or research for which they won the Nobel prize. The next one or two paragraphs detail the scientist's educational career and the history of institutional affiliations: where he received his degrees and which positions he held at what institutions. Then the next few paragraphs summarize the research career of the scientist, enumerating the dates of major discoveries and publications. I use the date of the discovery or experiment which is listed in the first paragraph as the scientist's most significant contribution in his career to denote the peak of his career. If the date of the discovery or experiment is different from the date of its publication, I use the former date. Then I calculate the scientist's age at the peak of his career by subtracting the year of his birth from that of his peak. Fig.
1 presents the distribution of the peak age among the 280 scientists in my sample. It is apparent from the histogram that scientific productivity indeed fades very rapidly with age. Nearly a quarter (23.6%) of all scientists make their most significant contribution in their career during the five years around age 30. Two-thirds (65.0%) will have made their most significant contributions before their mid-thirties; 80% will have done so before their early forties. The mean age for the peak of scientific career is 35.4; the median is 34.0. Most significantly, the interquartile range (the distance between the 75th and 25th percentiles, encompassing the middle half of the distribution) is merely 12 years. Peak scientific productivity appears to occur in a quick burst within a few years of the scientists' lives around age 30.

Fig. 1. The age of peak scientific achievement, 280 scientists.

My data replicate Lehman's (1953) classic study of the history of scientific discoveries, which shows that more significant discoveries are made by younger scientists than by older ones, and thus the age of the scientist has a negative effect on the likelihood of making a significant discovery. My data are also consistent with Cole's (1973) and Levin and Stephan's (1991) studies of representative samples of contemporary scientists, which show that scientific productivity rapidly increases shortly after the Ph.D. and gradually declines thereafter. Taken together, the evidence does indicate that scientific productivity fades with age.

3. What about other types of productivity?

Fig. 1 demonstrates the age distribution of scientific productivity, but what about other types of productivity? Scientific discoveries are not the only way genius expresses itself. What about more artistic forms of genius? Music? Literature? Fig. 2 presents the relationship between age and productivity in jazz music (Miller, 1999, Fig.
5.1). It plots, separately for men and women, the ages at which 719 jazz musicians released their 1892 albums. (Unlike the age distribution of the greatest scientific discoveries in Fig. 1, the distributions in Fig. 2 count the same musician more than once. However, Simonton's (1988, 1997) equal-odds rule asserts that scientists make the most significant contributions when they make the largest number of contributions. If Simonton is correct, then these two measures, one of quantity and the other of quality, are equivalent.) Fig. 2 shows that the relationship between age and productivity in jazz music among male musicians is virtually identical to the relationship between age and scientific discoveries among largely male scientists in Fig. 1. There appears to be no discernible relationship between age and jazz productivity among female musicians. In this random sample of jazz albums produced between the 1940s and 1980s in the United States or Britain, the male musicians outnumber the female musicians by 20 to 1 (male:female = 685:34).

Fig. 2. The age-genius curve among jazz musicians. Source: Miller (1999).

Fig. 3 presents the same relationship among modern painters (Miller, 1999, Fig. 5.2). It plots, separately for men and women, the ages at which 739 artists painted 3274 paintings. Once again, Fig. 3 clearly shows that the relationship between age and productivity in modern paintings among male artists is virtually identical to the age distribution of scientific discoveries in Fig. 1. Once again, the same relationship does not hold among female painters. In this exhaustive sample of every datable painting owned by the Tate Gallery, London, as of 1984, where the artist's last name begins with A through K, the male artists outnumber the female artists by roughly seven to one (male:female = 644:95).

Fig. 3. The age-genius curve among painters. Source: Miller (1999).

Fig. 4.
The age-genius curve among authors. Source: Miller (1999).

Finally, Fig. 4 presents the same relationship among authors (Miller, 1999, Fig. 5.3). It plots, separately for men and women, the ages at which 229 writers published 2837 books. Once again, Fig. 4 demonstrates that the relationship between age and literary productivity among male authors is virtually identical to the age distribution of scientific genius in Fig. 1. The same relationship among female authors, if it exists at all, is far weaker and seems to peak somewhat later. In this random sample of 20th-century English-language fictions and nonfictions, the male authors outnumber female authors by roughly four to one (male:female = 180:49). Thus the relationships between age and productivity in fields as varied as science, music, art and literature share two characteristics in common. First, in all fields, the age distribution among male practitioners has a virtually identical form. Second, in all fields, men far outnumber the women. What can possibly explain these common features in the age distribution of genius in such varied fields?

4. The crime-genius connection

The most curious aspect of the relationship between age and genius represented in Figs. 1-4 is that these distributions (which I would like to call the "age-genius curves") very closely resemble another very well-known age distribution: the invariant age-crime curve (Hirschi & Gottfredson, 1983), presented in Fig. 5. Criminologists widely recognize that criminal behavior, especially among men, rapidly rises during adolescence, peaks in late adolescence and early adulthood, and then equally rapidly declines through adulthood, reaching a plateau at a very low level around age 40.

Fig. 5. The age-crime curve. Source: Kanazawa and Still (2000, p. 435, Fig. 1).
(For empirical illustrations of the invariant age-crime curve, see Blumstein, 1995, Figs. 2 and 3; Daly & Wilson, 1990, Fig. 1; Hirschi & Gottfredson, 1983, Figs. 1-78.) While the validity and universality of the invariant age-crime curve, with some minor variations, are beyond dispute in the criminological literature, there currently is no satisfactory theory that can explain why the relationship between age and criminal behavior takes the shape that it does.[1]

Kanazawa and Still (2000) offer an evolutionary psychological explanation for the invariant age-crime curve. They extend Daly and Wilson's (1988, 1990) theory of homicide and explain all types of violent and property crimes as consequences of young men's competition for access to women's reproductive resources. The theory posits that young men become rapidly violent and criminal during the years right after puberty. There is no point for prepubertal boys to compete for women, but the reproductive benefits of competition quickly rise after puberty, since post-pubertal men can translate increased access to women's reproductive resources into greater reproductive success (see Fig. 6a). The theory also explains the rapid decline in criminal behavior among adult men as a function of the increased costs of competition and its potentially harmful effects on reproductive success (see Fig. 6b). While men can always increase their reproductive success by gaining greater access to women's reproductive resources, competition for women can result in their own death or injury, which would be detrimental to the welfare of their existing offspring. In other words, while the reproductive benefits of competition (interpersonal violence and property misappropriation) remain high for men for their entire lives (as Fig. 6a shows), the reproductive costs of such competition quickly increase after they have had children (as Fig. 6b shows). Their children will suffer if they are injured or killed in the course of the competition.
Kanazawa and Still argue that this is why men desist quickly during early adulthood, when they were likely to have had their children in the ancestral environment. The age-crime curve is the mathematical difference between the reproductive benefits and costs of competition (see Fig. 6c).

[1] There is another uncanny resemblance between crime and scientific productivity. Cole's (1979) study of a representative sample of contemporary mathematicians in the United States demonstrates that, while the career trajectories of a majority of mathematicians follow what I call the "age-genius curve," where their productivity, measured both by the quality and quantity of their publications, peaks very early in their careers and gradually declines thereafter, there is a small minority of mathematicians who produce a large quantity of high-quality work throughout their careers. This dichotomy of mathematicians is reminiscent of Moffitt's (1993) taxonomy of "adolescence-limiteds" and "life-course persistents" among criminals. Moffitt argues that most men's antisocial behavior peaks in adolescence and then declines throughout the rest of their lives (following the age-crime curve), while there is a small minority of career criminals who continue to engage in antisocial behavior throughout their lives. While my focus in this paper is on the majority of scientists and criminals whose expressions of genius and criminality follow a predictable life-course pattern, I would not be surprised if the same hormonal factors underlie the behavior of what Cole (1979) calls life-long "strong publishers" and that of what Moffitt calls "life-course persistents."

Fig. 6. The benefits and costs of competition and the age-crime (and age-genius) curve. (a) Reproductive benefits of competition. (b) Reproductive costs of competition. (c) Propensity toward competition = benefits - costs.

It is important to keep in mind two significant points in any discussion of evolutionary psychological theory of human behavior (Kanazawa, 2001). First, evolved psychological mechanisms, such as the ones that compel young men to act violently toward each other, operate mostly behind conscious thinking. Young men feel like acting violently or want to steal others' property, but they do not know why. Organisms (including humans) are usually not privy to the evolutionary logic that placed the psychological mechanisms in the brain to solve adaptive problems. Criminals themselves are therefore unaware of the ultimate causes of their behavior; they are not consciously pursuing reproductive success when they engage in criminal behavior. Their preferences and desire for violence and crime serve as the proximate causes of their behavior. Second, all evolved psychological mechanisms are adapted to the ancestral environment where humans evolved for millions of years. Behavior that stems from evolved psychological mechanisms (such as criminal behavior) is therefore often maladaptive in the current environment, which is so vastly different from the ancestral environment. In particular, the psychological mechanisms that compel young men to be violent and steal from others assume that there are no third-party enforcers of norms in the form of the police and the courts (because such things did not exist in the ancestral environment). The fact that criminals today can have lower reproductive success than law-abiding citizens is immaterial for the claim that the psychological mechanism that produces criminal behavior was once adaptive in the ancestral environment. The logic of the theory requires that this psychological mechanism have evolved before informal norms against violence and theft emerged in protohuman primate society in the course of evolution. Such a psychological mechanism could not
have emerged after the emergence of norms against violence and theft, because then men would not be able to attract mates by eliminating competitors through violence and accumulating resources through theft. In the context of such informal norms, men with tendencies toward violence and theft would be ostracized and would not have attained greater reproductive success.[2] In fact, the norms against violence and theft probably emerged in response to men's evolved psychological mechanism that compels them to behave in antisocial ways. The fact that violent and predatory acts that would be classified as criminal if committed by humans are quite common among nonhuman species that do not have informal norms against such acts (Ellis, 1998) supports this speculation. I suggest that the age-genius curve looks similar to the age-crime curve because the same psychological mechanism that compels men to commit crimes also compels them to make great scientific contributions and express their genius in other forms. This also explains why men far outnumber women both in crime and in various expressions of genius. Miller (1999, 2000) argues that the production of jazz music, modern paintings and books is an example of "cultural display" designed to attract mates. I contend, counterintuitive though it might sound at first, that the same psychological mechanism that compels some men to engage in cultural display in order to attract mates, by producing cultural products or making scientific discoveries, also compels other men to engage in criminal activities. Both crime and genius are expressions of young men's proximate competitive desires, whose ultimate function in the ancestral environment would have been to increase reproductive success. I contend that productivity (observable expressions of genius such as scientific discoveries, jazz albums, paintings, and books) is a function of two components: genius and effort.
Genius (or talent in some endeavor), while unobservable, clearly varies between individuals. Some have it, others do not. Further, different people have genius in different endeavors. J.D. Salinger could not have been the fifth Beatle; Paul McCartney could not have written The Catcher in the Rye. Effort, I contend, results from competitiveness, and all men have the universal age profile of competitiveness, which is probably identical to the age-crime curve and peaks in late adolescence and early adulthood. From this perspective, genius per se does not have to decline with age. It is instead the life-course fluctuations in effort (competitiveness) that make productivity fade with age. Paul McCartney probably still has the genius which would allow him to write another Yesterday; he just does not feel like it, especially after his recent remarriage (see below). Crime may be thought of as the "default" expression of male competitiveness, in two senses. First, unlike scientific and artistic endeavors, crime (young men killing each other to get access to available women) probably happened in the ancestral environment. (Our ancestors might have had primitive art and music, but they certainly did not produce CDs, portraits, and books.) Second, once again unlike scientific and artistic endeavors, criminal behavior does not require any special talent (or "Genius" in the equation: Productivity = Genius + Effort). This is why I believe the age-crime curve more closely resembles the age profile of competitiveness in men's life course than the age-genius curves do. Crime is the product of men's competitiveness when they have no genius (that is, when Genius = 0 in the equation Productivity = Genius + Effort). This is consistent with the well-known fact that criminals on average have lower intelligence than noncriminals.

[2] I thank Barbara J. Costello and Allan Mazur for independently making this point.
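The argument so far can be summarized as a toy computation: effort (competitiveness) over the life course is the difference between the reproductive benefits and costs of competition (Fig. 6), and observable productivity adds a talent term (Productivity = Genius + Effort). The sketch below is illustrative only; the logistic curve shapes and every parameter value are my assumptions, not estimates from the paper.

```python
import math

def benefits(age):
    # Fig. 6a: near zero before puberty, rising rapidly afterward and
    # staying high for life. Logistic curve centered on the mid-teens
    # (assumed shape and parameters).
    return 1.0 / (1.0 + math.exp(-(age - 15) / 2.0))

def costs(age):
    # Fig. 6b: low until men are likely to have children, then rising
    # toward the benefit level (assumed shape and parameters).
    return 1.0 / (1.0 + math.exp(-(age - 25) / 4.0))

def effort(age):
    # Fig. 6c: propensity toward competition = benefits - costs.
    return benefits(age) - costs(age)

def productivity(age, genius):
    # Kanazawa's decomposition: Productivity = Genius + Effort.
    # With genius = 0 this is the "default" (crime) profile.
    return genius + effort(age)

# The effort curve is hump-shaped: it peaks in late adolescence or
# early adulthood and declines toward zero through middle age.
peak_age = max(range(10, 61), key=effort)
```

Under these assumed parameters the peak falls around age 20; shifting the cost curve later shifts the peak later, which parallels the paper's explanation for why the age-genius curves peak somewhat later than the age-crime curve.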
Today, men can express their competitiveness ("effort") in evolutionarily novel ways in science, music, art and literature, if they have talent ("genius") in these endeavors. This is probably why the age-genius curves (in Figs. 1-4) peak somewhat later than the age-crime curve (Fig. 5). Productivity in arts and sciences, unlike crime, requires men to respond to evolutionarily novel stimuli and situations, and their response to such evolutionarily novel environments might be delayed. Their evolved psychological mechanism (competitive urge) may not respond to evolutionarily novel pursuits such as science and art as quickly or reliably. This is similar to the fact that our desire to reproduce, which we share with and inherit from our ancestors, is expressed much later in our lives (in terms of actual reproduction), compared to our ancestors, in the evolutionarily novel environment of post-industrial, monogamous society with compulsory education and reliable contraception. Likewise, the competitive urge of men who lack talent in any endeavor is expressed earlier in the evolutionarily familiar, default form of crime and violence, but the same competitive urge of men who have talent in some endeavor is expressed somewhat later in evolutionarily novel forms of science, music, art and literature. Consistent with this reasoning, there is evidence to show that criminals, whose productivity peaks early, also marry earlier than noncriminals. In their prospective longitudinal study of 500 delinquents and 500 nondelinquents in the Boston area, Glueck and Glueck (1968) show that delinquent men on average marry earlier than their nondelinquent counterparts. For instance, more than twice as many delinquents marry at age 18 or younger as nondelinquents do (7.4% vs. 3.6%), while a larger proportion of nondelinquents postpone their first marriage until after 25 than do delinquents (33.8% vs. 28.1%) (χ² = 11.01, p < .05) (Glueck & Glueck, 1968, p.
82, Table VIII-3).[3] In the ancestral environment, most (if not all) competition between men was physical, and its potential costs included death and physical injury. This is why men become increasingly less competitive as they age: they must shift their reproductive effort from mating to parenting once they have children, and dead or injured men do not make good fathers (see Fig. 6). This is no longer true in the current environment, where men compete in scientific and artistic endeavors. There are no physical costs to competition in these evolutionarily novel endeavors; scientists do not literally perish when they fail to publish. However, men's competitive urge, adapted to the ancestral environment and the default form of competition (crime and violence), nonetheless compels them to desist from competition as they get older, if more gradually than was the case in the ancestral environment. Their evolved psychological mechanism compels them to act as if competition always carries physical costs.

[3] One reviewer points out that criminals mostly pursue resources, not status, whereas artists and scientists mostly pursue status, not resources. This difference in reproductive strategy can also potentially account for the difference in age peaks between the crime and genius curves, if it takes men longer to attain status than resources.

Miller (1999, 2000) argues that women judge men's underlying genetic quality by their "cultural displays" of artistic expressions. In the course of sexual selection, women have been selected to be attracted to men whose competitive urge manifests itself in arts and sciences. Men who can win the Nobel prize or the Grammy are obviously more capable than those who cannot. These men will, therefore, make better fathers and providers for their offspring, even though their competitive urge will soon decline after marriage and parenthood, and their productivity will fade.
However, fathers do not have to win the Nobel prize or the Grammy every year to earn sufficient resources to make parental investment into the offspring. Their superior genetic quality has already been demonstrated when they were young and highly competitive. This is why highly competitive and successful men (in whatever endeavor) attract mates; they can bring in more resources and be better fathers even when they are not being highly competitive later in life.

5. The comparable effect of marriage on crime and genius

Crime and genius share something else in common: Marriage depresses both. Fig. 7 presents the age-genius curve separately for scientists who were married sometime in their lives (n = 186) and for scientists who remained unmarried for their entire lives (n = 72). (I used Debus (1968) and Gillispie (1970-1980) to obtain information on the scientists' marital history, but I was not able to ascertain the marital history of 22 scientists.) The histograms clearly show that the age-genius curve holds only for married scientists. The age-genius curve among these scientists is essentially the same as that for the entire sample, but the peak occurs a bit earlier in an even quicker burst (mean = 33.9, median = 32.5, IQR = 11.3). In contrast, expressions of genius among scientists who never married do not decline sharply. Half as many (50.0%) unmarried scientists make their greatest contributions in their late 50s as in their late 20s. The corresponding percentage among the married scientists is 4.2%. The mean peak age among the unmarried scientists is 40.0, the median is 38.5, and the IQR is 16.8. The difference in mean age between the married and unmarried scientists is statistically significant (t = 4.83, p < .0001). Given that science did not exist in the ancestral environment, men's evolved psychological mechanism appears to be rather precisely tuned to marriage as a cue to "desistance."
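The t = 4.83 reported above is a standard two-sample comparison of peak ages (married: mean 33.9, n = 186; unmarried: mean 40.0, n = 72). The paper does not report standard deviations, so the sketch below derives rough SDs from the reported IQRs (SD ≈ IQR/1.35 for approximately normal data); those SDs, and hence the resulting t, are assumptions for illustration, not the paper's own computation.

```python
import math

def pooled_t(mean1, sd1, n1, mean2, sd2, n2):
    """Two-sample t statistic with pooled variance (equal-variance form)."""
    sp2 = ((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2)
    return (mean1 - mean2) / math.sqrt(sp2 * (1 / n1 + 1 / n2))

# SDs guessed from the reported IQRs (IQR / 1.35); purely illustrative.
sd_married = 11.3 / 1.35    # ~8.4
sd_unmarried = 16.8 / 1.35  # ~12.4

t = pooled_t(40.0, sd_unmarried, 72, 33.9, sd_married, 186)
print(round(t, 2))  # -> 4.54 with these guessed SDs; the paper reports 4.83
```

That a crude IQR-based reconstruction lands in the same ballpark as the published statistic suggests the reported difference is robust to the exact dispersion assumed.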
Nearly a quarter (23.4%) of all married scientists make their greatest contributions, and thus "desist," within five years after their marriage. The mean delay (the difference between their marriage and their peak) is a mere 2.6 years; the median is 3.0 years. It therefore appears that scientists desist rather quickly after their marriage, while unmarried scientists continue to make great scientific contributions later in their lives. Similarly, Hargens, McCann, and Reskin's (1978) study demonstrates that childless research chemists are more productive than those with children.⁴

Fig. 7. The age-genius curve among the married and unmarried scientists.

This is exactly the pattern observed among criminals. Criminologists have known that one of the strongest predictors of desistance from criminal careers is a good marriage (Laub, Nagin, & Sampson, 1998; Sampson & Laub, 1993). Criminals who get married, and especially those who maintain strong marital bonds to their wives, subsequently stop committing crime, whereas criminals at the same age who remain unmarried tend to continue their criminal careers. Sampson and Laub (1993) and Laub et al. (1998) explain the strong desistance effect of marriage from the social control perspective (Hirschi, 1969). Marriage creates a bond to the conventional society, and investment in this bond, in the form of a strong marriage, makes it less likely that the criminal would want to remain in the criminal career, which is incompatible with the conventional life. Marriage also increases the scope and efficiency of social control. Now there is someone living in the same house and monitoring the criminal's behavior at all times. It would be more difficult for the criminal to escape the wife's watchful eye and engage in illicit activities.
However, Sampson and Laub's social control theory, and its explanation of the desistance effect of marriage, could not be the whole answer if marriage has the same desistance effect on scientists. Unlike criminal behavior, scientific activities are completely within the conventional society, and are thus not at all incompatible with marriage and other strong bonds to conventional society. Unlike criminals, scientists are not subject to social control (by their wives or otherwise) since scientific activities are not illegal or deviant in any way. I believe an evolutionary psychological theory provides a more parsimonious explanation for the desistance effect of marriage for both crime and science in the form of a single psychological mechanism that compels young men to compete and excel early in their adulthood but subsequently turns off after the birth of their children. Further, there seems to be a biochemical microfoundation to the desistance effect of marriage. David Gubernick's unpublished experiment (discussed in Blum, 1997, p. 116) demonstrates that the testosterone levels of expectant fathers drop precipitously right after the birth of their children. Mazur and Michalek (1998) show that marriage decreases, and divorce increases, men's testosterone levels. If high levels of testosterone predispose men to be more competitive, then the sudden drop in testosterone after their marriage and the birth of their children might provide the biochemical reason why men's psychological mechanism to commit crime or make great scientific discoveries "turns off" when they get married and become fathers, and simultaneously why the same mechanism does not "turn off" when the men (be they criminals or scientists) do not get married.

⁴ Contemporary readers might suggest that unmarried scientists continue to make scientific contributions much later in their lives because they have more time to devote to their careers. Unmarried, and therefore childless, scientists do not have to spend time taking care of their children, driving them back and forth between their soccer practices and ballet lessons, or doing half of the household chores, and that is why unmarried scientists can continue making great contributions whereas married scientists must desist. This is precisely Hargens et al.'s (1978) interpretation of the negative correlation between parenthood and productivity among research chemists. I would remind the readers, however, that almost all the scientists in my sample lived in the 18th and 19th centuries, when married men made very little contribution in the domestic sphere and their wives did not have their own careers. Hargens et al.'s data come from 1969 and 1970, when this was probably still true to a large extent. I would, therefore, contend that, if anything, married scientists probably had more (rather than less) time to devote to science, because they had someone to take care of their domestic needs at all times.

Now there are other phenomena which exhibit similar age distributions, such as automobile accidents and other risk-taking behavior. In fact, men who engage in crime and deviance are also prone to have accidents and engage in risk-taking behavior (Hirschi & Gottfredson, 1994). Criminologists have known that criminals do not specialize; men who engage in one type of crime also engage in many others. I believe it is entirely possible that different types of crime and deviance, accidents and other forms of risk-taking behavior are all manifestations of the same underlying psychological mechanism that compels young men to be highly competitive. For one thing, we know from automobile insurance statistics that marriage depresses men's tendency to have automobile accidents.

6. Conclusion

Perhaps the tragic life of the French mathematician Évariste Galois (1811-1832) best illustrates my argument (Singh, 1997, pp. 210-228).
Despite the fact that he died at age 20, Galois made a large number of significant contributions to mathematics. (His work was integral to Andrew Wiles' celebrated proof of Fermat's Last Theorem in 1994.) Galois was involved in an affair, and the woman's fiancé challenged him to a duel. The night before the duel, Galois stayed up all night and wrote down all of his mathematical ideas on paper. (It is due to these notes, written on the last night of his life, that many of Galois' ideas survived for posterity.) From other comments written on the paper, next to a series of mathematical notations, however, it is clear that Galois spent the night thinking intensely about the woman over whom he was to have a duel the next morning. Something compelled this young man of 20 to produce so many brilliant mathematical ideas in one night and then go to a duel the next morning, ready to kill or be killed over a woman. It is my contention that the same psychological mechanism was responsible for both. If the age-crime curve and the age-genius curve have similar shapes, and if marriage has the desistance effect on both crime and genius, then it is highly unlikely that the social control theory of criminal behavior and desistance (Laub et al., 1998; Sampson & Laub, 1993), or, for that matter, any theory that is specific to criminal behavior, can hold the whole key to why men commit crimes and why they desist. Following Daly and Wilson (1988) and Kanazawa and Still (2000), I argue that a single psychological mechanism is responsible for making young men highly competitive during early adulthood and then quickly making them desist after their marriage in later adulthood. It is my contention that both crime and genius are manifestations of young men's competitive desires to gain access to women's reproductive resources, which, in the ancestral environment, would have increased their reproductive success.

Acknowledgments

I thank Barbara J. Costello, Steven W. Gangestad, Travis Hirschi, Rosemary L. Hopcroft, Christine Horne, Alan S. Miller, Joanne Savage, and Dean Keith Simonton for their comments on earlier drafts.

References

Blum, D. (1997). Sex on the brain: The biological differences between men and women. New York: Penguin.
Blumstein, A. (1995). Youth violence, guns, and the illicit-drug industry. Journal of Criminal Law and Criminology, 86, 10-36.
Brodetsky, S. (1942). Newton: Scientist and man. Nature, 150, 698-699.
Cole, S. (1979). Age and scientific performance. American Journal of Sociology, 84, 958-977.
Daly, M., & Wilson, M. (1988). Homicide. New York: De Gruyter.
Daly, M., & Wilson, M. (1990). Killing the competition: Female/female and male/male homicide. Human Nature, 1, 81-107.
Debus, A. G. (Ed.). (1968). World who's who in science: A biographical dictionary of notable scientists from antiquity to the present. Chicago: A. N. Marquis.
Ellis, L. (1998). Neodarwinian theories of violent criminality and antisocial behavior: Photographic evidence from nonhuman animals and a review of the literature. Aggression and Violent Behavior, 3, 61-110.
Gillispie, C. C. (Ed. in Chief). (1970-1980). Dictionary of scientific biography (16 vols.). New York: Charles Scribner's Sons.
Glueck, S., & Glueck, E. (1968). Delinquents and nondelinquents in perspective. Cambridge: Harvard University Press.
Hargens, L. L., McCann, J. C., & Reskin, B. F. (1978). Productivity and reproductivity: Fertility and professional achievement among research scientists. Social Forces, 57, 154-163.
Hirschi, T. (1969). Causes of delinquency. Berkeley: University of California Press.
Hirschi, T., & Gottfredson, M. (1983). Age and the explanation of crime. American Journal of Sociology, 89, 552-584.
Hirschi, T., & Gottfredson, M. R. (Eds.). (1994). The generality of deviance. New Brunswick: Transaction Publishers.
Kanazawa, S. (2001).
De gustibus est disputandum. Social Forces, 79, 1131-1163.
Kanazawa, S., & Still, M. C. (2000). Why men commit crimes (and why they desist). Sociological Theory, 18, 434-447.
Laub, J. H., Nagin, D. S., & Sampson, R. J. (1998). Trajectories of change in criminal offending: Good marriages and the desistance process. American Sociological Review, 63, 225-238.
Lehman, H. C. (1953). Age and achievement. Princeton: Princeton University Press.
Levin, S. G., & Stephan, P. E. (1991). Research productivity over the life cycle: Evidence for academic scientists. American Economic Review, 81, 114-132.
Mazur, A., & Michalek, J. (1998). Marriage, divorce, and male testosterone. Social Forces, 77, 315-330.
Miller, G. F. (1999). Sexual selection for cultural display. In R. Dunbar, C. Knight, & C. Power (Eds.), The evolution of culture: An interdisciplinary view (pp. 71-91). New Brunswick: Rutgers University Press.
Miller, G. F. (2000). The mating mind: How sexual choice shaped the evolution of human nature. New York: Doubleday.
Moffitt, T. E. (1993). Adolescence-limited and life-course-persistent antisocial behavior: A developmental taxonomy. Psychological Review, 100, 674-701.
Mukerjee, M. (1996). Explaining everything. Scientific American, 274, 88-94.
Porter, R. (Ed.). (1994). The biographical dictionary of scientists (2nd ed.). New York: Oxford University Press.
Poundstone, W. (1992). Prisoner's dilemma. New York: Anchor.
Sampson, R. J., & Laub, J. H. (1993). Crime in the making: Pathways and turning points through life. Cambridge: Harvard University Press.
Simonton, D. K. (1988). Age and outstanding achievement: What do we know after a century of research? Psychological Bulletin, 104, 251-267.
Simonton, D. K. (1997). Creative productivity: A predictive and explanatory model of career trajectories and landmarks. Psychological Review, 104, 66-89.
Singh, S. (1997).
Fermat's enigma: The epic quest to solve the world's greatest mathematical problem. New York: Anchor.

From HowlBloom at aol.com Thu May 19 06:13:11 2005
From: HowlBloom at aol.com (HowlBloom at aol.com)
Date: Thu, 19 May 2005 02:13:11 EDT
Subject: [Paleopsych] Pavel--Metaphor--plus Ted
Message-ID: <145.4593271c.2fbd8877@aol.com>

I've looked at George's site and, yes, he and you and I should talk. We are following entwined trains of thought. But our central team, in my opinion, remains you and me. A Russian trip, alas, will be rough because I have no funding for the trip and no way to pursue funding that I can think of. But here's my current view on what George, you, and I are trying to achieve. The math and the metaphors we are using are provisional. They're the best we have for now. If we have to use three metaphors simultaneously to get the feel for something as simple as light, so be it. If we have to use 20 metaphors to understand a cell, then let us use them all. Someday the metaphor will arrive that will encompass all the metaphors and math we use, and it will encompass all of them in a single vision. But our metaphors, our visions, depend on two things:

1) Metaphor depends on our technology. Computer metaphors were impossible until 1950. Now they are ordinary. Mandelbrot could not have worked out his fractals without the computers he had access to as an academic outcast, an employee of IBM in the 1970s. But thanks to those computers, George, you, and I now have fractals and strange attractors.

2) Metaphor depends on our understanding of ourselves, our cosmos, and our biology. Metaphor begets metaphor. New machines give us new visions of the "mechanisms" of things. New mechanisms give us new world views. Those new world-pictures give us new metaphors.

Right now I'm absorbed in the calculations made by our muscles with every step we take to keep us upright, defying gravity, and to move us another step forward without breaking our toes and the bones of our feet.
I suspect that these analog calculations can provide us with new understandings, new math, and new metaphors. Lawrence Berger, the sculptor, and I are working on this. He is working on it with his hands, by sculpting me in the process of thinking. I am working on it with my mind, trying to grasp the nature of thought and all that it has achieved with my limited computer, poetry, art, religion, and math metaphors. But, Pavel, I know as sure as sure can be that if we do not annihilate ourselves or drive ourselves into a new dark age, 200 years from now new metaphors will flow that will paste all of our scattered insights into a single ball and give new generations new tools to rebel and chafe against, new tools from which to build the basic steps to yet more metaphors.

This is Edward Witten, a professor of physics at the Institute for Advanced Study in Princeton, NJ, who has been called by Scientific American "probably the smartest man in the world." Witten made the following comments while being interviewed for the "On the Dark Side" episode of STEPHEN HAWKING'S UNIVERSE, which aired on PBS 11/03/97: "String Theory, as developed by the mid-eighties, was characterized by the fact that there were five theories we knew about. And that raised a rather curious question, that was always a little bit embarrassing. If one of those theories describes our universe, then who lives in the other four universes? We've come to understand that those five theories we've been studying are all parts of a bigger picture. In the last couple of years the picture has really changed to something which is called Duality. Duality is a relationship between two different theories which isn't obvious. If it's obvious you don't dignify it by the name duality. So, we have different pictures and it's not that one is correct and the other isn't correct; one of them is more useful for answering one set of questions, the other is more useful in other sets of questions.
And the power of theory comes largely from understanding that these different points of view, which sound like they're about different universes, actually work together in describing one model. So those theories turn out to all be one, so it's a big conceptual upheaval to understand that there's only one theory, which is uncanny in nature." In the bonds--Howard

In a message dated 5/18/2005 12:44:13 A.M. Pacific Standard Time, kurakin1970 at yandex.ru writes:

> >In a message dated 5/16/2005 12:43:51 A.M. Pacific Standard Time,
> >kurakin1970 at yandex.ru writes:
> >www.neuroquantology.com/
> >
> >It looks very interesting. I'd suspect that we have the seeds of some joint
> >pieces for them in our correspondence. What do you think?
> >
> >You know that I'm a quantum skeptic. I believe that our math is primitive.
> >The best math we've been able to conceive to get a handle on quantum
> >particles is probabilistic. Which means it's cloudy. It's filled with multiple
> >choices. But that's the problem of our math, not of the cosmos. With more
> >precise math I think we could make more precise predictions.
> >
> >And with far more flexible math, we could model large-scale things like
> >bio-molecules, big ones, genomes, proteins and their interactions. With a really
> >robust and mature math we could model thought and brains. But that math is
> >many centuries and many perceptual breakthroughs away.

Maybe yes and maybe no. Roger Penrose argues in "The Emperor's New Mind" that some physical processes may in principle be beyond the power of mathematics to describe. Everything concerning such un-computability is of special interest to George. He is now interested in DNA computations: http://www.keldysh.ru/departments/dpt_17/gmalin/pr.files/frame.htm These slides are in Russian, but the images speak for themselves. I hope that if you come to join me at QI-2005 in Zelenograd, we can discuss this at length. And coffee, lots of coffee at night.

>As mathematicians, we are still in the early stone age.
> >But what I've said above has a kink I've hidden from view. It implies that
> >there's a math that would model the cosmos in a totally deterministic way.
> >And life is not deterministic. We DO have free will. Free will means
> >multiple choices, doesn't it? And multiple choices are what the Copenhagen School's
> >probabilistic equations are all about?
> >
> >How could the concept of free will be right and the assumptions behind the
> >equations of Quantum Mechanics be wrong? Good question. Yet I'm certain that
> >we do have free will. And I'm certain that our current quantum concepts are
> >based on the primitive metaphors underlying our existing forms of math.
> >Which means there are other metaphors ahead of us that will make for a more
> >robust math and that will square free will with determinism in some radically new
> >way.
> >
> >Now the question is, what could those new metaphors be?
> >
> >I, by the way, have a theory about how free will works in the brain.
> >
> >Does this sound like something we could propose as a paper and something
> >that we could carry across the finish line by using the technique you've
> >invented for interlacing and taming the force of our two minds? The kurakin-bloom
> >email conversation technique?
> >Onward--Howard

----------
Howard Bloom
Author of The Lucifer Principle: A Scientific Expedition Into the Forces of History and Global Brain: The Evolution of Mass Mind From The Big Bang to the 21st Century
Visiting Scholar-Graduate Psychology Department, New York University; Core Faculty Member, The Graduate Institute
www.howardbloom.net
www.bigbangtango.net
Founder: International Paleopsychology Project; founding board member: Epic of Evolution Society; founding board member, The Darwin Project; founder: The Big Bang Tango Media Lab; member: New York Academy of Sciences, American Association for the Advancement of Science, American Psychological Society, Academy of Political Science, Human Behavior and Evolution Society, International Society for Human Ethology; advisory board member: Youthactivism.org; executive editor -- New Paradigm book series.
For information on The International Paleopsychology Project, see: www.paleopsych.org
For two chapters from The Lucifer Principle: A Scientific Expedition Into the Forces of History, see www.howardbloom.net/lucifer
For information on Global Brain: The Evolution of Mass Mind from the Big Bang to the 21st Century, see www.howardbloom.net

From kendulf at shaw.ca Thu May 19 16:13:49 2005
From: kendulf at shaw.ca (Val Geist)
Date: Thu, 19 May 2005 09:13:49 -0700
Subject: [Paleopsych] free wills and quantum won'ts
References: <7b.4568eb14.2fbabcc3@aol.com>
Message-ID: <00e701c55c8d$be3c1580$873e4346@yourjqn2mvdn7x>

Dear Howard, I am up to my ears in work, but am working towards "that book". I could not help noticing the discussion below, for "free will" has a few biological kinks of its own. One can make a fair case that "free will" is in good part a delusion and that we are far more programmed, far more tightly controlled by unconscious programs, than we care to admit.
We have no difficulty admitting that when, say, we walk, automatic systems do most of the work and we provide at best a little guidance. When walking up stairs we have spatial control so pat that our toes miss the steps by less than a centimeter. And it's easy to test for: raise one step by half a centimeter and watch everybody catch their toe and fall on their face. In decades past the physiologist Sherrington showed on decerebrated dogs that, without brain control, the dog, when tickled, guided its foot with great precision for a scratch. That is, there were complex programs of motion in the spinal cord, and we called them then reflexes. Studies of Sherrington's type showed a hierarchy of systems reaching from the spinal cord into the brain stem and, finally, on into the cerebral cortex. The great advance of Lorenzian ethology was to make us aware that large, innate behavioral programs controlled much of what we did, and from studies of large mammals it became very clear that we shared mega-systems of behavior with all mammals, with some of them leading on into reptiles, amphibians and fishes. My next book will probably be entitled "Condemned to Art", and one can show how hopelessly - and haplessly! - we are tied to blindly following internal esthetic programs (mostly adaptive!), that we make decisions based on these that defy reason, and are quite embarrassing when we, so to say, find out! It's embarrassing, but Paleopsychology at its best! It also fills one with apprehension to discover that we can duplicate tools used by people of our Cro-Magnon etc. lineage, but that we are stumped figuring out most tools used by Neanderthal! These physically quite different people thought (with their huge brains) so differently from us that we have great difficulty determining just how some of their tools were used, let alone have the physical strength to duplicate the wear patterns.
Somehow we can climb out of what in the past I called "Darwin's cage", that is, the innate structures that so readily channel our thoughts, but it's tough to determine just when we are climbing out! On another, parallel matter: in re-examining past and present evidence about our most ancient past, when Neanderthal and we formed two lineages, it becomes more and more evident that we, the moderns, were an insignificant side branch playing second fiddle to the powerful and ubiquitous Neanderthal, who always took the best for himself, forcing us to make do with what Neanderthal was not interested in. It's eerie how the pattern is falling out. Cheers, Val Geist

----- Original Message -----
From: HowlBloom at aol.com
To: paleopsych at paleopsych.org
Sent: Monday, May 16, 2005 8:19 PM
Subject: [Paleopsych] free wills and quantum won'ts

This is from a dialog Pavel Kurakin and I are having behind the scenes. I wanted to see what you all thought of it. Howard

You know that I'm a quantum skeptic. I believe that our math is primitive. The best math we've been able to conceive to get a handle on quantum particles is probabilistic. Which means it's cloudy. It's filled with multiple choices. But that's the problem of our math, not of the cosmos. With more precise math I think we could make more precise predictions. And with far more flexible math, we could model large-scale things like bio-molecules, big ones, genomes, proteins and their interactions. With a really robust and mature math we could model thought and brains. But that math is many centuries and many perceptual breakthroughs away. As mathematicians, we are still in the early stone age. But what I've said above has a kink I've hidden from view. It implies that there's a math that would model the cosmos in a totally deterministic way. And life is not deterministic. We DO have free will. Free will means multiple choices, doesn't it? And multiple choices are what the Copenhagen School's probabilistic equations are all about?
How could the concept of free will be right and the assumptions behind the equations of Quantum Mechanics be wrong? Good question. Yet I'm certain that we do have free will. And I'm certain that our current quantum concepts are based on the primitive metaphors underlying our existing forms of math. Which means there are other metaphors ahead of us that will make for a more robust math and that will square free will with determinism in some radically new way. Now the question is, what could those new metaphors be? Howard

_______________________________________________
paleopsych mailing list
paleopsych at paleopsych.org
http://lists.paleopsych.org/mailman/listinfo/paleopsych

From checker at panix.com Thu May 19 19:04:23 2005
From: checker at panix.com (Premise Checker)
Date: Thu, 19 May 2005 15:04:23 -0400 (EDT)
Subject: [Paleopsych] NYT: When Richer Weds Poorer, Money Isn't the Only Difference
Message-ID:

When Richer Weds Poorer, Money Isn't the Only Difference
Class Matters - Social Class and Marriage in the United States of America
http://www.nytimes.com/2005/05/19/national/class/MARRIAGE-FINAL.html
[Third of a series]
By TAMAR LEWIN

NORTHFIELD, Mass. - When Dan Croteau met Cate Woolner six years ago, he was selling cars at the Keene, N.H., Mitsubishi lot and she was pretending to be a customer, test driving a black Montero while she and her 11-year-old son, Jonah, waited for their car to be serviced. The test drive lasted an hour and a half. Jonah got to see how the vehicle performed in off-road mud puddles. And Mr. Croteau and Ms. Woolner hit it off so well that she later sent him a note, suggesting that if he was not involved with someone, not a Republican and not an alien life form, maybe they could meet for coffee. Mr.
Croteau dithered about the propriety of dating a customer, but when he finally responded, they talked on the phone from 10 p.m. to 5 a.m. They had a lot in common. Each had two failed marriages and two children. Both love dancing, motorcycles, Bob Dylan, bad puns, liberal politics and National Public Radio. But when they began dating, they found differences, too. The religious difference - he is Roman Catholic, she is Jewish - posed no problem. The real gap between them, both say, is more subtle: Mr. Croteau comes from the working class, and Ms. Woolner from money. Mr. Croteau, who will be 50 in June, grew up in Keene, an old mill town in southern New Hampshire. His father was a factory worker whose education ended at the eighth grade; his mother had some factory jobs, too. Mr. Croteau had a difficult childhood and quit school at 16. He then left home, joined the Navy and drifted through a long series of jobs without finding any real calling. He married his pregnant 19-year-old girlfriend and had two daughters, Lael and Maggie, by the time he was 24. "I was raised in a family where my grandma lived next door, my uncles lived on the next road over, my dad's two brothers lived next to each other, and I pretty much played with my cousins," he said. "The whole concept of life was that you should try to get a good job in the factory. My mother tried to encourage me. She'd say, 'Dan's bright; ask him a question.' But if I'd said I wanted to go to college, it would have been like saying I wanted to grow gills and breathe underwater." He always felt that the rich people in town, "the ones with their names on the buildings," as he put it, lived in another world. Ms. Woolner, 54, comes from that other world. The daughter of a doctor and a dancer, she grew up in a comfortable home in Hartsdale, N.Y., with the summer camps, vacations and college education that wealthy Westchester County families can take for granted. 
She was always uncomfortable with her money; when she came into a modest inheritance at 21, she ignored the monthly bank statements for several years, until she learned to channel her unease into philanthropy benefiting social causes. She was in her mid-30's and married to a psychotherapist when Isaac and Jonah were born. "My mother's father had a Rolls-Royce and a butler and a second home in Florida," Ms. Woolner said, "and from as far back as I can remember, I was always aware that I had more than other people, and I was uncomfortable about it because it didn't feel fair. When I was little, what I fixated on with my girlfriends was how I had more pajamas than they did. So when I'd go to birthday sleepovers, I'd always take them a pair of pajamas as a present." Marriages that cross class boundaries may not present as obvious a set of challenges as those that cross the lines of race or nationality. But in a quiet way, people who marry across class lines are also moving outside their comfort zones, into the uncharted territory of partners with a different level of wealth and education, and often, a different set of assumptions about things like manners, food, child-rearing, gift-giving and how to spend vacations. In cross-class marriages, one partner will usually have more money, more options and, almost inevitably, more power in the relationship. It is not possible to say how many cross-class marriages there are. But to the extent that education serves as a proxy for class, they seem to be declining. Even as more people marry across racial and religious lines, often to partners who match them closely in other respects, fewer are choosing partners with a different level of education. While most of those marriages used to involve men marrying women with less education, studies have found, lately that pattern has flipped, so that by 2000, the majority involved women, like Ms. Woolner, marrying men with less schooling - the combination most likely to end in divorce. 
"It's definitely more complicated, given the cultural scripts we've all grown up with," said Ms. Woolner, who has a master's degree in counseling and radiates a thoughtful sincerity. "We've all been taught it's supposed to be the man who has the money and the status and the power." Bias on Both Sides When he met Ms. Woolner, Mr. Croteau had recently stopped drinking and was looking to change his life. But when she told him, soon after they began dating, that she had money, it did not land as good news. "I wished she had waited a little," Mr. Croteau said. "When she told me, my first thought was, uh oh, this is a complication. From that moment I had to begin questioning my motivations. You don't want to feel like a gold digger. You have to tell yourself, here's this person that I love, and here's this quality that comes with the package. Cate's very generous, and she thinks a lot about what's fair and works very hard to level things out, but she also has a lot of baggage around that quality. She has all kinds of choices I don't have. And she does the lion's share of the decision-making." Before introducing Ms. Woolner to his family, Mr. Croteau warned them about her background. "I said, 'Mom, I want you to know Cate and her family are rich,' " he recalled. "And she said, 'Well, don't hold that against her; she's probably very nice anyway.' I thought that was amazing." There were biases on the other side too. Just last summer, Mr. Croteau said, when they were at Ms. Woolner's mother's house on Martha's Vineyard, his mother-in-law confessed to him that she had initially been embarrassed that he was a car salesman and worried that her daughter was taking him on as a kind of do-good project. Still, the relationship moved quickly. Mr. Croteau met Ms. Woolner in the fall of 1998 and moved into her comfortable home in Northfield the next spring, after meeting her condition that he sell his gun. Even before Mr. Croteau moved in, Ms. 
Woolner gave him money to buy a new car and pay off some debts. "I wanted to give him the money," she said. "I hadn't sweated it. I told him that this was money that had just come to me for being born into one class, while he was born into another class." And when he lost his job not long after, Ms. Woolner began paying him a monthly stipend - he sometimes refers to it as an allowance - that continued, at a smaller level, until last November, when she quit her longstanding job at a local antipoverty agency. She also agreed to pay for a $10,000 computer course that helped prepare him for his current job as a software analyst at the Cheshire Medical Center in Keene. From the beginning, the balance of power in the relationship was a sufficiently touchy issue that at Ms. Woolner's urging, a few months before their wedding in August 2001, they joined a series of workshops on cross-class relationships. "I had abject terror at the idea of the group," said Mr. Croteau, who is blunt and intellectually engaging. "It's certainly an upper-class luxury to pay to tell someone your troubles, and with all the problems in the world, it felt a little strange to sit around talking about your relationship. But it was useful. It was a relief to hear people talk about the same kinds of issues we were facing, about who had power in the relationship and how they used it. I think we would have made it anyway, but we would have had a rockier time without the group." It is still accepted truth within the household that Ms. Woolner's status has given her the upper hand in the marriage. At dinner one night, when her son Isaac said baldly, "I always think of my mom as having the power in the relationship," Mr. Croteau did not flinch. He is fully aware that in this relationship he is the one whose life has been most changed. Confusing Differences The Woolner-Croteau household is just up the hill from the groomed fields of Northfield Mount Hermon prep school - a constant local reminder to Mr. 
Croteau of just how differently his wife's sons and his daughters have been educated. Jonah is now a senior there. Isaac, who also attended the school, is now back at Lewis & Clark College in Oregon after taking a couple of semesters away to study in India and to attend massage school while working in a deli near home. By contrast, Mr. Croteau's adult daughters - who have never lived with the couple - made their way through the Keene public schools. "I sometimes think Jonah and Isaac need a dose of reality, that a couple years in public school would have shown them something different," Mr. Croteau said. "On the other hand I sometimes wish I'd been able to give Maggie and Lael what they had. My kids didn't have the same kind of privilege and the same kind of schools. They didn't have teachers concerned about their tender growing egos. It was catch-as-catch-can for them, and that still shows in their personalities." Mr. Croteau had another experience of Northfield Mount Hermon as well. He briefly had a job as its communications manager, but could not adjust to its culture. "There were all these Ivy Leaguers," he said. "I didn't understand their nuances, and I didn't make a single friend there. In working-class life, people tell you things directly, they're not subtle. At N.M.H., I didn't get how they did things. When a vendor didn't meet the deadline, I called and said, 'Where's the job?' When he said, 'We bumped you, we'll have it next week,' I said, 'What do you mean, next week? We have a deadline, you can't do business like that.' It got back to my supervisor, who came and said, 'We don't yell at vendors.' The idea seemed to be that there weren't deadlines in that world, just guidelines." Mr. Croteau says he is far more comfortable at the hospital. "I deal mostly with nurses and other computer nerds and they come from the same kind of world I do, so we know how to talk to each other," he said. But in dealing with Ms. 
Woolner's family, especially during the annual visits to Martha's Vineyard, Mr. Croteau said, he sometimes finds himself back in class bewilderment, feeling again that he does not get the nuances. "They're incredibly gracious to me, very well bred and very nice," he said, "so much so that it's hard to tell whether it's sincere, whether they really like you." Mr. Croteau still seems impressed by his wife's family, and their being among "the ones with their names on the buildings." It is he who shows a visitor the framed print of the old Woolner Distillery in Peoria, Ill., and, describing the pictures on the wall, mentions that this in-law went to Yale, and that one knew Gerald Ford. Family Divisions Mr. Croteau and Ms. Woolner are not the only ones aware of the class divide within the family; so are the two sets of children. Money is continually tight for Lael Croteau, 27, who is in graduate school in educational administration at the University of Vermont, and Maggie, 25, who is working three jobs while in her second year of law school at American University. At restaurants, they ask to have the leftovers wrapped to take home. Neither could imagine taking a semester off to try out massage school, as Isaac did. They are careful about their manners, their plans, their clothes. "Who's got money, who doesn't, it's always going on in my head," Maggie said. "So I put on the armor. I have the bag. I have the shirt. I know people can't tell my background by looking." The Croteau daughters are the only ones among 12 first cousins who made it to college. Most of the others married and had babies right after high school. "They see us as different, and sometimes that can hurt," Maggie said. The daughters walk a fine line. They are deeply attached to their mother, who did most of their rearing, but they are also attracted to the Woolner world and its possibilities.
Through holidays and Vineyard vacations, they have come to feel close not only to their stepbrothers, but also to Ms. Woolner's sisters' children, whose pictures are on display in Lael's house in Vermont. And they see, up close, just how different their upbringing was. "Jonah and Isaac don't have to worry about how they dress, or whether they'll have the money to finish college, or anything," Lael said. "That's a real luxury. And when one of the little kids asks, 'Why do people sneeze?' their mom will say, 'I don't know; that's a great question. Let's go to the museum, and check it out.' My mom is very smart and certainly engages us on many levels, but when we asked a difficult question, she'd say, 'Because I said so.' " The daughters' lives have been changed not only by Ms. Woolner's warm, stable presence, but also by her gifts of money for snow tires or books, the family vacations she pays for and her connections. One of Ms. Woolner's cousins, a Washington lawyer, employs Maggie both at her office and as a housesitter. For Ms. Woolner's sons, Mr. Croteau's arrival did not make nearly as much difference. They are mostly oblivious of the extended Croteau family, and have barely met the Croteau cousins, who are close to their age and live nearby but lead quite different lives. Indeed, in early February, while Ms. Woolner's Isaac was re-adjusting to college life, Mr. Croteau's nephew, another 20-year-old Isaac who had enlisted in the Marines right after high school, was shot in the face in Falluja, Iraq, and shipped to Bethesda Medical Center in Maryland. Isaac and Jonah are easygoing young men, neither of whom has any clear idea what he wants to do in life. "For a while I've been trying to find my passion," Jonah said. "But I haven't been passionately trying to find my passion." Isaac fantasizes about opening a brewery-cum-performance-space, traveling through South America or operating a sunset massage cruise in the Caribbean. 
He knows he is on such solid ground that he can afford fantasy. "I have the most amazing safety net a person could have," he said, "incredible, loving, involved and wealthy parents." On the rare occasions when they are all together, the daughters get on easily with the sons, though there are occasional tensions. Maggie would love to have a summer internship with a human rights group, but she needs paid work and when she graduates, with more than $100,000 of debt, she will need a law firm job, not one with a nonprofit. So when Isaac one day teased her as being a sellout, she reminded him that it was a lot easier to live your ideals when you did not need to make money to pay for them. And there are moments when the inequalities within the family are painfully obvious. "I do feel the awkwardness of helping Isaac buy a car, when I'm not helping them buy a car," Ms. Woolner said of the daughters. "We've talked about that. But I also have to be aware of overstepping. Their mother's house burned down, which was awful for them and for her and I really wanted to help. I took out my checkbook and I didn't know what was appropriate. In the end I wrote a $1,500 check. Emily Post doesn't deal with these situations." She and Mr. Croteau remain conscious of the class differences between them, and the ways in which their lives have been shaped by different experiences. On one visit to New York City, where Ms. Woolner's mother lives in the winter, Ms. Woolner lost her debit card and felt anxious about being disconnected, even briefly, from her money. For Mr. Croteau, it was a strange moment. "She had real discomfort, even though we were around the corner from her mother, and she had enough money to do anything we were likely to do, assuming she wasn't planning to buy a car or a diamond all of a sudden," he said. "So I didn't understand the problem. I know how to walk around without a safety net. I've done it all my life." 
Both he and his wife express pride that their marriage has withstood its particular problems and stresses. "I think we're always both amazed that we're working it out," Ms. Woolner said. But almost from the beginning they agreed on an approach to their relationship, a motto now engraved inside their wedding rings: "Press on regardless." From checker at panix.com Thu May 19 19:04:33 2005 From: checker at panix.com (Premise Checker) Date: Thu, 19 May 2005 15:04:33 -0400 (EDT) Subject: [Paleopsych] NYT: Up From the Holler: Living in Two Worlds, at Home in Neither Message-ID: Up From the Holler: Living in Two Worlds, at Home in Neither Class Matters - Social Class in the United States of America http://www.nytimes.com/2005/05/19/national/class/DELLA-FINAL.html [I'll count this as part of the third in a series. Next installment on Sunday] By [2]TAMAR LEWIN PIKEVILLE, Ky. - Della Mae Justice stands before the jury in the Pike County Courthouse, arguing that her client's land in Greasy Creek Hollow was illegally grabbed when the neighbors expanded their cemetery behind her home. With her soft Appalachian accent, Ms. Justice leaves no doubt that she is a local girl, steeped in the culture of the old family cemeteries that dot the mountains here in East Kentucky. "I grew up in a holler, I surely did," she tells jurors as she lays out the boundary conflict. Ms. Justice is, indeed, a product of the Appalachian coal-mining country where lush mountains flank rust-colored creeks, the hollows rising so steeply that there is barely room for a house on either side of the creeks. Her family was poor, living for several years in a house without indoor plumbing. Her father was absent; her older half-brother sometimes had to hunt squirrels for the family to eat. Her mother married again when Della was 9. But the stepfather, a truck driver, was frequently on the road, and her mother, who was mentally ill, often needed the young Della to care for her. Ms. 
Justice was always hungry for a taste of the world beyond the mountains. Right after high school, she left Pike County, making her way through college and law school, spending time in France, Scotland and Ireland, and beginning a high-powered legal career. In just a few years she moved up the ladder from rural poverty to the high-achieving circles of the middle class. Now, at 34, she is back home. But her journey has transformed her so thoroughly that she no longer fits in easily. Her change in status has left Ms. Justice a little off balance, seeing the world from two vantage points at the same time: the one she grew up in and the one she occupies now. Far more than people who remain in the social class they are born to, surrounded by others of the same background, Ms. Justice is sensitive to the cultural significance of the cars people drive, the food they serve at parties, where they go on vacation - all the little clues that indicate social status. By every conventional measure, Ms. Justice is now solidly middle class, but she is still trying to learn how to feel middle class. Almost every time she expresses an idea, or explains herself, she checks whether she is being understood, asking, "Does that make sense?" "I think class is everything, I really do," she said recently. "When you're poor and from a low socioeconomic group, you don't have a lot of choices in life. To me, being from an upper class is all about confidence. It's knowing you have choices, knowing you set the standards, knowing you have connections." Broken Ties In Pikeville, the site of the Hatfield-McCoy feud (Ms. Justice is a Hatfield), memories are long and family roots mean a lot. Despite her success, Ms. Justice worries about what people might remember about her, especially about the time when she was 15 and her life with her mother and stepfather imploded in violence, sending her into foster care for a wretched nine months. 
"I was always in the lowest socioeconomic group," she said, "but foster care ratcheted it down another notch. I hate that period of my life, when for nine months I was a child with no family." While she was in foster care, Ms. Justice lived in one end of a double-wide trailer, with the foster family on the other end. She slept alongside another foster child, who wet the bed, and every morning she chose her clothes from a box of hand-me-downs. She was finally rescued when her father heard about her situation and called his nephew, Joe Justice. Joe Justice was 35 years older than Della, a successful lawyer who lived in the other Pikeville, one of the well-to-do neighborhoods on the mountain ridges. He and his wife, Virginia, had just built a four-bedroom contemporary home, complete with a swimming pool, on Cedar Gap Ridge. Joe Justice had never even met his cousin until he saw her in the trailer, but afterward he told his wife that it was "abhorrent" for a close relative to be in foster care. While poverty is common around Pikeville, foster care is something much worse: a sundering of the family ties that count for so much. So Joe and Virginia Justice took Della Mae in. She changed schools, changed address - changed worlds, in effect - and moved into an octagonal bedroom downstairs from the Justices' 2-year-old son. "The shock of going to live in wealth, with Joe and Virginia, it was like Little Orphan Annie going to live with the Rockefellers," Ms. Justice said. "It was not easy. I was shy and socially inept. For the first time, I could have had the right clothes, but I didn't have any idea what the right clothes were. I didn't know much about the world, and I was always afraid of making a wrong move. When we had a school trip for chorus, we went to a restaurant. I ordered a club sandwich, but when it came with those toothpicks on either end, I didn't know how to eat it, so I just sat there, staring at it and starving, and said I didn't feel well."
Joe and Virginia Justice worried about Della Mae's social unease and her failure to mingle with other young people in their church. But they quickly sensed her intelligence and encouraged her to attend [3]Berea College, a small liberal arts institution in Kentucky that accepts students only from low-income families. Tuition is free and everybody works. For Ms. Justice, as for many other Berea students, the experience of being one among many poor people, all academically capable and encouraged to pursue big dreams, was life-altering. It was at Berea that Ms. Justice met the man who became her husband, Troy Price, the son of a tobacco farmer with a sixth-grade education. They married after graduation, and when Ms. Justice won a fellowship, the couple went to Europe for a year of independent travel and study. When Ms. Justice won a scholarship to [4]the University of Kentucky law school in Lexington, Mr. Price went with her, to graduate school in family studies. After graduating fifth in her law school class, Ms. Justice clerked for a federal judge, then joined Lexington's largest law firm, where she put in long hours in hopes of making partner. She and her husband bought a townhouse, took trips, ate in restaurants almost every night and spent many Sunday afternoons at real estate open houses in Lexington's elegant older neighborhoods. By all appearances, they were on the fast track. But Ms. Justice still felt like an outsider. Her co-editors on the law review, her fellow clerks at the court and her colleagues at the law firm all seemed to have a universe of information that had passed her by. She saw it in matters big and small - the casual references, to Che Guevara or Mount Vesuvius, that meant nothing to her; the food at dinner parties that she would not eat because it looked raw in the middle. "I couldn't play Trivial Pursuit, because I had no general knowledge of the world," she said. 
"And while I knew East Kentucky, they all knew a whole lot about Massachusetts and the Northeast. They all knew who was important, whose father was a federal judge. They never doubted that they had the right thing to say. They never worried about anything." Most of all, they all had connections that fed into a huge web of people with power. "Somehow, they all just knew each other," she said. Knitting a New Family Ms. Justice's life took an abrupt turn in 1999, when her half-brother, back in Pike County, called out of the blue to say that his children, Will and Anna Ratliff, who had been living with their mother, were in foster care. Ms. Justice and her brother had not been close, and she had met the children only once or twice, but the call was impossible to ignore. As her cousin Joe had years earlier, she found it intolerable to think of her flesh and blood in foster care. So over the next year, Della Mae Justice and her husband got custody of both children and went back to Pikeville, only 150 miles away but far removed from their life in Lexington. The move made all kinds of sense. Will and Anna, now 13 and 12, could stay in touch with their mother and father. Mr. Price got a better job, as executive director of Pikeville's new support center for abused children. Ms. Justice went to work for her cousin at [5]his law firm, where a flexible schedule allowed her to look after the two children. And yet for Ms. Justice the return to Pikeville has been almost as dislocating as moving out of foster care and into that octagonal bedroom all those years ago. On a rare visit recently to the hollows where she used to live, she was moved to tears when a neighbor came out, hugged her and told her how he used to pray and worry for her and how happy he was that she had done so well. But mostly, she winces when reminded of her past. "Last week, I picked up the phone in my office," she recalled, "and the woman said who she was, and then said, 'You don't remember me, do you?' 
And I said, 'Were you in foster care with me?' That was crazy. Why would I do that? It's not something I advertise, that I was in care." While most of her workweek is devoted to commercial law, Ms. Justice spends Mondays in family court, representing families with the kind of problems hers had. She bristles whenever she runs into any hint of class bias, or the presumption that poor people in homes heated by kerosene or without enough bedrooms cannot be good parents. "The norm is, people that are born with money have money, and people who weren't don't," she said recently. "I know that. I know that just to climb the three inches I have, which I've not gone very far, took all of my effort. I have worked hard since I was a kid and I've done nothing but work to try and pull myself out." The class a person is born into, she said, is the starting point on the continuum. "If your goal is to become, on a national scale, a very important person, you can't start way back on the continuum, because you have too much to make up in one lifetime. You have to make up the distance you can in your lifetime so that your kids can then make up the distance in their lifetime." Coming to Terms With Life Ms. Justice is still not fully at ease in the other, well-to-do Pikeville, and in many ways she and her husband had to start from scratch in finding a niche there. Church is where most people in town find friends and build their social life. But Ms. Justice and Mr. Price had trouble finding a church that was a comfortable fit; they went through five congregations, starting at the Baptist church she had attended as a child and ending up at the Disciples of Christ, an inclusive liberal church with many affluent members. The pastor and his wife, transplants to Kentucky, have become their closest friends. Others have come more slowly. "Partly the problem is that we're young, for middle-class people, to have kids as old as Will and Anna," Ms. Justice said. 
"And the fact that we're raising a niece and nephew, that's kind of a flag that we weren't always middle class, just like saying you went to Berea College tells everyone you were poor." And though in terms of her work Ms. Justice is now one of Pikeville's leading citizens, she is still troubled by the old doubts and insecurities. "My stomach's always in knots getting ready to go to a party, wondering if I'm wearing the right thing, if I'll know what to do," she said. "I'm always thinking: How does everybody else know that? How do they know how to act? Why do they all seem so at ease?" A lot of her energy now goes into Will and Anna. She wants to bring them up to have the middle-class ease that still eludes her. "Will and Anna know what it's like to be poor, and now we want them to be able to be just regular kids," she said. "When I was young, I always knew who were the kids at school with the involved parents that brought in the cookies, and those were the kids who got chosen for every special thing, not ones like me, who got free lunch and had to borrow clothes from their aunt if there was a chorus performance." Because Ms. Justice is self-conscious about her teeth - "the East Kentucky overbite," she says ruefully - she made sure early on that Anna got braces. She worries about the children's clothes as much as her own. "Everyone else seems to know when the khaki pants the boys need are on sale at J. C. Penney," she said. "I never know these things." As a child, Ms. Justice never had the resources for her homework projects. So when Anna was assigned to build a Navajo hogan, they headed to Wal-Mart for supplies. "We put in extra time, so she would appear like those kids with the involved parents," Ms. Justice said. "I know it's just a hogan, but making a project that looks like the other kids' projects is part of fitting in." Ms. 
Justice encouraged Will to join the Boy Scouts, and when he was invited to join his school's Academic Team, which competes in quiz bowls, she insisted that he try it. When he asked her whether he might become a drug addict if he took the medicine prescribed for him, she told him it was an excellent question, and at the doctor's office prompted him to ask the doctor directly. She nudges both children to talk about what happens in school, to recount the plots of the books they read and to discuss current events. It is this kind of guidance that distinguishes middle-class children from children of working-class and poor families, according to sociologists who have studied how social class affects child-rearing. While working-class parents usually teach their children, early on, to do what they are told without argument and to manage their own free time, middle-class parents tend to play an active role in shaping their children's activities, seeking out extracurricular activities to build their talents, and encouraging them to speak up and even to negotiate with authority figures. Ms. Justice's efforts are making a difference. Will found that he enjoyed Academic Team. Anna now gets evening phone calls from several friends. Both have begun to have occasional sleepovers. And gradually, Ms. Justice is coming to terms with her own life. On New Year's Eve, after years in a modest rented townhouse, she and her husband moved into a new house that reminds her of the Brady Bunch home. It has four bedrooms and a swimming pool. In a few years, when her older cousin retires, Ms. Justice will most likely take over the practice, a solid prospect, though far less lucrative, and less glamorous, than a partnership at her Lexington law firm. "I've worked very hard all my life - to have a life that's not so far from where I started out," she said. "It is different, but it's not the magical life I thought I'd get."
References
1. http://www.nytimes.com/services/xml/rss/nyt/National.xml
2. http://query.nytimes.com/search/query?ppds=bylL&v1=TAMAR%20LEWIN&fdq=19960101&td=sysdate&sort=newest&ac=TAMAR%20LEWIN&inline=nyt-per
3. http://www.berea.edu/
4. http://www.uky.edu/Law/
5. http://www.justicelawoffice.com/
From checker at panix.com Thu May 19 19:04:44 2005 From: checker at panix.com (Premise Checker) Date: Thu, 19 May 2005 15:04:44 -0400 (EDT) Subject: [Paleopsych] NYT: A Portrait of Class in America (7 Letters) Message-ID: A Portrait of Class in America (7 Letters) http://www.nytimes.com/2005/05/19/opinion/l19class.html To the Editor: Re "Class in America: Shadowy Lines That Still Divide" ("Class Matters" series, front page, May 15): I would hope that the reaction of my fellow Times readers to your series about the growing, hardening social class divisions in America would be not a fatalistic acceptance but a revived commitment to the ideal of a classless society. If we are to remain a democracy, we must make a determined national effort to reach a consensus on what truly constitutes the good life. We must strive to assure that not just a few but all American citizens can participate in it. Wealth, luxuries and possessions for their own sakes should not be the chief desiderata so much as adequate food, clothing and shelter; good medical care; a good education; a healthy, pleasant environment; our cherished constitutional freedoms; spiritually and intellectually rewarding work (as opposed to the soul-killing drudgery that is now the lot of many rich as well as poor Americans); and the capacity to recognize, enjoy and augment the best things that our civilization has to offer. Philip Walker Santa Barbara, Calif., May 16, 2005 To the Editor: On May 15, The New York Times shattered the American Dream. The rich are getting richer. But those streets paved with gold are turning out to be full of potholes. And the poor are being left behind. Why? Because our educational system is based on inequality of income.
School districts for rich folks provide a better education than other school districts. And so, as you report, our kids' economic backgrounds are a better indicator of school performance in the United States than in Denmark, the Netherlands or France. The solution? Cut the military budget and put more money into our schools. The alternative is a third-world America. David E. Moore Rye, N.Y., May 17, 2005 To the Editor: Class is not as important socially or culturally as it once was, and almost all levels of American society can lead a relatively decent quality of life. But there is one distinct difference between the upper and lower classes in this country. The upper classes have the savings and thus financial security to weather health crises, retirement and periods of unemployment or low earnings. As a fair society, we need to fully finance programs like Social Security and Medicare that help close this one lasting class division. Dylan Lushing Malibu, Calif., May 15, 2005 To the Editor: You write about how upward mobility has declined since the 1970's. The article talks about the increased need for education and the inherited advantages and other factors without which it has become more difficult to move up the economic ladder. The article doesn't mention one of the biggest developments in that period. During the last 35 years, the "war on poverty," with its wide array of programs, was put into effect and expanded. Liberals claimed that the programs would help the poor. Opponents of the programs claimed that government assistance would stifle economic opportunity by creating a dependent underclass and burdening the middle class with heavy taxes. Based on your series, it appears that the conservatives were right. 
Frayda Levin Lodi, N.J., May 17, 2005 To the Editor: "Life at the Top in America Isn't Just Better, It's Longer" ("Class Matters" series, front page, May 16) reveals the true inequities in our "top notch" health care system with the comparative experiences of a heart attack for three different social classes of citizens. We spend more per capita than any other country for our health services, get less in outcome than any other country, and still extol the virtues of our system. It is our own caste system, and this kind of gold standard no one should emulate. Our system is innovative in diagnosing and designing, but it is also costly, unfair and unstable. We can do better. Carole Ferguson Lexington, Mass., May 16, 2005 The writer is a pediatric nurse practitioner. To the Editor: Contrary to the implication of "Life at the Top Isn't Just Better, It's Longer," it is not news that wealthier Americans get more and better health care than lower income people. They also drive safer cars, eat healthier food and live in safer and less polluted neighborhoods. What is newsworthy (and underreported), however, is that with risk-pooling and undifferentiated benefit packages, everyone pays very similar health insurance premiums yet only the better-off enjoy gold-plated care. You offer some helpful explanations why lower income patients get less out of their health coverage. Another is that out-of-pocket co-payments deter the poor, not the rich, from using their jointly bought insurance to the fullest. Though it might be debatable whether inequality in the distribution of health care is unjust, a system that forces the maid and the utility worker to subsidize the architect's health care unquestionably is. Barak Richman Durham, N.C., May 18, 2005 To the Editor: As a cardiologist practicing in Canada since 1985, I read with interest your May 16 front-page article about the variability in the treatment of heart attacks in the United States depending on the patient's social status. 
The Canadian health care system may not be perfect. But any patient, regardless of social status, wealth or ability to pay, will have an angioplasty if required during a heart attack if he consults a doctor in a major city where that service is available. The only criterion for a treatment decision is the severity of the disease, not the size of the wallet. Patients of lower socioeconomic status, in Canada as in the United States, may have a shorter life expectancy because of their lifestyle, but not because of any difference in treatment during an acute problem. I am proud of our health care system. Michel Jarry, M.D. Montreal, May 16, 2005 From checker at panix.com Thu May 19 19:06:03 2005 From: checker at panix.com (Premise Checker) Date: Thu, 19 May 2005 15:06:03 -0400 (EDT) Subject: [Paleopsych] LRC: Global Battle Erupts Over Vitamin Supplements by Bill Sardi Message-ID: Global Battle Erupts Over Vitamin Supplements by Bill Sardi http://www.lewrockwell.com/sardi/sardi37.html 5.5.16 In an unprecedented action, the World Health Organization (WHO), the United Nations Children's Fund (UNICEF), and an AIDS activist group that promotes drug therapy in South Africa joined forces in opposing vitamin therapy that exceeds the Recommended Daily Allowance (RDA), and in particular vitamin C in doses they describe as being "far beyond safe levels." These health agencies suggest nutrients primarily be obtained from the diet and warn that supplemental doses of vitamin C that exceed a 2000 milligram per day upper limit could cause side effects such as diarrhea. The AIDS activist group also suggests patients receiving doses beyond the RDA should undergo proper counseling and informed consent before being placed on high-dose vitamin C. As outrageous as these statements sound, they burst into public view recently with an ongoing battle between Dr. Matthias Rath, a former Linus Pauling researcher, and The Treatment Action Campaign in South Africa.
The public battle ensued after Dr. Rath published a full-page ad in the New York Times and the International Herald Tribune advocating vitamin therapy over anti-AIDS drug therapy. Coinciding with these full-page newspaper ads is a legal battle underway in South Africa where The Treatment Action Campaign seeks to censor statements made by Dr. Rath. Dr. Rath cites a study by Harvard Medical School researchers that showed dietary supplements slowed the progression of AIDS and resulted in a significant decline in viral count. [New England Journal of Medicine 351: 23-32, 2004] Harvard researchers responded by saying vitamin therapy is important but may not replace anti-viral drug therapy. Diet promoted over supplements UNICEF and WHO advocate a balanced diet rather than supplements despite the fact that AIDS patients have nutritional needs that exceed what the best diet can provide. AIDS patients often exhibit nutrient deficiencies due to malabsorption or diarrhea. Vitamin E, one of the supplemental nutrients provided in a cocktail developed by Dr. Rath for AIDS patients, is known to reduce the incidence of diarrhea. [STEP Perspectives 7:2-5, 1995] RDA for vitamin C is bogus Furthermore, the RDA for vitamin C established by the National Institutes of Health (NIH), referred to by the Treatment Action Campaign, was established using testing methods that have been proven to be inaccurate. A study published last year in the Annals of Internal Medicine by NIH scientists clearly shows much higher vitamin C levels can be achieved with oral dosing than previously thought possible. [Annals Internal Medicine 140:533-7, 2004]. Twelve noted antioxidant researchers have petitioned the Food & Nutrition Board to review the RDA for vitamin C now that it is apparent the RDA is based upon flawed research. [9]Steve Hickey Ph.D. and Hilary Roberts, pharmacology graduates of Manchester University, have authoritatively outlined the flaws in the current RDA for vitamin C.
Furthermore, the RDA was established for healthy people and does not apply to patients with a serious infectious disease such as AIDS. Health groups tip their hand This battle over vitamin supplements may be a foretaste of what will happen later this year when a worldwide body called Codex Alimentarius will meet to establish upper limits on vitamin and mineral supplements. Codex is governed under the auspices of the United Nations and World Health Organization. These health organizations are revealing their partiality for drugs over nutritional supplements. For example, Codex may establish a 2000 mg upper limit for vitamin C as previously proposed by the National Academy of Sciences, or as low as 225 mg, which was recently established by German health authorities. Controlled studies do not support the use of either number. Dr. Rath is reported to recommend 4000 milligrams of daily vitamin C for AIDS patients. The amount of oral vitamin C that a patient can tolerate without diarrhea increases proportionately to the severity of their disease. [Med Hypotheses 18:61-77, 1985] AIDS patients often don't exhibit any diarrhea with extremely high-dose vitamin C therapy. Diarrhea may occur among healthy individuals following high-dose vitamin C therapy depending upon how much vitamin C is consumed at a single point in time. Divided doses taken throughout the day minimize this problem. Huckster or helper? Dr. Rath, a renowned vitamin researcher who described a vitamin C cure for heart disease and cancer in 1990 in collaboration with Nobel prize winner Linus Pauling [Proc Natl Academy Sciences 87:9388-90, 1990], is characterized as a "wealthy vitamin salesman" by the Treatment Action Campaign in South Africa. Rath's vitamin company is providing free vitamin therapy for AIDS victims in South Africa.
Anti-AIDS drug therapy failing World health organizations appear to be solely backing AIDS drug therapy at a time when a highly drug-resistant strain of HIV that quickly progresses to AIDS has been reported in New York [AIDS Alert 20: 39-40, 2005], and drug resistance is a growing problem [Top HIV Medicine 13: 51-57, 2003]. It's only a matter of time till all current anti-AIDS drugs fail. Of particular interest is selenium, a trace mineral included in Dr. Rath's anti-AIDS vitamin regimen, which appears to slow progression of the disease. Researchers report HIV infection has spread more rapidly in Sub-Saharan Africa than in North America primarily because Africans have low dietary intake of selenium compared to North Americans. [Medical Hypotheses 60: 611-14, 2003] Selenium appears to be a key nutrient in counteracting certain viruses, and HIV infection progresses more slowly to AIDS among selenium-sufficient individuals [Proceedings Nutrition Society 61: 203-15, 2002]. The strong reaction by world health organizations against vitamin supplements causes one to wonder if they are afraid vitamin therapy will actually prove to be a viable alternative to AIDS drug therapy. Bill Sardi [[10]send him mail] is a consumer advocate and health journalist, writing from San Dimas, California. He offers a free downloadable book, The Collapse of Conventional Medicine, at [11]his website. [13]Bill Sardi Archives References 9. http://www.lulu.com/Ascorbate 10. mailto:BSardi at aol.com 11. http://www.askbillsardi.com/ 13.
http://www.lewrockwell.com/sardi/sardi-arch.html From checker at panix.com Thu May 19 19:09:42 2005 From: checker at panix.com (Premise Checker) Date: Thu, 19 May 2005 15:09:42 -0400 (EDT) Subject: [Paleopsych] BH: Sports Enhancement's Biggest Fan Message-ID: Sports Enhancement's Biggest Fan http://beta.betterhumans.com/Columns/Column/tabid/79/Column/320/Default.aspx?SkinSrc=%5bG%5dSkins%7c_default%7cNo+Skin&ContainerSrc=%5bG%5dContainers%7c_default%7cNo+Container&dnnprintmode=true&mid=501 Andy Miah promotes genetic modification of athletes not only to improve competition, but also to improve humanity 5.4.4 Baseball's drug scandal is so 20th century. While former slugger [5]Mark McGwire pleads the fifth over steroids, today's athletes plead with researchers for genetic tweaks. It's widely assumed they'll have them by the 2008 Olympics--[6]if not sooner. Anti-doping agencies have reacted predictably, with the World Anti-Doping Agency (WADA) banning "gene doping" in 2003. But the advent of genetic enhancement has also provided a crowbar for prying open debate over enhancement prohibition, genetic and otherwise. At the forefront of this debate is ethicist [7]Andy Miah, author of the book [8]Genetically Modified Athletes. Against the backdrop of baseball's steroid brouhaha, Miah recently landed in Toronto where he made the case to a cramped room in the University of Toronto's Athletic Centre. To Miah, there's far more than medals at stake. How we handle genetic enhancement in sports, he argues, will affect how we handle genetic enhancements in general. "The gene doping debate is about what kinds of humans count," he says. "Sports offer a way for enhancements to become embedded in society." Athletic hypocrisy At just over five feet with spiky black hair and goatee, the baby-faced Miah could easily be mistaken for a student at the Starbucks where we met a day before his talk.
(Confession: at about the same height and baby-facedness, the same could be said of me--and I can't even grow a goatee.) As a professor at the University of Paisley in Scotland, however, Miah teaches such courses as "Becoming Posthuman" and writes regularly on cyberculture, bioethics, sports and genetic enhancement. His writing has gained urgency and notoriety as genetic enhancement moves from science fiction to fact. Scientific journals now regularly report genetic advances for which it takes little imagination to see athletic application. Perhaps most famously, geneticist [9]Lee Sweeney at the University of Pennsylvania discovered that injecting the gene for a growth factor called IGF-1 doubles muscle strength in rats. Since the discovery, Sweeney says he's been swamped by requests from athletes seeking to participate in human trials. And why shouldn't they? asks Miah. The idea of the "natural athlete," he says, is a hypocrisy, as sport--from the latest greatest running shoes to stamina-boosting altitude chambers--is inherently technological. Furthermore, he says, anti-doping actions only provide an illusion of fairness. "Those athletes we pin gold medals on are the ones who avoid the drug tests," he says. "We can't test for everything." With genetic enhancement, testing will be even more difficult, as athletes will have their DNA altered to improve performance rather than use detectable drugs. Encouraging public acceptance This--the practical impossibility of enforcing gene doping bans--is just one strike against prohibition. Miah raises many more issues. For example: Would people born with genetic modifications be allowed to play sports? Should gene therapy be used to equalize genetic constitutions so that competition is based more on skill and training than the genetic lottery? Would gene therapies that help athletes hasten healing also be banned under blanket prohibition? Such questioning puts Miah in good company.
U of T Faculty of Physical Education and Health Dean Bruce Kidd, for example, agrees with Miah that the ethical foundation of anti-doping is in need of review. Miah says this hasn't happened in about 40 years. "I would say ever," says Kidd, although he supports anti-doping initiatives. Moreover, Kidd supports equality in sports. The question for people on both sides of the anti-doping debate is how best to achieve this. Miah argues that genetic enhancement is one way. He believes that genetic enhancement could generally make people more capable, and he worries that sporting bodies will exert too strong an influence over its future. Ultimately, he hopes that athletics can make genetic enhancement more acceptable to the public, while baseball's current predicament shows this hasn't happened with drugs. "Drug-using athletes are represented as monsters, mutants," says Miah. "To me this is about enhancing humanity." References 5. http://abcnews.go.com/Sports/wireStory?id=592366 6. http://www.betterhumans.com/Features/Reports/report.aspx?articleID=2004-08-09-1 7. http://www.andymiah.net/ 8. http://www.gmathletes.net/ 9. http://www.med.upenn.edu/physiol/fac/sweeney.shtml From checker at panix.com Thu May 19 19:10:12 2005 From: checker at panix.com (Premise Checker) Date: Thu, 19 May 2005 15:10:12 -0400 (EDT) Subject: [Paleopsych] AP: No Wrong Answer: Click It Message-ID: Wired News: No Wrong Answer: Click It http://www.wired.com/news/print/0,1294,67530,00.html Associated Press 10:42 AM May. 14, 2005 PT PROVIDENCE, Rhode Island -- Professor Ross Cheit put it to the students in his Ethics and Public Policy class at Brown University: Are you morally obliged to report cheating if you know about it? The room began to hum, but no one so much as raised a hand. Still, within 90 seconds, Cheit had roughly 150 student responses displayed on an overhead screen, plotted as a multicolored bar graph -- 64 percent said yes, 35 percent, no. 
Several times each class, Cheit's students answer his questions using handheld wireless devices that resemble television remote controls. The devices, which the students call "clickers," are being used on hundreds of college campuses and are even finding their way into grade schools. They alter classroom dynamics, engaging students in large, impersonal lecture halls with the power of mass feedback. Clickers ease fears of giving a wrong answer in front of peers, or of expressing unpopular opinions. "I use it to take their pulse," Cheit said. "I've often found in that setting, you find yourself thinking, 'Well, what are they thinking?'" In hard science classes, the clickers -- most of which allow several possible responses -- are often used to gauge student comprehension of course material. Cheit tends to use them to solicit students' opinions. The clickers are an effective tool for spurring conversation, for getting a feel for what other students think, said Megan Schmidt, a freshman from New York City. "It forces you to be active in the discussion because you are forced to make a decision right off the bat," said Jonathan Magaziner, a sophomore in Cheit's class. Cheit prepares most questions in advance but can add questions on the fly if need be. His setup processes student responses through infrared receivers that are connected to a laptop computer. Clickers increased class participation and improved attendance after Stephen Bradforth, a professor at the University of Southern California, introduced them to an honors chemistry class there last fall, he said. Bradforth uses the clickers to get a sense of whether students are grasping the material and finds that they compel professors to think about their lesson plans differently. He says it's too early to say whether students who used the clickers are doing better on standardized tests. 
Eric Mazur, a Harvard University physics professor and proponent of interactive teaching, says clickers aren't essential but they are more efficient and make participation easier for shy students. Many colleges already use technology that allows teachers and students to interact more easily outside the classroom. For example, professors can now post lecture notes, quizzes and reading lists online. Several companies market software, such as Blackboard and Web CT, that provides ready-made course web pages and other course management tools. Mazur envisions students someday using their laptops, cell phones or other internet-ready devices for more interactivity than clickers offer. At least one company, Option Technologies Interactive, based in Orlando, Florida, markets software that allows any student with a handheld wireless device or laptop to log onto a website and answer questions, just as they would with a clicker. For now, the clicker systems appear to be selling. Two companies that make the systems say their technologies are each in use on more than 600 university campuses worldwide. Some textbook publishers are even writing questions designed to be answered by clicker, and packaging the devices with their books. Versions of clickers have been available since the 1980s, but in the past six years several more have entered the market and advances in technology have made them both cheaper and more sophisticated. Most universities that use clickers require students to buy them, although at Brown they're loaned through the library. Made by companies including GTCO CalComp of Maryland, eInstruction Corp. out of Texas and Hyper Interactive Teaching Technology from Arkansas, the devices cost about $30. The clickers communicate with receivers by infrared or radio signals, which feed the results to the teacher's computer. Software allows the students' responses to be recorded, analyzed and graphed.
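The aggregation step the article describes is simple to sketch. The following is a hypothetical illustration of tallying multiple-choice responses and rendering a text bar graph, not any vendor's actual software; the function names and the 96/54 split (which reproduces roughly the 64 percent yes result from Cheit's class of about 150) are assumptions for the example:

```python
from collections import Counter

def tally(responses):
    """Aggregate raw clicker responses into whole-number percentages,
    as the instructor's software might before graphing them."""
    counts = Counter(responses)
    total = len(responses)
    return {choice: round(100 * n / total) for choice, n in counts.items()}

def bar_graph(percentages, width=50):
    """Render the class's answers as a simple text bar graph."""
    lines = []
    for choice in sorted(percentages):
        pct = percentages[choice]
        bar = "#" * (pct * width // 100)
        lines.append(f"{choice}: {bar} {pct}%")
    return "\n".join(lines)

# 150 students answering a yes/no ethics question
responses = ["yes"] * 96 + ["no"] * 54
print(bar_graph(tally(responses)))
```

In a real system the `responses` list would be filled by the infrared or radio receivers as each handset reports its keypress; everything after that point is ordinary counting and display.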
While each company offers slightly different features, the systems typically allow instructors to display the class's results as a whole, or to record each student's individual response. The clickers themselves vary among companies but generally allow students to respond to multiple choice questions or key in a numeric answer. The clickers can also be used to give quizzes that can be graded automatically and entered in a computerized gradebook, saving professors time. But several professors said they have avoided that so students will see the handheld devices as positive, rather than punitive. At the college level, the devices originally took hold in science classes, but they are finding their way into the social sciences and humanities, where the anonymity they offer may be an advantage. Cheit said that's especially true when it comes to sensitive topics, such as affirmative action. "People that are against it will click," Cheit said, "but they might not raise their hand and say it." From checker at panix.com Thu May 19 19:10:44 2005 From: checker at panix.com (Premise Checker) Date: Thu, 19 May 2005 15:10:44 -0400 (EDT) Subject: [Paleopsych] The Engineer Online: Robot swarms cloud danger Message-ID: Robot swarms cloud danger http://83.219.63.174/Articles/290822/Robot+swarms+cloud+danger.htm 5.5.18 Industry Channel: [5]Military & Defence Source: The Engineer Online Engineers at the [7]University of Pennsylvania have received a $5 million grant from the [8]US Department of Defense to develop large-scale "swarms" of robots that could work together to thoroughly search large areas from the ground and sky. The Scalable Swarms of Autonomous Robots and Sensors, or the Swarms Project as it is known, takes organisational cues from the natural world, where tens or even hundreds of small, independent robots work together to accomplish specific tasks, such as finding a bomb in a crowded city.
Penn's General Robotics, Automation, Sensing and Perception (GRASP) Laboratory will receive the five-year grant from the federal government under the Defense Department's Multidisciplinary University Research Initiative program. The Swarms project is based upon the success of the GRASP Lab's smaller-scale Multiple Autonomous Robotics (MARS) project, which managed the movement and behaviour of about a dozen robots. "Our objective here is to develop the software framework and tools for a new generation of autonomous robots, ultimately to the point where an operator can supervise an immense swarm of small robots through unfamiliar terrain," said Vijay Kumar, director of the GRASP Lab at Penn's School of Engineering and Applied Science and principal investigator of the Swarms Project. "There is an obvious military application, to be sure, but the same principles apply whether you are looking for a terrorist in an urban environment or localising the source of a chemical spill in a city." While MARS demonstrated the feasibility of such a program, the Swarms Project will take the complexity involved to a new level. To get a better grasp of swarming behaviour, Kumar and his colleagues are looking to the natural world for inspiration. In biology, swarming behaviours arise whenever there are large numbers of individuals that lack either the communication or computational capabilities required for centralised control. The Swarms Project brings together a cross-disciplinary team of researchers with expertise in artificial intelligence, control theory, robotics, systems engineering and biology. They will take cues from the sort of group behaviours that appear in beehives, ant colonies, wolf packs, bird flocks and fish schools. But the GRASP researchers are also working with molecular and cell biologists interested in the complicated signalling processes and group behaviours that go on inside and among cells. 
"There are a number of interesting behaviours seen in the natural world that we'd like to incorporate, at least analogously," Kumar said. "We might want to see the stalking behaviour of a wolf pack, the searching behaviour of ants or honeybees or the quorum-sensing behaviour of bacteria. "In fact, much like ants or bees, these robots will be rather dumb individually, but collectively they'll be capable of performing very complicated tasks." While the GRASP engineers are not attempting to recreate biology, they are striving to understand what general principles in biological behaviour might be useful in getting robots to think as a group. Eventually, Kumar and his colleagues will demonstrate their biologically inspired algorithms on practical vehicle platforms, such as the robot blimps, unmanned aerial vehicles and the small "clodbuster" four-wheeled robots already in use at GRASP. "The MARS project was really about getting robots to interact in a physical space, to see their world and react to the obstacles around them," Kumar said. "With the Swarms Project, we are going beyond the orbit of MARS in that we are getting robots to talk amongst themselves about their image of the world around them." Footage of robots in recent MARS program tests can be seen [9]here. Additional information on the Swarms Project is available [10]here. References 5. http://83.219.63.174/Channels/Default.aspx?liChannelID=10&liSlotID=115 7. http://www.upenn.edu/ 8. http://www.defenselink.mil/ 9. http://www.cis.upenn.edu/mars/site/multimedia.htm#movies 10. http://www.swarms.org/ From Euterpel66 at aol.com Thu May 19 23:35:10 2005 From: Euterpel66 at aol.com (Euterpel66 at aol.com) Date: Thu, 19 May 2005 19:35:10 EDT Subject: [Paleopsych] Prevention: Do You Have the Silent Syndrome? How boredom aff...
Message-ID: <1a0.342b59e2.2fbe7cae@aol.com> In a message dated 5/17/2005 10:51:48 A.M. Eastern Daylight Time, checker at panix.com writes: Top 10 Boring Tasks(*) 1. Standing in line 2. Laundry 3. Commuting 4. Meetings 5. Dieting 6. Exercising 7. Weeding lawn or garden 8. Housework 9. Political debates 10. Opening junk mail When I looked at this list, I was confused. It seems that the titles of the activities are to blame. 1. Standing in line is a great way to meet new people, or just listening to a new point of view. 2. Hanging out laundry on the line is a great way to be outdoors and soak in the vitamin D (and putting my body on line-dried sheets--exquisite!) 3. Commuting is like playing a video game. I see that slot up there where I can fit my car. 4. I looove meetings. Maybe because I don't go to many, but again, a great place to listen and learn and communicate. 5. Dieting, what's that? Sounds like eating to me. 6. Exercising is one of my most favorite things to do. It is a time to mentally zone out, and let my brain go where it will. 7. Weeding? How does my garden grow? Like laundry---being outside, clearing space for the things I have planted. Sounds like raising kids. 8. House work is like weeding. The results make it all worth while. 9. OK, one out of ten ain't bad. 10. Who OPENS junk mail? Methinks ennui is for bores. They manufacture it and then wallow in it. Lorraine Rice Believe those who are seeking the truth. Doubt those who find it. ---Andre Gide http://hometown.aol.com/euterpel66/myhomepage/poetry.html -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From checker at panix.com Fri May 20 01:03:17 2005 From: checker at panix.com (Premise Checker) Date: Thu, 19 May 2005 21:03:17 -0400 (EDT) Subject: [Paleopsych] Meme 041: A Complacent AntiRacist Message-ID: Meme 041: A Complacent AntiRacist by Frank Forman sent 5.5.19 I grew up on the myth that all scientific results had been confirmed in independent laboratories before being published. Cost alone, of course, precludes any such thing, but controversial results do attract further examination. Cold fusion is an excellent example: while the initial experiments of Fleischmann and Pons are widely rejected as flawed, several scientists have conducted later experiments and think there may be something out there, though they don't agree on what it might be. On the other hand, there are a number of experiments that back up what a lot of people would like to believe (but others would not) and that are never followed up, yet get endlessly cited. These results stand alone. We should develop a list of them. One is the Pygmalion Effect, where elementary school children were randomly assigned to high and low IQ groups but with the teachers knowing their supposed IQs. The unduplicated result was that children assigned to the high IQ group vastly outperformed those assigned to the low IQ group. The book is _Pygmalion in the Classroom: Teacher Expectation and Pupils' Intellectual Development_ by Robert Rosenthal and Lenore Jacobson and came out in 1968, the original experiment having been done a few years earlier. Another unduplicated experiment was made by measuring the penis arousal to erotic pictures, wherein "homophobes" responded more to pictures of homosexual acts than the rest. (I can give no citation for this, perhaps because there never was such a study.) And another is Daniel Keren, Jamie McCarthy, and Harry W.
Mazal, "The Ruins of the Gas Chambers: A Forensic Investigation of Crematoriums at Auschwitz I and Auschwitz-Birkenau," _Holocaust and Genocide Studies_ 18.1 (2004): 68-103. I know of no other article that remotely approaches this one in making a scientific case for Nazi gas chambers and welcome citations to other articles. In the article below, the unduplicated study is that of the IQs of children born to white German women and occupying soldiers in Germany after the end of WW II. The study found almost no difference between the children of white and black fathers. The study is Klaus Eyferth, "Leistungen verschiedener Gruppen von Besatzungskindern in Hamburg-Wechsler Intelligenztest für Kinder (HAWIK)," Archiv für die gesamte Psychologie 113 (1961): 222-41. I googled the author's name and found out that he is now an emeritus professor and had studied cognitive modeling and similar such technical (though by no means unimportant) specialties, but nothing so sensational as this. One site claims that the article was published in Vita Humana (now Human Development) in 1959, which would place it before the German publication and so is dubious. All these journals do exist, and maybe the article is genuine, but it is odd that it has not been anthologized in English repeatedly and, indeed, not been more than fleetingly discussed. And, odd also that similar studies have not been conducted or at least not reported, which can happen if studies give unwanted results. I charge complacency here. I know no one who has a material interest in differences in innate racial cognitive abilities.
On the other hand, there is a potentially rich interest we all have in there being differences in approaches to the world and in temperament that are rooted in genetic differences, which are *presumably* less difficult to eliminate than those rooted in local historical circumstance (but reordering whole societies has proven to be very expensive in terms of unintended consequences, while fiddling with the genome may become very cheap). Those who think differently from white, middle-class Americans could offer novel approaches and solutions to problems, and these abilities will *not* get steamrollered away as the world becomes more and more Americanized. Alas, when I inquire at gatherings to celebrate diversity about the concrete benefits of this diversity, all I get is ethnic cooking (but the Chinese are buying up Mexican restaurants in New York City and Mongolian Barbecue here has nary a Mongolian in sight. It is now opening a branch in Mongolia itself! See http://www.bdsmongolianbarbeque.com .) or ethnic folk dancing and folktales. Now, it's been alleged that "racist" interests are served by keeping the black man down and exploiting him, but this goes against the logic of free market economics. On the other hand, it's quite easy to see material interests in fostering the belief of equality of innate racial mental capacities: it gives resources to educators and other social planners. My charge is that these educators and planners, while operating out of their own idealistic visions at first, became dominated by those who just want to hold on to jobs. In other words, they became complacent.
So let's not undertake any study to factor out genetics and BREAK UP THE VARIOUS ENVIRONMENTAL FACTORS INTO THEIR COMPONENTS. It may turn out that schooling is much less important than nutrition or removing lead paint. It is far, far safer to moan about bad schooling, bad nutrition, lead paint, etc., at equal volumes and to forestall any study that would break down the causes of racial differences in achievement into environmental components, hereditary components, and (let us not forget) free will, though the latter is elusive, at least as far as multiple regression studies go. You can claim that none of the independent variables (like IQ) means anything and that the populations studied (races) are meaningless aggregates, and (this never happens) that the dependent variable (money) is also meaningless. But you remain complacent about any actual results. The article below is the first one I have been able to locate to cite the J. Philippe Rushton and Arthur Jensen, "Thirty Years of Research on Race Differences in Cognitive Ability," Psychology, Public Policy, and Law (2005 July), but the author said the article was forthcoming and evidently had a preview copy. [I am sending forth these memes, not because I agree wholeheartedly with all of them, but to impregnate females of both sexes. Ponder them and spread them.] -------------------- William T. Dickens: Genetic Differences and School Readiness Future of Children 15.1 (2005) 55-69 http://muse.jhu.edu/demo/future_of_children/v015/15.1dickens.html Abstract The author considers whether differences in genetic endowment may account for racial and ethnic differences in school readiness. While acknowledging an important role for genes in explaining differences within races, he nevertheless argues that environment explains most of the gap between blacks and whites, leaving little role for genetics. 
Based on a wide range of direct and indirect evidence, particularly work by Klaus Eyferth and James Flynn, the author concludes that the black-white gap is not substantially genetic in origin. In studies in 1959 and 1961, Eyferth first pointed to the near-disappearance of the black-white gap among children of black and white servicemen raised by German mothers after World War II. In the author's view, Flynn's exhaustive 1980 analysis of Eyferth's work provides close to definitive evidence that the black disadvantage is not genetic to any important degree. But even studies showing an important role for genes in explaining within-group differences, he says, do not rule out the possibility of improving the school performance of disadvantaged children through interventions aimed at improving their school readiness. Such interventions, he argues, should stand or fall on their own costs and benefits. And behavioral genetics offers some lessons in designing and evaluating interventions. Because normal differences in preschool resources or parenting practices in working- and middle-class families have only limited effects on school readiness, interventions can have large effects only if they significantly change the allocation of resources or the nature of parenting practices. The effects of most interventions on cognitive ability resemble the effect of exercise on physical conditioning: they are profound but short-lived. But if interventions make even small permanent changes in behavior that support improved cognitive ability, they can set off multiplier processes, with improved ability leading to more stimulating environments and still further improvements in ability. The best interventions, argues the author, would saturate a social group and reinforce individual multiplier effects by social multipliers and feedback effects.
The aim of preschool programs, for example, should be to get students to continue to seek out the cognitive stimulation the program provides even after it ends. [End Page 55]

In national tests of school readiness, black preschoolers in the United States are not doing as well as white preschoolers. Researchers find black-white gaps not only in achievement and cognitive tests, but also in measures of readiness-related behaviors such as impulse control and ability to pay attention. Could some of these differences in school readiness be the consequence of differences in genetic endowment? In what follows I will review research evidence on this question.^1

Evidence on the Role of Genetic Differences

To evaluate the research findings on the role of genetic differences in cognitive ability, I begin by drawing a clear distinction between evidence that genetic endowment explains a large fraction of differences within races and evidence that it explains differences between races and ethnic groups. There can be little doubt that genetic differences are an important determinant of differences in academic achievement within racial and ethnic groups, though the size of that effect is not known precisely. Depending on the measure of achievement used, the sample studied, and the age of the subjects, estimates of the share of variance explained by genetic differences within racial and ethnic groups range from as low as 20 percent to upward of 75 percent. However, most estimates, particularly those for younger children, seem to cluster in the range of 30 to 40 percent. The fraction of variance explained by genetic differences in a population is termed the heritability of the trait for that population.^2 But the heritability of academic achievement within racial or ethnic groups says little about whether genes play a role in explaining differences between racial groups.
Suppose one scatters a handful of genetically diverse seed corn in a field in Iowa and another in the Mojave Desert. Nearly all the variance in size within each group of seedlings could be due to genetic differences between the plants, but the difference between the average for those growing in the Mojave and those growing in Iowa would be almost entirely due to their different environments. If researchers were able to identify all the genes that cause individual differences in school readiness, understand the mechanism by which they affect readiness and the magnitude of those effects, and assess the relative frequency of those genes in the black and white populations, they would know precisely the extent to which genetic differences explain the black-white gap. But only a few genes that influence cognitive ability or other behaviors relevant to school readiness have been tentatively identified, and nothing is known about their frequency in different populations. Nor are such discoveries imminent. Although genetic effects on several different learning and school-related behavior disorders have been identified and many aspects of personality are known to have a genetic component, genes have their primary effect on school readiness through their effect on cognitive ability.^3 Experts believe that a hundred or more genes are responsible for individual differences in cognitive ability. Many of these genes are likely to have weak and indirect effects that will be difficult to detect. It could be decades before enough genes are identified, and their frequencies estimated, to make it possible to determine what role, if any, they play in explaining group differences. So it is necessary to turn to less direct ways of answering the question. Much has been written on this topic in the past fifty years. 
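The seed-corn analogy can be made concrete with a toy simulation (the numbers here are arbitrary illustrations, not estimates): within each "field," all of the variance is genetic, yet the entire between-field gap is environmental.

```python
import random
from statistics import mean, variance

random.seed(0)
# One genetically diverse batch of seed, split between two fields.
genes = [random.gauss(0, 1) for _ in range(10_000)]
iowa = [g + 10 for g in genes]    # rich environment adds a uniform boost
mojave = [g for g in genes]       # poor environment adds nothing

# Within each field, all of the variance is genetic (heritability ~ 100%)...
print(round(variance(iowa), 2), round(variance(mojave), 2))
# ...yet the entire between-field gap is environmental.
print(round(mean(iowa) - mean(mojave), 2))
```

High within-group heritability is thus fully compatible with a between-group gap that owes nothing to genes, which is exactly the logical point the corn example is making.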
James Flynn's Race, IQ, and Jensen, published in 1980, remains the most thoughtful and thorough [End Page 56] treatment available.^4 More recently Richard Nisbett wrote a shorter review of this literature.^5 Both Flynn and Nisbett take the view, as do I, that genetic differences probably do not play an important role in explaining differences between the races, but the point remains controversial, and Arthur Jensen provides a recent discussion from a hereditarian perspective.^6 Here I will review the major types of evidence and explain why I think they suggest that environmental differences likely explain most, if not all, of the black-white gap in school readiness. I will concentrate entirely on the evidence on cognitive ability, as it is the most studied trait that influences school readiness, and genetically induced differences in cognitive ability account for the vast majority of genetically induced differences in school readiness within ethnic groups. Almost no studies have been done of racial differences in other traits that might influence school readiness. And I choose to focus on the black-white gap rather than to consider the role of genetic differences in determining the academic readiness of disadvantaged groups more generally, again, because it is a topic that has been more thoroughly studied. [Box: Clearing Up a Confusion]

Direct Evidence on the Role of Genes: European Ancestry and Cognitive Ability

Blacks in the United States have widely varying degrees of African and European ancestry. If their genetic endowment from their African ancestors is, on average, inferior to that from their European ancestors, then their cognitive ability would be expected to vary directly in proportion to the extent of their European ancestry. Some early attempts to assess this hypothesis linked skin color with test scores and found that lighter-skinned [End Page 57] blacks typically had higher scores.
But skin color is not strongly related to degree of European ancestry, while socioeconomic status clearly is. Thus the differences might reflect environmental rather than genetic causes. Nearly all commentators agree that these early studies are not probative. More recent studies have looked at measures of European ancestry, such as blood groups or reported ancestry, that are not visible. Such studies have found little or no correlation between the measure of ancestry and cognitive ability, though all are subject to methodological criticisms that could explain their failure to find such a link. Thus although these studies do not provide evidence for a role for genes in explaining black-white differences, they do not provide strong evidence against it.

Direct Evidence on the Role of Environment: Adoption and Cross-Fostering

If there is no direct evidence of a role for genes in explaining the black-white gap, perhaps there is direct evidence that environment can or cannot account for the whole difference between blacks and whites. Several studies have shown that environmental differences between blacks and whites can, in a statistical sense, "explain" nearly all of the difference in cognitive ability between black and white children.^7 But because the studies do not completely control for the genetic endowment of either the child or the parents and because many of the variables used to explain the difference are themselves subject to genetic influence, the effect being attributed to environment may in reality be due to genetic differences. What is needed is a way to see the effect of environment without confusing it with the effect of genetic endowment. For example, randomly choosing white and black children at birth and assigning them to be fostered in either black or white families would ensure that the children's environments were not correlated with their genetic potential and would show how much difference environment makes.
No existing study replicates the conditions of this experiment exactly, but some come close. The strongest evidence for both the environmentalist and hereditarian perspectives is of this sort. After the end of World War II both black and white soldiers in the occupying armies in Germany fathered children with white German women. Klaus Eyferth gathered data on a large number of these children, of mainly working-class mothers, and gave the children intelligence tests.^8 He found almost no difference between the children of white fathers and those of black fathers. The finding is remarkable given that the black children faced a somewhat more hostile environment than the white children. Hereditarians have challenged these findings by appealing to the possibility that the black soldiers who fathered these children might have been a particularly elite group. Flynn has researched the plausibility of this explanation and concludes that such selection did not play more than a small role.^9 Thus Eyferth's study suggests that the black-white gap is largely, and possibly entirely, environmental. A study similar to Eyferth's found the cognitive ability of black children raised in an orphanage in England to be slightly higher than that of white children raised there.^10 Again, critics have raised the possibility that the black children were genetically advantaged relative to other blacks, and the whites disadvantaged relative to other whites. And again, Flynn finds it unlikely that this contention explains [End Page 58] much of the disappearance of the black-white gap.^11 This study, too, suggests that the black-white gap is mainly environmental. If the black-white gap is mainly genetic in origin, then among children of the same race, cognitive ability should not depend on the race of their primary caregiver.
Yet two studies comparing the experience of black children raised by black or white mothers suggest that it does.^12 Here too, because the children were not randomly assigned to their caregivers, it is possible that the children raised by black mothers were of lower genetic potential, but it would be hard to make such a selection story explain more than a small fraction of the apparent environmental effect. Another transracial adoption study provides mixed evidence, but some of the strongest that genes play a role in explaining the black-white gap.^13 A group of children, some with two black parents and some with one white and one black parent, were raised in white middle-class families. When the children's cognitive ability was tested at age seven, the children with two black parents scored 95, higher than the average black child in the state (89) and only slightly below the national average for whites, while the mixed-race children scored 110, which was considerably above it.^14 On the one hand, this finding suggests a huge effect of environment on the cognitive ability of the adopted black and mixed-race children. On the other hand, the higher scores of the mixed-race children suggest that parents' genes may account for some of the difference from the black children, and that the mixed-race children may have had a better inheritance by virtue of having one white parent. Both black and mixed-race children scored worse than the biological children of their adoptive parents (who scored 116), an expected finding because the adopting parents were an elite group and likely passed on above-average genetic potential to their children. But they also scored considerably below the average of 118 for comparison white children adopted into similar homes. 
When the same children were retested ten years later, the results were different.^15 The scores of the children with two black parents had dropped to about the average for blacks in the state where they lived before they were adopted (89). The scores of the mixed-race children had dropped too (99), but remained intermediate between those of the children with two black parents and those of the adoptive parents' biological children, which had also declined, to 109. The scores of the white children raised in adoptive homes had dropped the most, falling to 106. The disappearance of the salutary effect of the adoptive home, however, does not mean that genes determine black-white differences. We can assume that as the children aged and moved out into the world, the effect of the home environment diminished, and both whites and blacks tended to the average for their own population because of either genetic or environmental effects. By showing how the effect of a child's home environment disappears by adolescence, this study suggests [End Page 59] that environmental disadvantages experienced by blacks as children cannot explain the deficit in their cognitive ability as adolescents and adults. But environmental disadvantages facing black adolescents and adults could still explain those deficits. The transience of environmental effects on cognitive ability is a theme to which I shall return. The persistence of the advantage of the mixed-race children over the children with two black parents is suggestive of a role for genes. It is not, though, definitive: several other explanations have been offered, including the late adoption of the children with two black parents and parental selection effects unrelated to race.^16

Indirect Evidence on the Role of Genetic Differences

Although the direct evidence on the role of environment is not definitive, it mostly suggests that genetic differences are not necessary to explain racial differences.
Advocates of the hereditarian position have therefore turned to indirect evidence.^17 Several authors have argued that estimates of the heritability of cognitive ability put limits on the plausible role of environment.^18 The argument is normally made in a mathematical form, but it boils down to this. First, it is now widely accepted that differences in genetic endowment explain at least 60 percent of the variance in cognitive ability among adults in the white population in the United States.^19 If all the environmental variation among U.S. whites can explain only 40 percent of the variance among whites, how could environmental differences explain the huge gap between blacks and whites? The mathematical argument implies that the average black environment would have to be worse than at least 95 percent of white environments, but observable characteristics of blacks and whites are not that different. For example, black deficits in education or in socioeconomic status place the average black below only about 60 to 70 percent of whites.^20 The heritability of cognitive ability is also crucial to a second type of indirect evidence for a role of genetic differences in explaining the black-white gap. Arthur Jensen has advanced what he calls "Spearman's Hypothesis," after the late intelligence researcher Charles Spearman, who observed that people who had large vocabularies were good at solving mazes and logic problems and were also more likely to have command of a wide range of facts. Spearman posited that a single, largely genetic, mental ability that he called g (for general mental ability) explained the correlation of people's performance across a wide range of tests of mental ability. 
Researchers now know that a single underlying ability cannot explain all the tendency of people who do well on one type of test to do well on another.^21 But it is possible to interpret the evidence as indicating that there is a single ability that differs among people, that is subject to genetic influence, and that explains much of the correlation across tests. Other interpretations are also possible, but this one cannot be discounted. In a series of studies Jensen and Rushton have argued that different types of tests tap this general ability to different degrees; that the more a test taps g, the more it is subject to genetic influence; and that black-white differences are largest on the tests most reflective of the underlying general ability, g.^22 Using several restrictive assumptions about the nature of genetic and environmental influence on cognitive ability, researchers can use this information to estimate the fraction of the black-white gap that is due to differences in genetic endowment. The more the [End Page 60] pattern of black-white differences across different tests resembles the pattern of genetic influence on different tests, the more the statistical procedure will attribute the black-white differences to genetic differences. Using this method, David Rowe and Jensen have independently estimated that from one-half to two-thirds of the black-white gap is genetic in origin.^23

A Problem for the Indirect Arguments: Gains in Cognitive Ability over Time

Over the past century, dozens of countries around the world have seen increases in measured cognitive ability over time as large as or even larger than the black-white gap.^24 The phenomenon has been christened the "Flynn Effect," after James Flynn, who did the most to investigate and popularize this worldwide trend. The score gains have been documented even between a large group of fathers and sons taking the same test only decades apart, making it impossible that the gains are due to changes in genes.
Clearly environmental changes can cause huge leaps in measured cognitive ability. Although it might not seem plausible that the average black environment today is below the 5th percentile of the white distribution of environments, it is certainly plausible that the average black environment in the United States today is as deprived as the average white environment of thirty to fifty years ago--the time it took for cognitive ability to rise by an amount equal to the black-white gap in many countries. These gains in measured cognitive ability over time point to a problem in the argument that high heritability estimates for cognitive ability preclude large environmental effects. Gains in cognitive ability over time also challenge the logic of Jensen's genetic explanation for the pattern of black-white differences across different types of tests. All studies show that gains on different tests are positively correlated with measures of test score heritability, and most studies show that gains are positively correlated with the extent to which a test taps the hypothesized general cognitive ability.^25 There is little doubt that applying the same method as Rowe and Jensen used to data on gains in cognitive ability over time would show them to be partially genetic in origin, something we know cannot be true. So, what is it that is wrong with the logic of these two arguments, that the high heritability of cognitive ability limits the possible effect of the environment and that the pattern of black-white differences across different tests shows those differences to be genetic in origin? And in particular, where is the problem in the first? It is important to detect the flaw, because if the logic of the argument were sound, the case for environmental causes of black-white differences would be difficult to make, and the possibility of remedying those differences would be remote. 
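For reference, the percentile arithmetic behind the hereditarian argument above is easy to reproduce. A sketch, taking the figures quoted in the text (heritability of 0.6, an IQ-scale standard deviation of 15, and a one-standard-deviation gap) at face value:

```python
from statistics import NormalDist

nd = NormalDist()
h2, sd_iq, gap = 0.60, 15.0, 15.0  # heritability, IQ-scale SD, one-SD gap

env_sd = (1 - h2) ** 0.5 * sd_iq   # SD of the environmental component, ~9.5 points
shift = gap / env_sd               # the gap measured in environmental SDs, ~1.58
pct = nd.cdf(shift)                # fraction of whites with better environments
print(round(pct * 100, 1))         # ~94.3
```

The result, roughly the 94th-95th percentile, is the source of the claim that the average black environment would have to fall below about 95 percent of white environments; the surrounding discussion explains why that inference is nonetheless flawed.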
But before I explain, I want to cite two other pieces of evidence marshaled by advocates of the hereditarian position that suggest the limited power of the environment to change cognitive ability (and therefore to explain the entire black-white gap). The first is that the heritability of cognitive ability rises with age. It does so at the expense of the effect of family environment, which disappears nearly completely in most studies of late adolescents and adults.^26 The disappearance of the effect on black children of being raised in white families, which I have already noted, is just one case of a general finding from several different types of studies. A second piece of evidence is the fade-out of the effect of preschool programs [End Page 61] on cognitive ability. Although such programs have been shown to have profound effects on the measured ability of children, the effects fade once the programs end, leaving little evidence of any effect by adolescence.^27 Is it possible to reconcile the high heritability of cognitive ability with large, but transient, environmental effects?

The Interplay of Genes and the Environment

To explain this puzzle, James Flynn and I have proposed a formal model in which genes and environment work together, rather than independently, in developing a person's cognitive ability.^28 The solution involves three aspects of the process by which individual ability is molded that are overlooked by the logic that implies small environmental effects. We illustrate our argument with a basketball analogy. How can genes and environment both be powerful in shaping ability? Consider a young man with a small genetic predisposition toward greater height and faster reflexes. When he is young, he is likely to be slightly better than his playmates at basketball. His reflexes will make him generally better at sports, and his height will be a particular advantage when it comes to passing, catching, and rebounding.
These advantages by themselves confer only a small edge, but they may be enough to make the game more rewarding for him than for the average person and get him to play more than his friends and to improve his play more over time. After a while, he will be considerably better than the average player his age, making it likely that he will be picked first for teams and perhaps receive more attention from gym teachers. Eventually, he joins a school team where he gets exhaustive practice and professional coaching. His basketball ability is now far superior to that of his old playmates. Through a series of feedback loops, his initial minor physical advantage has been multiplied into a huge overall advantage. In contrast, a child who started life with a predisposition to be pudgy, slow, and small would be very unlikely to enjoy playing basketball, get much practice, or receive coaching. He would therefore be unlikely to improve his skills. Assuming children with a range of experience between these two extremes, scientists would find that a large fraction of the variance of basketball playing ability would be explained by differences in genetic endowment--that basketball ability was highly heritable. And they would be right to do so. But that most certainly would not mean that short kids without lightning reflexes could not improve their basketball skills enormously with practice and coaching. The basketball analogy so far illustrates two of the considerations that Flynn and I believe are important for understanding the implications of behavioral genetic studies of cognitive ability. First, genes tend to get matched to complementary environments. When that happens, some of the power of environment is attributed to genes. Only effects of environment shared by all children in the same family and effects of environment uncorrelated with genes get counted as environmental. Second, the effect of genetic differences gets multiplied by positive feedback loops.
Small initial differences are multiplied by processes where people's initially varying abilities are matched to complementary environments that cause their abilities to diverge further. In theory this same multiplier process could be driven by small environmental differences. But to drive the multiplier to its maximum, the environmental advantage would [End Page 62] have to be as constant over time as the genetic difference, because in the absence of the initial advantage there will be a tendency for the whole process to unwind. For example, suppose that midway through high school the basketball enthusiast injures a leg, which makes him less steady and offsets his initial advantage in height and reflexes. Because of all his practice and learning, he will still be a superior player. But his small decrement in performance could mean discouragement, more bench time, or not making the cut for the varsity team. This could lead to a further deterioration of his skills and further discouragement, until he gives up playing on the team entirely. Although each individual's experience will differ, the theory that Flynn and I lay out would have people with average physical potential reverting to average ability over time, on average. The transitory nature of most environmental effects not driven by genetic differences helps explain why environmental differences do not typically drive large multipliers and produce the same large effects as genetic differences. That same transience helps explain why environment can be potent but still cause a relatively small share of the variance of cognitive ability in adults.^29

Social Multipliers and the Effect of Averaging

If most external environmental influences are transitory and transitory environmental effects are unable to drive multipliers, what explains the large gains in cognitive ability over the past century? That question has two answers. One is the social multiplier process.
The other is that many random transient environmental effects that lean in one direction when averaged together can substitute for a single persistent environmental cause. This is the third point missed by the argument that claims that high heritability implies small environmental effects. Another basketball analogy will help explain social multipliers. During the 1950s television entered many U.S. homes. Professional basketball, with its small arena, could not reach as wide an audience as baseball, but basketball translated much better to the small screen. Thus public interest in basketball began to grow. The increased interest made it easier for enthusiasts to find others to play with, thus increasing the opportunities to improve skills. As skills improved, standards of play rose, with players learning moves and skills from each other. As more people played and watched the game, interest increased still further. More resources were devoted to coaching basketball and developing basketball programs, providing yet more opportunities for players to improve their skills. In the end, the small impetus provided by the introduction of television had a huge impact on basketball skills. A similar process may well be at work for cognitive ability. An outpouring of studies in recent years suggests that social effects have an important influence on school performance.^30 One study of an experimental reduction in school class size resulting in major achievement score gains suggests that a very large fraction of the gains came through the children's extended association with their peers, who shared the experience of small class sizes.^31 In this case an arguably minor intervention had large and long-lasting effects largely owing to a social multiplier effect. But improvements in cognitive ability could have many triggers, rather than a single one. Many such triggers over the past half-century averaged together could be acting to raise cognitive ability. 
Increasing cognitive demands [End Page 63] from more professional, technical, and managerial jobs; increased leisure time; changing cognitive demands of personal interactions; or changing attitudes toward intellectual activity could all be playing a role. And small initial changes along any of these dimensions would be magnified by individual and social multipliers.

Genes and Environment and the Black-White Gap

The black-white gap in measured cognitive ability may come about in a similar way, but it could have even more triggers. Segregation and discrimination have caused many aspects of blacks' environment to be inferior to that of whites. Averaged together, the total impact can be large, even if each individual effect is small. Suppose, for example, that environment relevant to the formation of cognitive ability consists of 100 factors, each with an equal effect. If for each of these 100 factors the average black were worse off than 65 percent of whites, he would be worse off than 90 percent of whites when the effects of all the environmental factors were considered together. (The disparity is the necessary result of accumulating a large number of effects when two groups have slightly different means for all the effects.)^32 Taking the total effect of environment in this way, considering the underestimate of the total effect of environment because some of its power is attributed to genes, and considering individual and social multipliers, a purely environmental explanation for black-white differences becomes plausible despite high estimates for the heritability of cognitive ability. Moreover, our model also has explanations for the correlation of the heritability of scores on different tests with the size of the black-white gap on those tests and the anomalous correlation of the size of gains in cognitive ability over time on different tests with the heritability of those test scores.
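The accumulation claim in the 100-factor example can be checked against the normal distribution. A sketch assuming, for illustration, that the factors are independent and equally weighted (on that assumption even a dozen factors at the 65-percent level reach the 90th percentile, so the author's figure for 100 factors is conservative and presumably allows for correlation among the factors):

```python
from statistics import NormalDist

nd = NormalDist()
d = nd.inv_cdf(0.65)  # per-factor disadvantage in SD units, ~0.385

def combined_percentile(n: int) -> float:
    """Percentile disadvantage from n equal, independent factors.

    The mean shift of the summed environment is n*d and its SD grows
    only as sqrt(n), so the combined shift is sqrt(n)*d standard deviations.
    """
    return nd.cdf(n ** 0.5 * d)

print(round(combined_percentile(11), 2))   # ~0.90
print(round(combined_percentile(100), 4))  # well above 0.99
```

The mechanism is the one the parenthetical in the text describes: small per-factor mean differences accumulate because the means add linearly while the spread grows only with the square root of the number of factors.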
Those cognitive abilities for which multiplier processes are most important will be the ones that show the largest heritability, because of the environmental augmentation of the genetic differences. But they will also be the ones on which a persistent change in environment will have the biggest influence. Thus we might expect that persistent environmental differences between blacks and whites, as well as between generations, could cause a positive correlation between test score heritabilities and test differences.^33 Rushton and Jensen's indirect evidence of a genetic role in black-white differences is, therefore, not probative.

Implications and Conclusions

The indirect evidence on the role of genes in explaining the black-white gap does not tell us how much of the gap genes explain and may be of no value at all in deciding whether genes do play a role. Because the direct evidence on ancestry, adoption, and cross-fostering is most consistent with little or no role for genes, it is unlikely that the black-white gap has a large genetic component. But what if it does? What would be the implications for the school readiness of children? Much of the variance in human behavior, including cognitive ability and achievement test scores, can be traced to differences in individuals' genetic endowments. But as indisputable as is the role of genes in shaping differences in outcomes within races, so is the role of environment. Studies of young children show that environmental differences explain more variation than do genetic differences. And even studies showing an important role for genes in no way rule out the possibility of improving the school performance of disadvantaged children through [End Page 64] interventions aimed at enhancing their school readiness. Interventions should stand or fall on their own costs and benefits and not be prejudged on the basis of genetic pessimism.
In fact, studies of the role of genes and environment in determining school readiness offer some useful lessons in designing and evaluating interventions. These studies show that normally occurring differences in preschool resources or parenting practices in working- and middle-class families have only limited effects on school readiness once the correlation due to parents' and children's genes is taken out of play.^34 Thus small interventions that make only modest changes in the allocation of resources or the nature of parenting practices will have limited to modest effects at best. Effects will likely be somewhat larger if interventions target very disadvantaged families, probably because the room for improvement is greater.^35 Achieving permanent effects on cognitive ability is harder than achieving large effects. Most environmental effects on cognitive ability seem to be like the effect of exercise on physical conditioning: profound but short-lived. But even short-lived improvements in cognitive ability can be valuable if they mediate longer-term changes in achievement--for example, if improved cognitive ability for some period of time allows students to learn to read more quickly, putting them on a permanently higher achievement path. And evidence suggests that programs aimed at improving cognitive ability do have long-term effects on achievement even if they have no significant long-term effects on cognitive ability. However, if interventions make even small permanent changes in behavior that support improved cognitive ability, they can set off multiplier processes, with improved ability leading to better environments and still further improvements in ability. 
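Such a multiplier process can be sketched as a minimal feedback model. This is a toy illustration in the spirit of the Dickens-Flynn model, not the published specification; the parameters m (how strongly environment affects ability) and r (how strongly ability shapes environment) are arbitrary:

```python
def ability_path(g, env_shocks, m=0.5, r=0.8, steps=25):
    """Toy multiplier: ability draws a matching environment, which feeds back.

    a_t = g + m * e_t            (ability = genetic endowment + environment)
    e_t = r * a_{t-1} + shock_t  (environment is matched to prior ability)
    """
    a = 0.0
    path = []
    for t in range(steps):
        e = r * a + (env_shocks[t] if t < len(env_shocks) else 0.0)
        a = g + m * e
        path.append(a)
    return path

# A small but persistent genetic edge is multiplied up to g / (1 - m*r):
print(round(ability_path(1.0, [])[-1], 3))     # ~1.667, a 67% amplification
# An equally large one-time environmental boost unwinds almost completely:
print(round(ability_path(0.0, [1.0])[-1], 3))  # ~0.0
```

In this toy model a persistent advantage is amplified by the factor 1/(1 - m*r), while a one-time environmental boost decays geometrically, matching the article's contrast between persistent influences, which drive the multiplier, and transient ones, which unwind.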
If we knew what aspects of preschool programs help elevate cognitive ability, and if we could get children to continue to seek out such stimulation after they leave preschool programs, their increased ability could lead them to associate with more able peers, to have the confidence to take on more demanding academic challenges, and to get the further advantage of yet more positive stimulation from these activities. This, in turn, could further develop their cognitive ability. Long-lived effects are more likely to be large effects. Effects are particularly likely to be large if an intervention saturates a social group and allows the individual multiplier effects to be reinforced by social multipliers or feedback effects. If students find themselves among others with greater ability, individual interactions and group activities are more likely to give rise to further improvements in cognitive ability. In this same vein, evaluations that do not take into account the social effects of the intervention on children who did not directly take part may be missing an important aspect of the effects of an intervention. Although much of normal environmentally induced variance in cognitive ability seems to be transient, if interventions could induce even small long-lasting changes in behavior, they might produce very large effects through the multiplier process. Taking advantage of such processes may make it possible to overcome the black-white gap and put black and white children on an even footing. [End Page 65]

William T. Dickens is a senior fellow in the Brookings Economic Studies program. He acknowledges the excellent research assistance of Rebecca Vichniac and Jennifer Doleac.

Endnotes

1. The review necessarily highlights only the most important studies; a complete review of all the arguments on both sides of this debate would require hundreds of pages and be beyond the scope of this article. 2.
Heritability is estimated by examining the similarity of people with different degrees of genetic similarity raised in similar sorts of environments, and there is some reason to believe that most estimates are somewhat overstated by existing methods. Robert Plomin and others, Behavioral Genetics, 4th ed. (New York: Worth Publishers, 2001), in chapter 5 and the appendix, provide a thorough discussion of the methods used to estimate heritability. Mike Stoolmiller, "Implications of the Restricted Range of Family Environments for Estimates of Heritability and Nonshared Environment in Behavior-Genetic Adoption Studies," Psychological Bulletin 125 (1999): 392-409, shows that adoption studies probably overstate the degree of heritability and speculates on reasons why some other methods may as well. 3. Robert Plomin and others, Behavioral Genetics (see note 2), examine learning disorders on pp. 145-49, ADHD on pp. 227-29, and personality in chapter 12. For the effects of genes on cognitive ability, see Marcie L. Chambers and others, "Variation in Academic Achievement and IQ in Twin Pairs," Intelligence (forthcoming); Lee Anne Thompson and others, "Associations between Cognitive Abilities and Scholastic Achievement: Genetic Overlap but Environmental Differences," Psychological Science 2 (1991): 158-65; and Sally J. Wadsworth, "School Achievement," in Nature and Nurture during Middle Childhood, edited by John C. DeFries, Robert Plomin, and David W. Fulker (Oxford: Blackwell, 1994), pp. 86-101. 4. James R. Flynn, Race, IQ, and Jensen (London: Routledge, 1980). 5. Richard Nisbett, "Race, Genetics, and IQ," in The Black-White Test Score Gap, edited by Christopher Jencks and Meredith Phillips (Brookings, 1998), pp. 86-102. 6. Arthur Jensen, The g Factor (Westport, Conn.: Praeger, 1998), pp. 350-531. 7. Jane R. Mercer, "What Is a Racially and Culturally Nondiscriminatory Test?
A Sociological and Pluralistic Perspective," in Perspectives on "Bias in Mental Testing," edited by Cecil R. Reynolds and Robert T. Brown (New York: Plenum Press, 1984); Jonathan Crane, "Race and Children's Cognitive Test Scores: Empirical Evidence That Environment Explains the Entire Gap," mimeo, University of Illinois at Chicago, 1994; and Jeanne Brooks-Gunn and others, "Ethnic Differences in Children's Intelligence Test Scores: Role of Economic Deprivation, Home Environment, and Maternal Characteristics," Child Development 67, no. 2 (1996): 396-408. 8. This is based on the account by James R. Flynn (Race, IQ, and Jensen, pp. 84-87; see note 4) of Klaus Eyferth, "Leistungen verschiedener Gruppen von Besatzungskindern in Hamburg-Wechsler Intelligenztest für Kinder (HAWIK)," Archiv für die gesamte Psychologie 113 (1961): 222-41. 9. Flynn, Race, IQ, and Jensen, pp. 84-102 (see note 4). 10. Barbara Tizard, "IQ and Race," Nature 247, no. 5439 (February 1, 1974). 11. Flynn, Race, IQ, and Jensen, pp. 108-11 (see note 4). 12. Elsie G. J. Moore, "Family Socialization and the IQ Test Performance of Traditionally and Transracially Adopted Black Children," Developmental Psychology 22 (1986): 317-26; and Lee Willerman and others, "Intellectual Development of Children from Interracial Matings: Performance in Infancy and at 4 Years," Behavior Genetics 4 (1974): 84-88. [End Page 66] 13. Sandra Scarr and Richard A. Weinberg, "IQ Test Performance of Black Children Adopted by White Families," American Psychologist 31 (1976): 726-39; and Sandra Scarr and Richard A. Weinberg, "The Minnesota Adoption Studies: Genetic Differences and Malleability," Child Development 54 (1983): 260-67. 14. These are IQ scores, which have a mean of 100 and a standard deviation of 15 in the U.S. population. 15. Sandra Scarr and others, "The Minnesota Transracial Adoption Study: A Follow-Up of IQ Test Performance at Adolescence," Intelligence 16 (1992): 117-35. 16.
But see Arthur Jensen, The g Factor, pp. 477-78 (see note 6), on whether late adoption can explain the difference. 17. One body of evidence is difficult to judge. See J. Philippe Rushton, Race, Evolution, and Behavior: A Life History Perspective, 3rd ed. (Port Huron, Mich.: Charles Darwin Research Institute, 2000). Rushton has proposed a theoretical framework that would explain a genetic gap in cognitive ability between blacks and whites and has marshaled evidence for it. But because much of the evidence was known before the theory was proposed, some view the theory as nothing more than post hoc rationalization for hereditarian views on the black-white gap. At most it suggests that some of the black-white gap may be genetic, but it does not suggest how much. 18. Arthur Jensen, Educability and Group Differences (New York: Harper and Row, 1973), pp. 135-39, 161-73, 186-90; Arthur Jensen, Educational Differences (London: Methuen, 1973), pp. 408-12; Jensen, The g Factor, pp. 445-58 (see note 6); and Richard Herrnstein and Charles Murray, The Bell Curve: Intelligence and Class Structure in American Life (New York: Simon and Schuster, 1994), pp. 298-99. 19. Plomin and others, Behavioral Genetics, p. 177 (see note 2); and Ulric Neisser and others, "Intelligence: Knowns and Unknowns," American Psychologist 51, no. 2 (1996): 85. 20. Author's calculations from the 1979 National Longitudinal Survey of Youth. 21. John B. Carroll, Human Cognitive Abilities: A Survey of Factor-Analytic Studies (Cambridge University Press, 1993), is the most comprehensive survey of what is known about the correlation of scores on different types of mental tests. 22. See J. Philippe Rushton and Arthur Jensen, "Thirty Years of Research on Race Differences in Cognitive Ability," Psychology, Public Policy, and Law (forthcoming), for a review of this evidence and citations to the original studies. 23.
David Rowe, Alexander Vazsonyi, and Daniel Flannery, "Ethnic and Racial Similarity in Developmental Process: A Study of Academic Achievement," Psychological Review 101, no. 3 (1994): 396-413; Jensen, The g Factor, pp. 464-67 (see note 6). 24. James R. Flynn, "Massive IQ Gains in 14 Nations: What IQ Tests Really Measure," Psychological Bulletin 101 (1987): 171-91; James R. Flynn, "IQ Gains over Time," in Encyclopedia of Human Intelligence, edited by Robert J. Sternberg (New York: Macmillan, 1994), pp. 617-23; James R. Flynn, "IQ Gains over Time: Toward Finding the Causes," in The Rising Curve: Long-Term Gains in IQ and Related Measures, edited by Ulric Neisser (Washington: American Psychological Association, 1998), pp. 551-53. 25. Existing evidence suggests that IQ gains across subtests are probably positively correlated with g loading. See Roberto Colom, Manuel Juan-Espinosa, and Luís F. García, "The Secular Increase in Test Scores Is a [End Page 67] 'Jensen effect,'" Personality and Individual Differences 30 (2001): 553-58; and Manuel Juan-Espinosa and others, "Individual Differences in Large-Spaces Orientation: g and Beyond?" Personality and Individual Differences 29 (2000): 85-98, for much stronger correlations between g loadings and IQ gains. Jensen, The g Factor, pp. 320-21 (see note 6), reviews a number of studies of the relation between subtest gains and g loadings, all of which show weak positive correlations. J. Philippe Rushton, "Secular Gains in IQ Not Related to the g Factor and Inbreeding Depression--unlike Black-White Differences: A Reply to Flynn," Personality and Individual Differences 26 (1999): 381-89, finds that a measure of g developed on the Wechsler Intelligence Scale for Children has loadings that are negatively correlated with subtest gains in several countries. But see James R.
Flynn, "The History of the American Mind in the 20th Century: A Scenario to Explain IQ Gains over Time and a Case for the Irrelevance of g," in Extending Intelligence: Enhancement and New Constructs, edited by P. C. Kyllonen, R. D. Roberts, and L. Stankov (Hillsdale, N.J.: Erlbaum, forthcoming), for an argument that IQ gains are greatest on tests of fluid g rather than crystallized g. He finds a positive (though statistically insignificant) correlation between a measure of fluid g he develops and IQ gains in the data used by Rushton. Olev Must, Aasa Must, and Vilve Raudik, "The Flynn Effect for Gains in Literacy Found in Estonia Is Not a Jensen Effect," Personality and Individual Differences 33 (2001); and Olev Must, Aasa Must, and Vilve Raudik, "The Secular Rise in IQs: In Estonia the Flynn Effect Is Not a Jensen Effect," Intelligence 31 (2003): 461-71, find no correlation between g loadings and gains on two tests in Estonia, but these are achievement tests with a strong crystallized bias. 26. Plomin and others, Behavioral Genetics, pp. 173-77 (see note 2). 27. Irving Lazar and Richard Darlington, "Lasting Effects of Early Education: A Report from the Consortium for Longitudinal Studies," Monographs of the Society for Research in Child Development 47, nos. 2-3 (1982). 28. William T. Dickens and James Flynn, "Heritability Estimates versus Large Environmental Effects," Psychological Review 108, no. 2 (2001). 29. This is not to say that there are no permanent or long-lasting environmental effects on cognitive ability. The effects of brain damage can be severe and permanent. However, such permanent environmental effects evidently explain only a small fraction of normal variation in cognitive ability. Shared family environment plays a large role in explaining variance in cognitive ability when children are spending most of their time in the home, with their activities strongly influenced by their parents.
But that effect fades as they spend more of their time away from home and in self-directed activities. 30. Eric A. Hanushek and others, "Does Peer Ability Affect Student Achievement?" Working Paper 8502 (Cambridge, Mass.: National Bureau of Economic Research, 2001); Caroline Hoxby, "Peer Effects in the Classroom: Learning from Gender and Race Variation," Working Paper 7867 (Cambridge, Mass.: National Bureau of Economic Research, 2001); Dan M. Levy, "Family Income and Peer Effects as Determinants of Educational Outcomes," Ph.D. diss., Northwestern University, 2000; Donald Robertson and James Symons, "Do Peer Groups Matter? Peer Group versus Schooling Effects on Academic Achievement," Economica 70 (2003): 31-53; Bruce Sacerdote, "Peer Effects with Random Assignment: Results from Dartmouth Roommates," Quarterly Journal of Economics (May 2001): 681-704; David J. Zimmerman, "Peer Effects in Academic Outcomes: Evidence from a Natural Experiment," Review of Economics and Statistics 85 (2003): 9-23. [End Page 68] 31. Michael A. Boozer and Stephen E. Cacciola, "Inside the 'Black Box' of Project STAR: Estimation of Peer Effects Using Experimental Data," Discussion Paper 832 (Economic Growth Center, Yale University, 2001). 32. In statistics this is referred to as the law of large numbers--that the variance of a mean falls as the number of items being averaged goes up. See Eugene Lukacs, Probability and Mathematical Statistics: An Introduction (New York: Academic Press, 1972). It applies whether or not the weights being put on the elements are equal. Because the variance and standard deviation of the mean fall, while the average difference stays the same, the difference in standard deviations grows. The example assumes that the effects are all uncorrelated with each other and that each has a normal distribution in the white and the black populations.
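The arithmetic behind note 32 can be made concrete with a short sketch (all numbers below are invented for illustration): if a group difference of fixed size exists on each of n independent, equally weighted environmental factors, the standard deviation of their average shrinks as 1/sqrt(n) while the mean difference does not, so the gap measured in standard-deviation units grows.

```python
# Illustration of note 32 (all numbers invented): averaging n independent,
# equally weighted factors divides the SD of the average by sqrt(n),
# while a constant mean difference between groups is unchanged.
import math

sigma = 1.0     # SD of each environmental factor within a group
mean_gap = 0.2  # assumed between-group difference on every factor

for n in (1, 4, 16, 64):
    sd_of_average = sigma / math.sqrt(n)          # law of large numbers
    standardized_gap = mean_gap / sd_of_average   # gap in SD units
    print(f"n={n:3d}  SD of average={sd_of_average:.3f}  gap={standardized_gap:.2f} SD")
```

A modest 0.2-SD difference on each single factor becomes a 1.6-SD difference in the average of 64 such factors, which is exactly the growth in standardized differences the note describes.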
If the effects were assumed to be correlated or the weights unequal, the results would be less dramatic, but with observed values for correlations of environmental factors, increasing the number of items to be averaged could produce the same results. 33. Dickens and Flynn, "Heritability Estimates versus Large Environmental Effects" (see note 28). 34. Plomin and others, Behavioral Genetics, p. 201 (see note 2). 35. Eric Turkheimer and others, "Socioeconomic Status Modifies Heritability of IQ in Young Children," Psychological Science 14, no. 6 (2003). Their own study finds that shared family environment explains 60 percent of the variance of an IQ test score in low-socioeconomic-status seven-year-olds, which is a much larger share than other studies have found. For example, see Kathryn Asbury and others, "Environmental Moderators of Genetic Influence on Verbal and Nonverbal Abilities in Early Childhood" (Institute of Psychiatry, De Crespigny Park, London, 2004). [I am sending forth these memes, not because I agree wholeheartedly with all of them, but to impregnate females of both sexes. Ponder them and spread them.] From Thrst4knw at aol.com Fri May 20 13:45:13 2005 From: Thrst4knw at aol.com (Thrst4knw at aol.com) Date: Fri, 20 May 2005 09:45:13 EDT Subject: [Paleopsych] free wills and quantum won'ts Message-ID: <6.4568ff23.2fbf43e9@aol.com> Yes, I think new metaphors are needed, and new math, and probably also new (perhaps interdisciplinary) ways of empirically testing hypotheses that relate to the mind-brain question. But I suggest there are still some fairly significant gaps in our current knowledge that need to be bridged before we can begin to apply and test such new ideas to directly bridge the classic divide of "free will" and mechanism, or what some call the manifest and scientific images of nature. My suspicion is that Dan Dennett managed to capture one of the pieces of the concept of "free will" and why it confuses us ...
our natural human talents let us operate with different explanatory stances for different domains of phenomena. Without realizing why they are inane, we probably often get lost in inane questions like whether "free will" exists. In effect, I suspect that we often end up arguing across different conceptual models where we don't have the tools (at the time) to make them 'commensurable.' In extreme cases like this, Kuhn probably had a reasonable point about arguing across 'paradigms' and such. For example, having "choices" is something that makes sense from the perspective of a reasoning human agent trying to explain their behavior and that of other reasoning human agents. They have "choices" because they have (1) the capacity to represent different outcomes which they value differently and (2) some weakly understood mechanism for feeding information about that capacity back into the parts of the nervous system that drive behavior and initiate new patterns of attention and thought. In other words, we can envision different options and we can pick one. More importantly, we simply take that process for granted because it is wired into us. We truly don't know the specifics of how it is implemented in terms of causal models that relate back to our sciences. That leaves us with various extreme explanatory options that rely more on individual plausibility than empirical testing for their resolution. I think it is the fact that (2) is so weakly understood that makes "free will" remain a philosophical rather than (yet) a scientific issue. (Naturalistic) philosophers struggle with how the apparently emergent high-level global properties of mind can have a causal effect on the nervous system from which they arise. To oversimplify just for rhetorical purposes, having the mind do something physical seems, from a strictly monistic naturalist position, a little like the wetness of water having an effect on the hydrogen and oxygen that compose it.
The question of "top down causation" of high level mind properties on physical things is hard to get past. I think this is not just a matter of metaphors and mathematics (broadly speaking, what other thinking tools do we have?) but because we don't yet know the specifics of how the mind's global high level properties arise from the complex processes of the body. The metaphors and math will have to bridge that gap in order to bridge the physical and intentional explanatory stances. That means they will have to capture the evolutionary and developmental history of the mind and how it gives rise to something capable of representing options and choosing between them. I've seen several speculative attempts at this, but nothing that yet comes close to being what I would consider empirical. My suggestion would be that we should first map out the missing levels of description tentatively before imagining that we've bridged the explanatory gap. The cognitive linguists and related cognitive scientists like Lakoff and Johnson and Mark Turner have made some initial attempts at this, but I'm not yet persuaded that they've really bridged the whole gap. They have testable hypotheses about how neural function leads to computation and loosely how computation leads to cognition, but no model of cognition that captures the phenomena relevant to "top down causation" as seen from the perspective of a human observer. For example, look at all the different interpretations of Libet's "half second delay" experiments, ranging from paranormal to outright denial, with Dennett's and Dan Wegner's naturalistic explanations in the middle. This kind of phenomenon is important because it raises central issues in the relationship of perception, cognition, and "top down causation." I think if we could agree on what is happening there, we would take a huge step toward understanding "free will."
I think that is a more productive avenue than directly investigating quantum effects, for example, although it seems unavoidable that "quantum spookiness" and nature at the lowest levels will _eventually_ have to be considered somewhere in the causal models. kind regards, Todd In a message dated 5/16/2005 11:28:09 PM Eastern Daylight Time, dsmith06 at maine.rr.com writes: Traditionally, the problem of free will is not a question of whether or not we have choices, it is the question of whether or not these choices are caused by prior events. David ----- Original Message ----- From: HowlBloom at aol.com To: paleopsych at paleopsych.org Sent: Monday, May 16, 2005 11:19 PM Subject: [Paleopsych] free wills and quantum won'ts This is from a dialog Pavel Kurakin and I are having behind the scenes. I wanted to see what you all thought of it. Howard You know that I'm a quantum skeptic. I believe that our math is primitive. The best math we've been able to conceive to get a handle on quantum particles is probabilistic. Which means it's cloudy. It's filled with multiple choices. But that's the problem of our math, not of the cosmos. With more precise math I think we could make more precise predictions. And with far more flexible math, we could model large-scale things like bio-molecules, big ones, genomes, proteins and their interactions. With a really robust and mature math we could model thought and brains. But that math is many centuries and many perceptual breakthroughs away. As mathematicians, we are still in the early stone age. But what I've said above has a kink I've hidden from view. It implies that there's a math that would model the cosmos in a totally deterministic way. And life is not deterministic. We DO have free will. Free will means multiple choices, doesn't it? And multiple choices are what the Copenhagen School's probabilistic equations are all about? 
How could the concept of free will be right and the assumptions behind the equations of Quantum Mechanics be wrong? Good question. Yet I'm certain that we do have free will. And I'm certain that our current quantum concepts are based on the primitive metaphors underlying our existing forms of math. Which means there are other metaphors ahead of us that will make for a more robust math and that will square free will with determinism in some radically new way. Now the question is, what could those new metaphors be? Howard -------------- next part -------------- An HTML attachment was scrubbed... URL: From shovland at mindspring.com Fri May 20 14:05:51 2005 From: shovland at mindspring.com (Steve Hovland) Date: Fri, 20 May 2005 07:05:51 -0700 Subject: [Paleopsych] LRC: Global Battle Erupts Over Vitamin Supplements by Bill Sardi Message-ID: <01C55D0A.5C602C70.shovland@mindspring.com> Great link. Thanks. I agree with him- conventional medicine is intellectually bankrupt. Steve Hovland www.stevehovland.net -----Original Message----- From: Premise Checker [SMTP:checker at panix.com] Sent: Thursday, May 19, 2005 12:06 PM To: paleopsych at paleopsych.org Subject: [Paleopsych] LRC: Global Battle Erupts Over Vitamin Supplements by Bill Sardi Global Battle Erupts Over Vitamin Supplements by Bill Sardi http://www.lewrockwell.com/sardi/sardi37.html 5.5.16 In an unprecedented action, the World Health Organization (WHO), the United Nations (UNICEF), and an AIDS activist group that promotes drug therapy in South Africa, joined forces in opposing vitamin therapy that exceeds the Recommended Daily Allowance (RDA), and in particular vitamin C in doses they describe as being "far beyond safe levels." These health agencies suggest nutrients primarily be obtained from the diet and warn that supplemental doses of vitamin C that exceed a 2000 milligram per day upper limit could cause side effects such as diarrhea. 
The AIDS activist group also suggests patients receiving doses beyond the RDA should undergo proper counseling and informed consent before being placed on high-dose vitamin C. As outrageous as these statements sound, they burst into public view recently with an ongoing battle between Dr. Matthias Rath, a former Linus Pauling researcher, and The Treatment Action Campaign in South Africa. The public battle ensued after Dr. Rath published a full-page ad in the New York Times and the International Herald Tribune advocating vitamin therapy over anti-AIDS drug therapy. Coinciding with these full-page newspaper ads is a legal battle underway in South Africa where The Treatment Action Campaign seeks to censor statements made by Dr. Rath. Dr. Rath cites a study by Harvard Medical School researchers that showed dietary supplements slow the progression of AIDS and resulted in a significant decline in viral count. [New England Journal of Medicine 351: 23-32, 2004] Harvard researchers responded by saying vitamin therapy is important but may not replace anti-viral drug therapy.

Diet promoted over supplements

UNICEF and WHO advocate a balanced diet rather than supplements despite the fact AIDS patients have nutritional needs that exceed what the best diet can provide. AIDS patients often exhibit nutrient deficiencies due to malabsorption or diarrhea. Vitamin E, one of the supplemental nutrients provided in a cocktail developed by Dr. Rath for AIDS patients, is known to reduce the incidence of diarrhea. [STEP Perspectives 7:2-5, 1995]

RDA for vitamin C is bogus

Furthermore, the RDA for vitamin C established by the National Institutes of Health (NIH), referred to by the Treatment Action Campaign, was established using testing methods that have been proven to be inaccurate. A study published last year in the Annals of Internal Medicine by NIH scientists clearly shows much higher vitamin C levels can be achieved with oral dosing than previously thought possible.
[Annals Internal Medicine 140:533-7, 2004]. Twelve noted antioxidant researchers have petitioned the Food & Nutrition Board to review the RDA for vitamin C now that it is apparent the RDA is based upon flawed research. [9]Steve Hickey Ph.D. and Hilary Roberts, pharmacology graduates of Manchester University, have authoritatively outlined the flaws in the current RDA for vitamin C. Furthermore, the RDA was established for healthy people and does not apply to patients with serious infectious diseases such as AIDS.

Health groups tip their hand

This battle over vitamin supplements may be a foretaste of what will happen later this year when a worldwide body called Codex Alimentarius will meet to establish upper limits on vitamin and mineral supplements. Codex is governed under the auspices of the United Nations and World Health Organization. These health organizations are tipping their partiality for drugs over nutritional supplements. For example, Codex may establish a 2000 mg upper limit for vitamin C as previously proposed by the National Academy of Sciences, or as low as 225 mg which was recently established by German health authorities. Controlled studies do not support the use of either number. Dr. Rath is reported to recommend 4000 milligrams of daily vitamin C for AIDS patients. The amount of oral vitamin C that a patient can tolerate without diarrhea increases proportionately to the severity of their disease. [Med Hypotheses 18:61-77, 1985] AIDS patients often don't exhibit any diarrhea with extremely high-dose vitamin C therapy. Diarrhea may occur among healthy individuals following high-dose vitamin C therapy depending upon how much vitamin C is consumed at a single point in time. Divided doses taken throughout the day minimize this problem.

Huckster or helper?

Dr.
Rath, a renowned vitamin researcher who described a vitamin C cure for heart disease and cancer in 1990 in collaboration with Nobel prize winner Linus Pauling [Proc Natl Academy Sciences 87:9388-90, 1990], is characterized as a "wealthy vitamin salesman" by the Treatment Action Campaign in South Africa. Rath's vitamin company is providing free vitamin therapy for AIDS victims in South Africa.

Anti-AIDS drug therapy failing

World health organizations appear to be solely backing AIDS drug therapy at a time when a highly drug-resistant strain of HIV that quickly progresses to AIDS has been reported in New York [AIDS Alert 20: 39-40, 2005], and drug resistance is a growing problem [Top HIV Medicine 13: 51-57, 2003]. It's only a matter of time until all current anti-AIDS drugs fail. Of particular interest is selenium, a trace mineral included in Dr. Rath's anti-AIDS vitamin regimen, which appears to slow progression of the disease. Researchers report HIV infection has spread more rapidly in Sub-Saharan Africa than in North America primarily because Africans have low dietary intake of selenium compared to North Americans. [Medical Hypotheses 60: 611-14, 2003] Selenium appears to be a key nutrient in counteracting certain viruses, and HIV infection progresses more slowly to AIDS among selenium-sufficient individuals [Proceedings Nutrition Society 61: 203-15, 2002]. The strong reaction by world health organizations against vitamin supplements causes one to wonder if they are afraid vitamin therapy will actually prove to be a viable alternative to AIDS drug therapy. Bill Sardi [[10]send him mail] is a consumer advocate and health journalist, writing from San Dimas, California. He offers a free downloadable book, The Collapse of Conventional Medicine, at [11]his website. [13]Bill Sardi Archives

References

9. http://www.lulu.com/Ascorbate
10. mailto:BSardi at aol.com
11. http://www.askbillsardi.com/
13.
http://www.lewrockwell.com/sardi/sardi-arch.html _______________________________________________ paleopsych mailing list paleopsych at paleopsych.org http://lists.paleopsych.org/mailman/listinfo/paleopsych From shovland at mindspring.com Fri May 20 14:09:46 2005 From: shovland at mindspring.com (Steve Hovland) Date: Fri, 20 May 2005 07:09:46 -0700 Subject: [Paleopsych] ebook: Collapse of Conventional Medicine Message-ID: <01C55D0A.E8ADAF40.shovland@mindspring.com> Steve Hovland www.stevehovland.net -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pdf Size: 910370 bytes Desc: not available URL: From shovland at mindspring.com Fri May 20 14:41:54 2005 From: shovland at mindspring.com (Steve Hovland) Date: Fri, 20 May 2005 07:41:54 -0700 Subject: [Paleopsych] The Liquid Universe Message-ID: <01C55D0F.655E2B10.shovland@mindspring.com> Awhile ago one of my photography clients mentioned that she believed in the "liquid universe." The idea is that if you push in some place, you will get a response from somewhere else. On Wednesday I made about 40 sales calls looking for photo shoots. On Thursday someone I didn't talk to on Wednesday called to say she had several things she wanted me to do :-) Steve Hovland www.stevehovland.net From checker at panix.com Fri May 20 19:03:25 2005 From: checker at panix.com (Premise Checker) Date: Fri, 20 May 2005 15:03:25 -0400 (EDT) Subject: [Paleopsych] NYT: Koreans Report Ease in Cloning for Stem Cells Message-ID: Koreans Report Ease in Cloning for Stem Cells http://www.nytimes.com/2005/05/20/science/20clone.html By [2]GINA KOLATA South Korean researchers are reporting today that they have developed a highly efficient recipe for producing human embryos through cloning, and then extracting their stem cells. Writing in the journal Science, the researchers, led by Dr. Woo Suk Hwang and Dr. 
Shin Yong Moon of Seoul National University, said they used their method to produce 11 human stem cell lines that were genetic matches of patients who ranged in age from 2 to 56. The method, called therapeutic cloning, is one of the great hopes of the stem cell field. It produces stem cells, universal cells that are extracted from embryos, killing the embryos in the process, and that, in theory, can be directed to grow into any of the body's cell types. Because the stem cells come from embryos that are clones of individuals, they would be exact genetic matches and less likely to be rejected by a patient's immune system. Scientists want to obtain such stem cells from patients with certain disorders and illnesses to study the origin of diseases and to develop replacement cells that would be identical to those a patient has lost in a disease like Parkinson's. Dr. Hwang said he had no intention of using the method to produce babies that were clones. "Our proposal is limited to finding a way to cure disease," he said. "That is our proposal and our research goal." Previously, the same group produced a single stem cell line from a cloned embryo, but the process was so onerous that many scientists said it was not worth trying to repeat it, and some doubted that the South Koreans' report was even correct. Things have changed. The new finding buoyed researchers who had wanted to use such stem cells to study diseases but had thought it would be years, if ever, before it would be practical to obtain them. "It is a tremendous advance," said Dr. Leonard Zon, a stem cell researcher at Harvard Medical School and the president of the International Society for Stem Cell Research, who was not involved in the research. But the report raised concerns among others, who said it was a step down the slippery slope leading to cloned babies. 
Richard Doerflinger, whose title is director of pro-life activities at the United States Conference of Catholic Bishops, said: "Up until now, people were beginning to wonder whether human cloning for any purpose was feasible at all. This development makes it feasible enough to be a clear and present danger." The Korean report will influence the political debate over embryonic stem cell research, which is unfolding on Capitol Hill. The House is expected to vote as early as next week on a measure that would expand federal financing for embryonic stem cell studies. The measure, which has created deep divisions among Republicans, does not address therapeutic cloning. But a second bill, introduced by Senator Orrin G. Hatch, Republican of Utah, would permit taxpayer financing of therapeutic cloning studies, while prohibiting cloning for reproduction. In their new work, the South Korean researchers produced stem cells that were exact matches for 9 of 11 patients, including 8 adults with spinal cord injuries and 3 children - a 10-year-old boy with a spinal cord injury, a 6-year-old girl with diabetes and a 2-year-old boy with congenital hypogammaglobulinemia, a genetic disorder of the immune system. Dr. Zon cautioned that "it will take a lot of work" before stem cells fulfill their promises in medicine, but he said the new finding would bring scientists significantly closer to the goals. Dr. Hwang said he had been flooded by requests from researchers who wanted to visit and study his methods, including Dr. Ian Wilmut, the researcher in Scotland who created the first cloned mammal, a sheep named Dolly, in 1996, astonishing scientists who had thought cloning was biologically impossible. Dr. Wilmut visited the laboratory in Seoul, and this week Dr. Hwang went to Dr. Wilmut's laboratory at the Roslin Institute in Edinburgh to help him in his quest to produce human embryos by cloning and to extract their stem cells. Others are trying too. 
In England, the International Center for Life, in Newcastle upon Tyne, announced it had produced a human embryo by cloning, although it did not say it had extracted stem cells or gone through the many detailed steps to prove that they were stem cells and that they were from a clone, as the South Koreans had done. Until now, scientists had been studying human embryonic stem cells extracted from embryos created for that purpose, a process that did not involve cloning cells from specific patients. They had also obtained stem cells from embryos created at fertility clinics and donated by couples who no longer needed them. In addition, scientists are studying mouse stem cells, working on the difficult task of directing the cells to develop into specific tissue types. But researchers wanted embryos that were genetic matches of patients. The only way to do that was to use embryos that were clones of patients, and human cloning had seemed all but impossible. To produce a clone, scientists slip the genetic material from a patient's cell into an unfertilized egg from another person whose genetic material has been removed. The genes from the patient's cell take over, directing the egg to divide and develop into an embryo that is genetically identical to the patient. About five days later, when the cloned embryo contains about 100 cells and is about 0.08 inch in diameter, it changes its form, looking like a ball of cells encased in a sphere. That ball of cells, when removed and grown in the laboratory, becomes the embryonic stem cells. The process, however, fails more often than it succeeds, and, in humans, it seemed to fail almost all the time. In a previous report, published last February, Dr. Hwang and Dr. Moon used 248 human eggs to produce a single embryonic stem cell line, a group of cells that came from one embryonic cell and could grow on a petri dish.
But this time, with a handful of technical improvements that mostly involved methods for growing cells and breaking open embryos, they used an average of 17 eggs per stem cell line and could almost guarantee success with the eggs of just one woman obtained in a single month. It did not matter whether the patient whose cells were being cloned was young or middle-aged, male or female, sick or well - the process worked. "You almost have no reason not to do it," said Dr. Davor Solter, the director of the Max Planck Institute for Immunobiology in Freiburg, Germany. He added that it seemed more efficient to clone and obtain human stem cells than to do the same experiment in animals, although no one knows why. Seven states ban any type of human cloning and 11 have laws that prevent embryonic stem cell research, said Lori B. Andrews, a law professor at Chicago-Kent College of Law, and federal money is restricted to research using stem cell lines approved by the Bush administration in 2001. Where such work is legal, however, increasing numbers of scientists, including Dr. Zon, say they have private financing and plan to go forward using cloning to produce stem cells. Dr. John Gearhart, a stem cell researcher at Johns Hopkins University, said the new paper would provide an impetus. "I think you will see more people in the game," he said. Not everyone is excited. Dr. Leon Kass, chairman of the President's Council on Bioethics, commented in an e-mail message that "whatever its technical merit, this research is morally troubling: it creates human embryos solely for research, makes it much easier to produce cloned babies, and exploits women as egg donors not for their benefit." The South Korean government, which paid for the new study, has made it a crime to implant a cloned embryo into a woman's uterus, Dr. Hwang said. "It should be banned throughout the world," he added. The study included 18 women who provided eggs. The South Korean scientists worked hard, said Dr. 
Gerald Schatten of the University of Pittsburgh School of Medicine, who visited their laboratory and helped the scientists, whose English is limited, write their paper. "They work 365 days a year except for leap year, when they work 366 days," Dr. Schatten said. "They have lab meetings at 6:30 every morning except Sunday, when they have them at 8." Few would venture into the cloning arena if the science was not so promising, researchers say. Of course, they say, there is a long way to go from stem cells to therapy. "It's going to take a lot of work," said Dr. Ronald McKay, a stem cell researcher at the National Institutes of Health. "But we want this to work - it's not a theory. My technical and professional judgment tells me this is really important." Dr. Kass, however, says that cloning and extracting stem cells from the embryos is not the only way to do such work. A majority of the President's Council on Bioethics called for a moratorium on cloning for research, he said, and the council recently suggested other ways of getting stem cells that could develop into the desired tissue types and that would match a patient's own cells "without these violations and moral hazards." Opinion polls have had varied results, often depending on the words that are used to describe the work. In a recent Gallup poll, just 38 percent of respondents approved of cloning embryos for research. Another poll, which used the term "somatic cell nuclear transfer" instead of "cloning," found that 72 percent approved. Dr. Hwang's paper goes a step further, using "S.C.N.T." instead of "somatic cell nuclear transfer." Dr. Ruth Faden, the executive director of the bioethics center at Johns Hopkins, said the moral debate would change if the research led to new treatments with dramatic benefits for some patients. "That could really shake it up," she said. But Dr. 
Richard Land, the president of the Southern Baptist Convention's ethics and religious liberty commission, said his group would not be assuaged. "We believe a cloned embryo is a human being," Dr. Land said. "We should not be the kind of society that kills our tiniest human beings in order to seek a treatment for older and bigger human beings." Sheryl Gay Stolberg contributed reporting from Washington for this article. From checker at panix.com Fri May 20 19:03:35 2005 From: checker at panix.com (Premise Checker) Date: Fri, 20 May 2005 15:03:35 -0400 (EDT) Subject: [Paleopsych] NYT: Penny-Wise, Not Pound-Foolish Message-ID: Penny-Wise, Not Pound-Foolish http://www.nytimes.com/2005/05/19/business/19durham.html By STEPHANIE SAUL DURHAM, N.C. - In late 2003, Susan Blech sold her office products business, left her home in Long Beach, N.Y., and came here with a plan to spend her life savings losing weight. So far, her plan has worked. Ms. Blech, 39, has dropped about $70,000 and 220 pounds in Durham. She subsists on an 800-calorie diet prescribed by the Rice Diet Program, one of three major weight-loss and nutrition centers here in the city that sometimes calls itself the Diet Capital of the World. When her weight falls another 50 pounds, to 200, Ms. Blech plans to spend her remaining money for surgery to remove dangling skin from her midsection and fill out her sagging breasts. She has already picked her plastic surgeon - one right here in Durham at the Duke University Medical Center. If she is broke by then, Ms. Blech said, "It's going on the credit cards." Ms. Blech's losses are this town's gains. Durham, a former tobacco-trading and cigarette-manufacturing city of 204,000, has evolved into a Lourdes for the obese. Each year, about 4,000 people like Ms. Blech arrive here, hoping to change their lives. And Durham's status, whether as a magnet or national model, seems likely to grow, as the country increasingly focuses on obesity as a potential epidemic. 
The obese now make up an estimated one-third of the adult population. They are the people most likely to have diabetes, heart disease and other conditions that account for more than $100 billion of the United States' $1.8 trillion annual medical bill. Durham's diet experts say the people who come to their centers tend to lose more weight, more consistently than people who diet on their own - because of the motivation that brings them to town in the first place and the network of doctors, medical staff and fellow dieters. But as with any diet, anywhere, keeping the weight off long term depends on a person's willingness to forever alter eating and exercise habits. "I don't dare go home," said one dieter, Corinne Keene of Naples, Fla. Ms. Keene, 73, moved to Durham last Dec. 10 after hitting 252 pounds, a medical emergency on her 4-foot-9-inch frame. She has lost more than 50 pounds, but fears she will not keep dieting after leaving the supportive cocoon of the Rice Diet Program. Durham has been known for weight loss ever since the Rice Diet was founded here in the 1930's. Local residents can reel off a list of corpulent celebrities over the years who have left some of themselves in Durham. There was the Kentucky Fried Chicken founder, Col. Harland Sanders, who, in his trademark white suit, huffed and puffed on a local walking trail back in the 1970's while on the Rice Diet - which has evolved over the years but still largely involves a strict regimen of grains and fruit. The comedian Buddy Hackett, also a Ricer, was known for gags like ordering pizza for fellow dieters when he visited in the 1970's and 1980's. James Coco, the roly-poly comedic actor who starred on Broadway in "Last of the Red Hot Lovers," dieted at the Structure House, then published a book in 1984 promoting the diet's effects - three years before he died of a heart attack at age 56. The celebrities still come, if more guarded these days about their privacy. 
But in the last few years, as weight loss has moved beyond vanity to a matter of national medical urgency, more and more everyday people have started spending their time and money on the Durham cure. "I see a shift in people recognizing that this is something they have to deal with," said Dr. Gerard Musante, a psychologist who founded Structure House in 1977. While shedding pounds at Durham diet houses - the Rice Diet Program, the Duke Diet and Fitness Center and Structure House - dieters pump more than $51 million a year into the local economy, according to the city's Convention and Visitors Bureau. The money includes not only the fees at the diet centers, but also the money dieters spend around town on everything from new sneakers to cosmetic surgery. "The last time we did a study, the diet and fitness impact was almost equal to the conventions and meetings held in Durham in a one-year period," said Shelly Green, chief operating officer for the convention bureau. "It's very significant." Often, the dieters arrive plagued by the medical complications of severe obesity, and require visits to eye doctors, endocrinologists and surgeons. They rent apartments, set up temporary offices and sometimes come to live here permanently. As their weight drops, they buy several sets of new clothes. As their diabetes improves, they need to be fitted for new glasses. All of the spending adds to Durham's economy. The dieters are easy to spot, tackling a walking path along the three-foot-high stone wall that encircles Duke University's east campus; or milling about the Southpoint Mall, where shopping takes the place of eating; or attending games at the minor league Durham Bulls' baseball stadium. Some have been spotted cheating on their diets at Francesca's Dessert Cafe. A few dieters like Durham so much, they stay. One is Franklin Wittenberg, a former electronics importer-exporter, who came from Connecticut for a diet in 1981, disgusted with himself. Nearly 25 years later, Mr. 
Wittenberg could be called either a success or a failure of Durham's diet houses. Mr. Wittenberg, 71, has thick white hair and a chiseled face that is remarkably thin compared with his girth, which seems to engulf his office swivel chair. He lost weight several times, but could never keep it off. Mr. Wittenberg talks about the pain of being obese, but also seems to revel in his business success. While on his initial diet here at Duke Diet and staying in a scruffy Durham motel room, Mr. Wittenberg had an epiphany: dieters like him should not have to live in such quarters. So Mr. Wittenberg opened one of the country's first suite motels, a two-story complex with 114 units and a central outdoor pool. It is just across the street from the Duke center. Since opening Duke Towers in 1983, Mr. Wittenberg reckons that he has collected $50 million in rent money from dieters. Many of the overweight come to town in sandals, their feet too swollen to fit in a closed shoe. Because they must exercise, one of the dieters' first stops often is a consultation with Walter Cleary. A 74-year-old former Duke football and track coach, Mr. Cleary owns 9th Street Active Feet. The store's $2 million in annual sales owes much to equipping dieters with shoes like the Brooks Beast, a sneaker with extra arch support that comes in super-widths like EEEE and retails for $110. Shoppers try them sitting on special oversize reinforced benches. "Invariably, they're going to be overpronated," Mr. Cleary said of the dieters, using a fancy word for flat-footed. "Their weight has collapsed their arches." The other thing he has noticed, he said, is that "none of them are poor." The diet programs are not covered by insurance. A few of the morbidly obese spend their last dimes to come here. Others have enough money to return each year for annual tuneups, and those people seem to be the most successful, Mr. Wittenberg said. Durham's roots as a diet capital trace to Dr. 
Walter Kempner, a kidney specialist who fled Nazi Germany and joined Duke's medical faculty. As he was experimenting with diets to control blood pressure and promote kidney function, he found that an overweight female patient who had eaten rice and fruit while on an extended vacation returned much thinner and with her blood pressure under control. The Rice Diet was born. Dr. Kempner died in 1997, and the for-profit Rice Diet Program, which separated from Duke in 2001, is now run by a cardiologist, Dr. Robert A. Rosati. But the program continues to have the same spartan feel Dr. Kempner favored. A gravel parking lot adjoins what appears to be a large white ranch-style house. Inside, the Rice Diet House has all the decorative appeal of a slightly upscale Veterans of Foreign Wars hall, with Formica tables and leather sofas. The dieters start arriving at 7:30 a.m. for weigh-ins, thinly brewed decaf to avoid caffeine, and breakfast - about a cup of oatmeal with raisins and honey, plus a bowl of fruit. Later, they will hear lectures, go for walks, participate in group discussions or practice yoga. In the first phase of the program, the dieters eat only grains and fruit, a regimen that adds up to 800 calories a day or slightly more. Lunch and dinner each consist of small portions of rice, couscous or kasha and two fruits. Vegetables, beans and pasta are added in the second phase, when fish is also served once a week. Jeff Melchor, 43, arrived at the Rice Diet Program last Nov. 23 in an old Lincoln Town Car he bought for the cross-country drive from San Francisco. At 548 pounds, it was the only vehicle he could fit in. A friend drove him. As Mr. Melchor tells it, the alternative was death. His body was covered in Ace bandages to hide sores that seemed to be related to his diabetes. He was injecting himself with insulin eight times a day. Five months later, Mr. Melchor has lost 150 pounds and tossed out his insulin. He no longer needs it. His sores have healed. 
His shoe size has shrunk, and he has purchased several pairs at 9th Street Active Feet. As his diabetes has abated, he has required three different pairs of eyeglasses, purchased at Specs Eye Care, also on 9th Street. A team of Duke-affiliated plastic surgeons has become accustomed to operating on people who have lost huge amounts of weight. One surgeon, Dr. Michael Zenn, said he received about 50 referrals a year, from dieters and from patients who had undergone gastric bypass surgery. "They have skin hanging from their arms, they have skin hanging from their bellies," Dr. Zenn said. "They have breasts that are empty bags." Fixing these problems can cost around $25,000, depending on severity, Dr. Zenn said. Ms. Blech, who arrived here Nov. 24, 2003, carries an album with photos of herself 12 years ago when she was a bodybuilder weighing about 165 pounds, then a more recent version, when she weighed closer to her peak of 468 pounds. She attributes her weight gain to emotional issues caused by her mother's stroke-induced quadriplegia when Ms. Blech was a baby. Ms. Blech said the emotional issues have been more difficult to deal with than the weight, but she has made strides on those, as well, while in the Rice Diet's therapy programs. Ms. Blech's 220-pound weight loss has elevated her to a special dieters' Century Club plaque at the Rice Diet Program, a distinction reserved for those who have lost 100 pounds or more and kept it off. "I knew when I came down here that I was going to be broke when I left," she said. "It's worth every single penny." From checker at panix.com Fri May 20 19:03:49 2005 From: checker at panix.com (Premise Checker) Date: Fri, 20 May 2005 15:03:49 -0400 (EDT) Subject: [Paleopsych] NYT: New Monkey Species Is Found in Tanzania Message-ID: New Monkey Species Is Found in Tanzania http://www.nytimes.com/2005/05/19/science/19cnd-monkey.html [Click on the URL to find an audio file giving the sound of the new monkey.] 
By CORNELIA DEAN Two teams of American scientists, working independently hundreds of miles apart in Tanzania, have identified a new species of monkey, the first new primate species identified in Africa in 20 years. The research teams, who learned of each other's work last October, named the creature the highland mangabey or Lophocebus kipunji. They report their discovery jointly in Friday's issue of the journal Science. One team, led by Dr. Tim Davenport of the Wildlife Conservation Society, observed the monkey on Mount Rungwe and in the adjacent Kitulo National Park. Scientists in the other team, led by Dr. Carolyn L. Ehardt of the University of Georgia, discovered the same species at sites about 250 miles away in Ndundulu Forest Reserve in the Udzungwa Mountains. The scientists said there were probably fewer than 1,000 of the mangabeys living in these areas. Though the Ndundulu forest is in excellent condition, they said, the Rungwe forest habitat is under assault by loggers, poachers and others. The researchers said they expected the new species to be classified as critically endangered. The newly discovered monkey, a tree-dwelling creature, is about three feet long, with long brownish fur. It has a crest of hair on its head and abundant whiskers. Unlike other Lophocebus mangabeys, which communicate with a "whoop gobble," the new species has an unusual "honk bark," the researchers said. Dr. Colin Groves of the Australian National University, an expert on primate taxonomy, said there was "no doubt at all" that the researchers had identified a new species. The scientists from the conservation society, the organization that manages the Bronx Zoo and other parks, were working in the highlands of southwest Tanzania in early 2003 when residents told them of a shy monkey they called kipunji. Because there are strong local traditions "based on both real and mythical forest animals," the scientists said, they were not sure the creature actually existed.
But by the end of the year they had made some clear observations and concluded they had identified a previously unreported species. Meanwhile, the team from Georgia had been searching for another species of mangabey that ornithologists had reported seeing in the Udzungwa Mountains. It turned out to be another population of the new species. The researchers said the last primate species identified in Africa was the sun-tailed monkey, Cercopithecus solatus, found in Gabon in 1984, but several new primate species have been identified recently elsewhere. In part, Dr. Groves said in an e-mail message, that is because "there really has been more exploration by people with their eyes open." From checker at panix.com Fri May 20 19:04:06 2005 From: checker at panix.com (Premise Checker) Date: Fri, 20 May 2005 15:04:06 -0400 (EDT) Subject: [Paleopsych] New Scientist: Interview: A step in space-time Message-ID: Interview: A step in space-time http://www.newscientist.com/article.ns?id=mg18625002.100&print=true * 21 May 2005 * Valerie Jamieson Mark Baldwin was born in Fiji and danced with the Royal New Zealand Ballet before moving to the UK, where he developed his choreography skills. He has been artistic director at the Rambert Dance Company in London since 2002. Ray Rivers is professor of theoretical physics at Imperial College London, where he investigates phase changes in the early universe by studying solids and liquids. He is also a fan of contemporary dance and has acted as science adviser to Rambert for their newest work, Constant Speed. What do you think of art-science collaborations in general? MB: Dreary and boring. RR: Some of them anyway. They don't have a good history. I'm not saying they aren't a good idea. They are often done with good intentions, but it's just the way some of them have been realised. I remember once seeing people dressed in yellow as quarks. Once in a while, I drive past the hall where I saw them and my heart sinks every time.
MB: There has been a piece of theatre recently aimed at children about tearing holes in space and getting lost in them. I didn't want to go there. I'm not Dr Who. How is Constant Speed, your new dance for Rambert, different? MB: Constant Speed isn't worthy. It is gorgeous, cheap and nasty, and fabulous. RR: Right from the beginning we were dead against giving a physics lesson. How did it come about? MB: Jerry Cowhig from Institute of Physics Publishing had been to see the company a few times, and he was looking for projects to commission to celebrate Einstein year, which is this year. I know nothing about physics so they suggested I talk to Ray. RR: I was very interested in taking part for several reasons. First, I like Mark's choreography. I like contemporary dance and I've seen quite a bit of his work. The company is a good one too. Earlier I had made an attempt with my wife, who is involved with music and dance, to represent some basic ideas in physics with movement. Why dance? RR: Of all the art forms that one can use to express the notion of here, now and what happens then, dance is probably the best. In some sense, there are ways you can represent equations by movement because they often describe movement. The equations and ideas in Einstein's papers are very dynamical. Dance is better suited to the 1905 papers than any of the other visual arts. Were you given free rein to do anything you wanted? MB: After conversations with Ray, Brownian motion and the photoelectric effect seemed like the best places to start. I could relate to these ideas because of their abstract nature. RR: Einstein has done so many other things. It is very easy when thinking of Einstein to drift into the cold war and his concerns about nuclear weapons. But 1905 was a remarkable year and we were both very keen to base things on it. How did you start? MB: You start in an empty studio, just yourself and a few notes from Ray. 
Then you try to find a way of dealing with these things in movement. RR: I wrote some notes about the basic ideas, such as the speed of light. MB: They were wonderful. And we did a lot of talking... RR: ...trying to find visual metaphors. MB: Ray bought me a battery-operated kids' toy called a bumble ball. It is covered in blunt knobbles, and when you switch it on it bounces all over the place quite randomly. It is a fabulous example of Brownian motion, how a molecule behaves in space. That was great because it set off a whole chain of thought choreographically about what you might do. To be quite primitive about it, I made up lots of movement based on the idea that you have this thing bouncing around like crazy inside your stomach and hips, and you're not sure which direction it is going to pull you next. It is a matter of coming up with loads of ways of improvising on that idea and then teaching them to the dancers, who refine them. What about the photoelectric effect - how did you represent that? RR: The photoelectric effect describes how light comes in chunks. Each wavelength of light has a different energy and a different context. MB: Blue is much stronger than red light. I get that. We use the colour coding all the way through. The dance starts in white and then we introduce red, which is a weak colour. Then we introduce a couple of blue dancers and they dominate the whole scene. Did you use props? MB: To show light arriving in packets, we have an enormous hand-made mirror ball. The mirrors are minuscule and they are stuck randomly onto the ball so the reflections won't be where you expect. At first, the audience will just see the reflections, and later the ball will be lowered to the floor. RR: I'd love to see how that works. I've only been to rehearsals in the studio without any lighting or set. How did you illustrate E = mc^2? RR: Special relativity and the notion of a constant speed of light don't lend themselves well to simile or metaphor. 
The nearest I could come up with is taking something built up in a sequential way and then viewing it simultaneously. MB: Choreographically what I've done is take long phrases of movement in a strip along the studio and squash them. You don't know that's what the dancers do. But I know, and they know. That's how I came up with that set of movements. Sounds pretty challenging. MB: It was reasonably scary. I'd gone into most other projects knowing something. But I really didn't know anything about this. I never did physics at school. I trained to be a dancer and was much more interested in the arts. But when I did twig as to how to use Einstein's ideas, it was quite liberating. RR: At first, I was a bit nervous because I'm such a fan of Mark's choreography. Here is someone whose work you like, and the last thing you want to do is suggest something that compromises their work. Did you check it for scientific accuracy? RR: I didn't stand around saying, "No, no it's not like that." I provided information for Mark, but it is his integral vision. Will anyone watching the performance be able to pinpoint all Einstein's ideas? MB: Probably not. RR: I doubt it. It would be disappointing if they did. The main thing about Constant Speed is that it's a great piece of choreography. As artistic director of a commercial company, Mark has to get as many bums on seats as possible. MB: You don't listen to a piece of music and think, "Ooh yes, I got that." Art doesn't work like that. Dance gathers ideas and comes up with its own translation of them. It's all quite ambiguous and you can read things into it. RR: Brownian motion is the one thing anyone who is familiar with the 1905 papers will recognise; no other dance work uses the body's randomness as part of the choreography. How did you choose the music? RR: Einstein's work of 1905 started off with the classical physics tradition of the 19th century and distorted it in a way that didn't break the underlying ideas. 
So for us, there were two possibilities with the music. We could take an early 20th-century composer whose music deconstructs 19th-century classical tradition. Or we could pick someone in the classical tradition and then deconstruct it through movements. MB: At first we listened to lots of different music. From Arnold Schoenberg to the Strauss waltzes. That was my way of finding a path to where we might go. RR: Björk even came up at one stage. What did you go for? MB: Our conductor Paul Hoskins came up with the idea of using Franz Lehar from 1905 because there are lots of very interesting ideas there. RR: Lehar didn't just compose frothy music like The Merry Widow. MB: I had the mad notion that Einstein might have been listening to Lehar when he came up with the constant speed of light. Do you have a favourite bit of the dance? RR: I just thought it was tremendous. But that's the problem with being a fan already. It is a very athletic piece but there is an unpredictability the first time you see it. MB: There is a bit at the beginning for 10 women and I like it because it is quite formal, yet random at the same time. It seems quite clever and witty and you are never sure what it's going to do next, and the dancers dance it really well. Actually I like it all. It's not a long piece, only 27 minutes. Were you setting out to popularise physics? MB: Because of the size of the theatres we go to, in some ways we are popularising these ideas. Physicists don't zoom up and down the country performing to 55,000 people a year. We do. We can't help saying, "Hey, look at this, it can be fun, it can be sexy." You've got all these amazing dancers wearing incredibly glamorous costumes and moving like lightning all over the place. There is something warm and friendly about it. It is not like physics. But education does come into it. We teach workshops to about 6000 young people a year. RR: Is that one area where you might be a bit more pedagogic about the science? MB: Yes.
It's interesting when you teach young people and they have to physically interpret an idea. They have a much better chance of grasping that idea in a much more detailed fashion later on. Our education team is quite excited about these sorts of workshops. I'm really pleased because it is the kind of thing that I think contemporary dance should be doing. It should be trying to find links into other worlds because they enrich ours. Do you worry what physicists think? RR: No. I think quite a few physicists who do go will be disappointed. We are a somewhat reductive race and I can see them coming along with their tick boxes. Sorry, that's a bit rude. What have you got out of it? MB: What I've realised working on this project is that physics is in everything, absolutely everything - the way light arrives, the way our realities are different, the way time is different to each person and the only constant thing is the speed of light. The whole universe is about physics and that is a fantastic thing for a dance audience. I've been involved in dance for 30 years and the people coming to our show won't know anything about physics beforehand. Are there other areas of physics that might lend themselves to dance? RR: There are all sorts of things. Maybe a potted history of motion, starting with the way our notion of friction dominated our basic understanding of the way the world worked. If Aristotle had had rollerblades, he would have come to a different way of viewing the world. MB: That's interesting because it is what choreography is about. In society, dance doesn't seem to be as important as music. So thinking about all the profound ways that motion affects the universe is a lovely pat on the back for the dancers. 
From checker at panix.com Fri May 20 19:04:19 2005 From: checker at panix.com (Premise Checker) Date: Fri, 20 May 2005 15:04:19 -0400 (EDT) Subject: [Paleopsych] theBookseller.com: The review malaise Message-ID: The review malaise http://www.thebookseller.com/?pid=42&did=15911 Literary editors are turning what should be a force of good into a waste of time, says Scott Pack. Every Saturday and Sunday I read the broadsheets. All of them. I settle into my armchair at 9 a.m. with a cup of tea and commence thumbing through newsprint to unearth the books pages. When I emerge hours later, looking not unlike a chimney sweep, I am thoroughly depressed. The reason for my unusual melancholy? The realisation that literary editors are increasingly turning what should be a force for good in our industry into a complete waste of time. Book reviews should inspire reading. They should excite, stimulate, agitate and empower readers to discover new books and avoid bad ones. They should turn you on to undiscovered authors, prompt you into finally reading the writer you have never quite got round to, and make you wonder at the world of delights that remain unread. But let's be honest. They don't, do they? A full-page review of a biography of a largely forgotten academic with an unfeasible beard. A literary fiction hardback that everyone else reviewed months ago. Nearly 2,500 words on yet another Nazi history. Four "chick-lit" books reviewed in one piece, at the end of which the reader is none the wiser as to their relative worth. Hardly a recipe for inspiration, but these were the lead features in a particular newspaper one Saturday in April. It was very dull. The beard was the highlight. The result of this awkward mish-mash is that reviews no longer sell books in the volume that they used to. 
In more unguarded moments, usually involving a glass of something, you can get publishers to admit that they only push hardbacks for review so that they can generate quotes for the paperback jacket. Hardly the most sincere of motives. Don't get me wrong. Reviews can sell books, and should do. When you get several positive reviews of a book around publication it can help to stimulate interest and hopefully sales. The problem is that this so rarely happens, and it won't happen regularly while authors, agents, publishers and retailers sit back and do nothing. If the music pages can manage to feature a diverse selection of CDs, all released that week, then surely we can encourage one literary editor to do the same? I bet if any did succumb they would find many more publishers willing to spend advertising money with them. Until then, I am giving up my weekend routine and will be tuning in to "Dick and Dom" instead. Scott Pack is buying manager at Waterstone's

From checker at panix.com Fri May 20 19:04:30 2005 From: checker at panix.com (Premise Checker) Date: Fri, 20 May 2005 15:04:30 -0400 (EDT) Subject: [Paleopsych] BBC: Penguin's enduring shelf life Message-ID: Penguin's enduring shelf life http://newsvote.bbc.co.uk/mpapps/pagetools/print/news.bbc.co.uk/1/hi/entertainment/arts/4558711.stm Published: 2005/05/19 13:19:37 GMT

Penguin, pioneer of the paperback novel, is celebrating its 70th anniversary. Defined by generations of iconic covers, Penguin publications adorn bookshelves across the globe, prompting a rush of nostalgic affection. But what makes Penguin one of the world's most enduring publishers?

THE FOUNDER

Contrary to popular belief, Penguin founder Allen Lane did not invent the paperback book, but he made it respectable. Prior to 1935, when Mr Lane and his brothers, Richard and John, gambled their own cash to publish 10 titles, paperbacks were synonymous with "pulp fiction", cheap novels typified by lurid covers and poor-quality writing.
A natural businessman, Mr Lane noticed a gap in the market for quality paperbacks while stranded at Exeter station with nothing to read. It was his aim to make quality books as affordable as a packet of cigarettes - just sixpence - opening up the world of literature to those who had previously borrowed from the library. The first 10 Penguin paperbacks were reprints of hardback books, among them Agatha Christie's The Mysterious Affair at Styles and Hemingway's A Farewell to Arms. Rights were sold by hardback publishers who did not believe Mr Lane would be able to pull off the enterprise. It was certainly a gamble. To make a profit, Mr Lane had to sell 17,500 copies of each of the 10 books. But after a slow start the project was rescued by a large order from Woolworths, and sales soared. More than 150,000 copies were sold in the first four days and three million by the end of the year. A year on, it was estimated that a Penguin book was bought every ten seconds. And in 1937, the Penguincubator was born - vending machines on railway platforms - taking Mr Lane's idea back to its original birthplace.

THE COVERS

Allen Lane understood that great ideas need great marketing, and it was Penguin's inspired branding that turned the company into a publishing tour de force. The two-tone covers allowed the publisher to share equal space with the author and title, while the small penguin logo - sketched at London Zoo - underscored the company's pulling power. Mr Lane considered illustrated book covers to be crass, and stole the idea of using colour-coded covers from an Anglo-German publisher called Albatross Verlag. Readers quickly came to recognise genres by colour: orange for fiction, green for crime novels, blue for biography and so on. But as the company evolved, so too did the covers. In the 1960s they were defined by Italian art director Germano Facetti.
Facetti added a compelling image, be it a graphic illustration or a historic painting, to the tripartite covers, designed to echo the contents of the book. Classic works like Miro's The Tilled Field adorned George Orwell's Animal Farm, while a Duffy landscape beckoned the reader to delve into EM Forster's A Room With A View. But JD Salinger's Catcher in the Rye had a blank cover because the author refused to condone any illustration. The covers became increasingly commercial - and for many readers lost their iconic look. Allen Lane abhorred the vulgar illustrations and branded the covers "breastsellers".

THE AUTHORS

Penguin has established strong bonds with many of its authors over the course of its 70-year history. Evelyn Waugh, Muriel Spark, Roald Dahl, Margaret Drabble, Nick Hornby and Zadie Smith are among the authors championed by the publishing house. Nowadays, there are some 5,000 different titles in print at any time, translated into up to 62 languages. A champion of free speech, Penguin has defended many of its authors. It published Salman Rushdie's The Satanic Verses in 1988 despite accusations of blasphemy against Islam, and the subsequent fatwa against Rushdie. It also successfully defended a libel suit brought by revisionist historian David Irving in 2000 after the publication of Professor Deborah Lipstadt's Denying the Holocaust. But the most famous legal battle took place in the 1960s when Penguin was prosecuted under the Obscene Publications Act over its decision to publish DH Lawrence's Lady Chatterley's Lover in full. Penguin's acquittal marked a turning point in British censorship laws, and two million copies of Lawrence's scandalous novel were sold in six weeks. From its humble beginnings in a church on London's Marylebone Road, Penguin had become a national institution.
From checker at panix.com Fri May 20 19:04:42 2005 From: checker at panix.com (Premise Checker) Date: Fri, 20 May 2005 15:04:42 -0400 (EDT) Subject: [Paleopsych] They Do Know Squat About Art Message-ID: They Do Know Squat About Art http://www.washingtonpost.com/wp-dyn/content/article/2005/05/18/AR2005051802340_pf.html At Auction, Bidders Are Not Moved by Tom Friedman's Feces on a Cube By David Segal Washington Post Staff Writer Thursday, May 19, 2005; C01 NEW YORK -- It's little more than a scribble, a quick slash of ink on a 12-by-18-inch piece of plain white paper. If you saw it at the office, you might ball it up and toss it into the trash, or fold it into an airplane and fling it down the hall. It is unlikely you'd do what Christie's auction house did last week: try to sell it for $20,000. That was the low end of the estimated price for this "ink on paper," as it was dryly described in the Christie's catalogue, by an artist living in Massachusetts named Tom Friedman. It was on display last week during the preview for the house's annual spring auction, where potential buyers and interested gawkers get a chance to sniff over the merchandise before it hits the block. Even in the often mystifying alternative universe of contemporary art -- where you occasionally can't suppress philistine thoughts of the Wait, I could have done that variety -- this piece stood out. There it was, amid the Warhols and Basquiats, not more than 100 feet from an Edward Hopper, hanging with the titans. "Starting an old dry pen on a piece of paper," explained the Christie's catalogue. Which is to say, this thing is exactly what it looks like. And 20 grand seemed reasonable compared with another Friedman piece being sold at the same auction. This one, also untitled, is a two-foot white cube with a barely visible black speck set right in the middle of the top surface. Would you like to guess what that black speck is? You're advised to think outside the box.
To again quote Christie's, it is ".5mm of the artist's feces." Yes, Tom Friedman put his poop on a pedestal, and last week Christie's tried to sell it, with bidding to start at $45,000. Auction season in Manhattan is a two-week spending spree of paddle-waving rich people and art dealers in Prada suits, all of them vying for highbrow booty at Christie's and its archrival, Sotheby's. The regulars were asking questions like "How much will the Hopper fetch?" and "Which house will gross more?" But if you'd never visited Planet Expensive Art, you didn't care about that, not after you spotted those Friedmans. After that, all you could wonder is: How does an artist peddle his doody, not to mention his doodle? And here's another stumper: Who would buy it? When it's showtime at Christie's, as it was last Wednesday night, the streets around Rockefeller Center in midtown Manhattan are crammed with black limousines. Nobody walks to a fine-art auction, or takes the subway or a cab. You get chauffeured to the event, then walk up a flight of stairs, and if you're a heavy hitter you're given a bidding paddle and a reserved seat, near the front of an expansive room. There's a row of bidders along one wall, all of them on the phone with collectors who are either too busy or too publicity-shy to show up in person. The art appears on a turntable at the front, or, if it's too large to fit or too small to see, on a video screen nearby. The auctioneer stands before everyone, to the right of the goodies, at a lectern. "And one million two hundred thousand dollars starts it," said auctioneer Christopher Burge, selling a piece by artist Jeff Koons called "Small Vase of Flowers." To the untrained eye, this piece looked a lot like a small vase of flowers. The wonder of this spectacle flows largely from the massive sums involved and how quickly the money is spent. In a busy 10 minutes, $15 million will change hands here. 
It's just like in the movies: The bidders motion so subtly that you hardly see them move, and when the numbers get large enough, the room starts to buzz. "One million seven hundred thousand, one million eight hundred thousand," Burge said, motioning around the room. "Against you now," he added, pointing at someone. Against you now. That's a prod to the underbidder that roughly translates to "Are you going to cough it up or not?" Like every house, Christie's earns its commissions -- a sliding scale that starts at 20 percent on the first $200,000 -- by turning these events into a sort of roller derby for the rich. Except that none of the rollers emotes or says much of anything. "It's important to try to keep a level head," says Harry Blain, a London-based dealer and gallery owner who last week bid on multimillion-dollar paintings on behalf of several clients. "Otherwise you'll get caught up in the emotion and forget about the value, which is what the auction houses would like to have happen. It's not accidental that the whole thing is set up to be so theatrical." While the bidding escalates, a huge electronic tote board behind the auctioneer instantly translates the figures into yen, euros and other currencies, giving the whole affair a very James Bond international flavor. "Fair warning" is offered when the bidding slows to a halt and then Burge slams that rocklike thing in his hand against the lectern, adding a little tally-ho! flourish with his arm when he really gets excited. Every piece has a reserve price, which eBay users know is a figure set by the owner of the art, below which he (or she) won't sell. So Christie's might start the bidding at, say, $1 million, but if the reserve is $1.3 million and the high bid is $1.1 million, the auctioneer says "passed," and the item stays with its owner. It's always a little awkward when things don't sell, so good auctioneers sort of mutter "passed," or they say it as they bang the gavel, so it's not all that obvious.
Last Wednesday, there were only a handful of passes. That was the night that big-ticket contemporary art went up for sale, including works by Roy Lichtenstein, Jasper Johns and Lucian Freud. Total take for the evening, including commissions: $133.7 million. Tom Friedman's pieces went up for auction the next day, during the Thursday afternoon session, along with roughly 100 other items. The estimates for these pieces are far lower -- some eventually go for as little as $15,000 -- but being bought and sold in the secondary art market at this level is a big deal, and at 40, Friedman is among the youngest. It turns out he's already been exhibited in the world's most prestigious galleries and contemporary art museums. How did that happen? "I guess it's a slow process," he says from his studio in Amherst. If you're expecting a prankster or someone guffawing behind the back of his admirers, Friedman is a surprise. He's an earnest guy and although he recognizes that a lot of his art is funny, he isn't joking, nor is he playing for laughs. About his work, he's entirely candid and, frankly, the more he explains it, the more compelling it seems. "I'll either have an idea that will lead me to a material, or I'll see a material that will lead me to an idea," he says. He tends to use stuff that you'd find around the house (glue, paper, Play-Doh), so that hey-I-could-do-that response is no accident. That squiggle aside, most of his work is obsessively composed. He once carved a self-portrait on an aspirin. (And it looks like him!) He made a perfect sphere out of 1,500 pieces of bubble gum he chewed, which he then wedged into the corner of a wall. Another time he placed his pubic hairs on a bar of soap, arranging them in perfect circles, like the rings on a radar screen. His rise to prominence happened fast. While he was at the University of Illinois getting a graduate degree in art, a teacher praised his work to a New York gallery owner known only as Hudson.
Among the pieces Hudson saw during a visit to Chicago was a spiral made of laundry detergent. "It seemed to me that he had an open-ended area of investigation and a finesse with materials, or a dialogue with materials and how to get them to work and to resonate," says Hudson. "I was really impressed also with his ability to edit his own work, and to present it in a professional but not fussy manner." Hudson's gallery, Feature Inc., held a Friedman exhibit in 1991 and the show caught the eye of Chuck Close, a painter of considerable renown. Endorsements like that are invaluable in the art world, and in 1995 the Museum of Modern Art came calling for a show it was putting together. You'll find Friedman's art in some well-known collections, too. For a while, the pedestal was owned by Charles Saatchi, one of the world's most famous collectors. "I wanted to find a material that you could present the smallest amount of and it would have the most impact," Friedman says of the piece. "I was really interested in minimalism then and with minimalism there's this sense of purity, of clean forms and geometry. I really liked the juxtaposition. The cube is logical and clean. The feces is regressive and insane." The first time he exhibited the piece, someone at the gallery thought it was a stool. Scratch that. Someone thought it was a seat and sat on it. Friedman saw it happen and yelled "Stop!" but too late. He was unable to find the small, crucial part of the piece. "I had to go home and make some more," he says. On their big day, the Friedman items came up early. A fight for the ink scrawl started at $14,000 and within about six seconds it had sold for $26,400, including commission, to a guy in a fuchsia sweater. Then it was time for the poop on a cube, or Lot 416 as it was called by auctioneer Barbara Strongin. "Lot 416, now showing on the screen," she said. "And $45,000 to start here. At $45,000. $48,000, at $50,000. Any advance from 50?" 
It might seem like someone was bidding from the way the price went up but that apparently was just the auctioneer trying to gin up interest and give the sale some forward momentum, an accepted and common tactic. There were no bidders. Strongin paused for a moment, then gave up. "Down it goes, at $50,000," she said. And as the white cube and the teeny dropping vanished from the screen, Strongin added a word that never in the history of fine art has ever rung so true: "Passed." From checker at panix.com Fri May 20 19:04:52 2005 From: checker at panix.com (Premise Checker) Date: Fri, 20 May 2005 15:04:52 -0400 (EDT) Subject: [Paleopsych] bd's mongolian barbeque opening a restaurant in Mongolia itself Message-ID: bd's mongolian barbeque: Welcome! http://bdsmongolianbarbeque.com/ulaanbaatar.html 5.5.12 [There's not a Mongolian in sight in the restaurant in Bethesda. Only Whites (though on one theory the Mongolians were White), plus one black guy among those behind the grill. So you don't need immigrants to produce ethnic food, here, or in Mongolia itself! [Click on the URL to get lots of photographs. I give an indication of what they are in the article.] [grandopening.gif] (May 2005) Where do I begin? Do I begin by telling you the story of how this thing first started or do I tell you how important a continuous supply of propane gas is to a restaurant in Ulaanbaatar? There's so much to tell about this experience, enough for a book! Stay tuned. I do have to tell you how proud I am of all the BD's family and friends who have worked so hard to get this amazing project off the ground. Today, BD's has a new franchise restaurant in Ulaanbaatar, Mongolia. Today, Mongolia has its first American Franchise Restaurant! Today, the wonderful and historic Mongol Empire form of cooking has returned to Mongolia. Today, the Mongolian Youth Development Foundation has been set up with an annual donor it can count on.
Today, our first international bd's mongolian barbeque is open and it's time to build the business. Our first day, May 12, 2005 was hugely successful. We were coming off of three great V.I.P. nights (practice nights before we officially open) and the momentum did not stop. In fact, the only thing that did stop was the anticipation. It is over and the restaurant is now trading. Customers are coming in from all sides of Ulaanbaatar. There are many Mongolian customers and a lot of foreigners. I am most intrigued by how fast the word has spread. Yes, Esunmunkh (our franchise partner) and his team have done a wonderful job with advertising and public relations, but better has been the rapid word of mouth. We seem to be the coolest new thing in town. Not only that, guests are raving! They are raving about the food, the service and the fun. They are glad and proud that we are in town. Obviously it will be months and maybe years until we truly know how successful we are. In the meantime, the BD's team has climbed another mountain. A mountain we should be damn proud of getting to the top of! Stay tuned for more updates, photos and information on this wonderful story. Go Mongo! Billy "BD" Downs May 12, 2005 [ub02.jpg] Two of the many authentic Mongolian pieces of art adorning the bd's in Ulaanbaatar. [ub03.jpg] Billy `bd' Downs and Esunmunkh (bd's franchise partner) toast the opening of bd's in Ulaanbaatar. [ub04.jpg] Billy Downs and Esun celebrate the opening of the Detroit American Bar. [ub05.jpg] Trainer David Cupchak (Ann Arbor, Mich. bd's) teaches proper bar service to the Mongolian team. [ub06.jpg] Trainers Jenny Phan (Novi, Mich. bd's) and Tim Shebak (Ann Arbor, Mich. bd's) review training topics for the day. [ub07.jpg] The entire bd's mongolian barbeque team takes a moment for a group photo before opening. [ub08.jpg] Tim Shebak and two managers pose in front of the grill. [ub09.jpg] The outside of bd's mongolian barbeque in Ulaanbaatar, Mongolia! 
[ub10.jpg] bd's mongolian barbeque hosts the United States and India Ambassadors. [12][mongolianchildren.jpg] Billy "bd" Downs has been searching for an avenue to give back to Mongolia for quite some time. His first trip to Mongolia in 1997 was spent touring the capital city of Ulaanbaatar, visiting nomadic families throughout the countryside and learning about Mongolian customs. While this trip provided Downs with a deep understanding of the Mongolian culture and lifestyle, his goal to become intimately involved with the country was not achieved. For the next several years, Billy Downs continued to research opportunities to help the country of Mongolia. Through this research Downs met Myagmar Esunmunkh, a native of Mongolia who runs the Mongolian Youth Development Foundation (MYDF). This non-profit foundation located in Ulaanbaatar, Mongolia was first established in 1995 and provides programs designed to help teach life skills to the children of Ulaanbaatar. In the early part of 2004, after meeting on several occasions in the United States, the two formed a relationship based on a mutual interest in each other's culture. In September 2004 Billy traveled back to Mongolia, again with the goal of establishing an opportunity to give back to this developing country. ([13]Click here to read about his trip in detail.) During his trip, Billy and Esunmunkh met and discussed ways in which the two could work together to assist both the Mongolian Youth Development Foundation, as well as the country of Mongolia as a whole. [esunmunkh.jpg] 'BD' with MYDC President Esunmunkh Myagmar These discussions led to a partnership and plans to franchise a [bds.gif] in Ulaanbaatar with the idea that the profits of the restaurant would be used to support the Mongolian Youth Development Foundation. 
The partnership between Billy Downs and Myagmar Esunmunkh aims to not only contribute to a very important charitable organization through the profits of the restaurant, but to also assist in the growth of the local economy through the employment of 20-30 local Mongolians. With the use of their earnings, these employees will be able to feed, house and clothe both their immediate family, as well as extended family members. Satisfied that this is the perfect way to give back to Mongolia, Billy "bd" Downs will be opening the doors to [bds.gif] - Ulaanbaatar in May, 2005. For more information on the Mongolian Youth Development Foundation, visit: [14]www.mydc.org.mn. [helpsupport.gif] Help support the youth of Mongolia by purchasing cool stuff! From May 2nd through July 17th, all bd's mongolian barbeque locations will be selling World Tour t-shirts for $10 and specialty pint glasses for $4. $5 of each t-shirt and $2 of each pint glass will go to support the youth of Mongolia. World Tour T-shirt Donate to the youth of Mongolia! To make a tax-deductible cash donation, please contact Mongo Charities at (248) 398-2560. [const1.jpg] bd's Ulaanbaatar under construction [const2.jpg] Construction workers taking a break [const3.jpg] Two months from now this will be a fine dining establishment! Links: [15]bd's Mongolia trip journal and photo gallery August 2004 [16]Mongolia location announcement October 2004 [17]Mongolian Youth Development Center References 12. http://www.mydc.org.mn/ 13. http://bdsmongolianbarbeque.com/mongotrip.html 14. http://www.mydc.org.mn/ 15. http://bdsmongolianbarbeque.com/mongotrip.html 16. http://bdsmongolianbarbeque.com/news_nrn1.html 17. 
http://www.mydc.org.mn/ From shovland at mindspring.com Sat May 21 00:24:18 2005 From: shovland at mindspring.com (Steve Hovland) Date: Fri, 20 May 2005 17:24:18 -0700 Subject: [Paleopsych] The Liquid Universe Message-ID: <01C55D60.C1809440.shovland@mindspring.com> This is the same phenomenon as prayer and the epigenetic control of gene expression. Steve Hovland www.stevehovland.net -----Original Message----- From: ldj [SMTP:ldj at mail.sisna.com] Sent: Friday, May 20, 2005 9:49 AM To: Steve Hovland Subject: Re: [Paleopsych] The Liquid Universe Steve, this is fascinating, because I find the same thing happens to me. In the old Whole Earth Catalog by Stewart Brand (ca. 1969), there was an article about this very thing, a marketing guy writing that just before a big ad campaign launches, sales go 'way up. He said you have to actually run the campaign for the bounce to happen. Also, just before vaccinations for a particular disease are launched, the prevalence of that disease drops. I always thought it was an artifact of better nutrition or something; now I wonder if there is some kind of anticipative principle in the universe? Something about time running both ways? Recall the random number generators that went crazy just before 9/11? One is at Princeton, investigating anomalous activity. This may bear on the research that shows some effect of prayer on people with illnesses. I'm at work, so if you can forward this on to the list, if you think it contributes, I would be grateful. Lynn ---------- Original Message ---------------------------------- From: Steve Hovland Reply-To: The new improved paleopsych list Date: Fri, 20 May 2005 07:41:54 -0700 >Awhile ago one of my photography clients mentioned >that she believed in the "liquid universe." > >The idea is that if you push in some place, you will >get a response from somewhere else. > >On Wednesday I made about 40 sales calls looking >for photo shoots.
> >On Thursday someone I didn't talk to on Wednesday >called to say she had several things she wanted me >to do :-) > >Steve Hovland >www.stevehovland.net > > >_______________________________________________ >paleopsych mailing list >paleopsych at paleopsych.org >http://lists.paleopsych.org/mailman/listinfo/paleopsych > _________________________________ SISNA...more service, less money. http://www.sisna.com/exclusive/ From shovland at mindspring.com Sat May 21 00:32:19 2005 From: shovland at mindspring.com (Steve Hovland) Date: Fri, 20 May 2005 17:32:19 -0700 Subject: [Paleopsych] Magic Message-ID: <01C55D61.E045D150.shovland@mindspring.com> Conscious operation of quantum effects. Steve Hovland www.stevehovland.net From shovland at mindspring.com Sat May 21 14:32:26 2005 From: shovland at mindspring.com (Steve Hovland) Date: Sat, 21 May 2005 07:32:26 -0700 Subject: [Paleopsych] The Biology of Belief: How Thoughts Control Life Message-ID: <01C55DD7.3E1A7400.shovland@mindspring.com> http://beliefbook.com/ This book will forever change how you think about your own thinking. Stunning new scientific discoveries about the biochemical effects of the brain's functioning show that all the cells of your body are affected by your thoughts. The author, a renowned cell biologist, describes the precise molecular pathways through which this occurs. 
Using simple language, illustrations, humor and everyday examples, he demonstrates how the new science of epigenetics is revolutionizing our understanding of the link between mind and matter, and the profound effects it has on our personal lives and the collective life of our species From shovland at mindspring.com Sat May 21 15:07:51 2005 From: shovland at mindspring.com (Steve Hovland) Date: Sat, 21 May 2005 08:07:51 -0700 Subject: [Paleopsych] UK firm claims breakthrough in fuel cell technology Message-ID: <01C55DDC.2F7CE630.shovland@mindspring.com> http://news.yahoo.com/s/nm/20050519/sc_nm/energy_fuelcells_dc LONDON (Reuters) - A small British technology company on Thursday claimed to be on the verge of unlocking the vast potential of fuel cells as a commercially viable source of green energy. Cambridge-based CMR Fuel Cells said it had made a breakthrough with a new design of fuel cell which is a tenth of the size of existing models and small enough to replace conventional batteries in laptop computers. "We firmly believe CMR technology is the equivalent of the jump from transistors to integrated circuits," said John Halfpenny, the firm's chief executive. Fuel cells have for years been touted as the next big green From kendulf at shaw.ca Sat May 21 23:25:26 2005 From: kendulf at shaw.ca (Val Geist) Date: Sat, 21 May 2005 16:25:26 -0700 Subject: [Paleopsych] The Biology of Belief: How Thoughts Control Life References: <01C55DD7.3E1A7400.shovland@mindspring.com> Message-ID: <000e01c55e5c$5e7e39e0$873e4346@yourjqn2mvdn7x> Thank you Steve! Sincerely, Val Geist ----- Original Message ----- From: "Steve Hovland" To: "paleopsych at paleopsych. org (E-mail)" Cc: "Stan Stevens (E-mail)" Sent: Saturday, May 21, 2005 7:32 AM Subject: [Paleopsych] The Biology of Belief: How Thoughts Control Life > http://beliefbook.com/ > > This book will forever change how you think about your own thinking. 
> > Stunning new scientific discoveries about the biochemical effects of the > brain's functioning show that all the cells of your body are affected by > your thoughts. > > The author, a renowned cell biologist, describes the precise molecular > pathways > through which this occurs. > Using simple language, illustrations, humor and everyday examples, he > demonstrates > how the new science of epigenetics is revolutionizing our understanding of > the link > between mind and matter, and the profound effects it has on our personal > lives and > the collective life of our species > > > _______________________________________________ > paleopsych mailing list > paleopsych at paleopsych.org > http://lists.paleopsych.org/mailman/listinfo/paleopsych > > > -- > No virus found in this incoming message. > Checked by AVG Anti-Virus. > Version: 7.0.322 / Virus Database: 266.11.14 - Release Date: 5/20/2005 > > From checker at panix.com Sun May 22 01:14:01 2005 From: checker at panix.com (Premise Checker) Date: Sat, 21 May 2005 21:14:01 -0400 (EDT) Subject: [Paleopsych] NYTBR: 'Against Depression': Anatomy of Severe Melancholy Message-ID: 'Against Depression': Anatomy of Severe Melancholy http://www.nytimes.com/2005/05/22/books/review/22ANGIERL.html AGAINST DEPRESSION By Peter D. Kramer. 353 pp. Viking. $25.95. By NATALIE ANGIER PETER D. KRAMER, author of the phenomenally successful ''Listening to Prozac,'' may be thought of as America's Dr. Depression, and he may have done more than anybody else to illuminate the clawing, scabrous, catastrophic monotony that is depressive illness. But he has never suffered from the mental disorder himself. Not that he's a chipper bon vivant. ''I am easily upset,'' he writes in ''Against Depression.'' ''I brood over failures. I require solitude. . . . In medieval or Renaissance terms, I am melancholic as regards my preponderant humor.'' Still, he has never qualified for a diagnosis of even low-level depression. 
My first reaction to that biographical detail was to question Kramer's authority on the subject. How can you really understand what pain is, I wondered, if you've never felt the Cuisinart inside? I quickly dropped my objections, however, when I realized I was doing for depression precisely what Kramer warns against in this eloquent, absorbing and largely persuasive book. I was lifting it to the status of the metaphysical, or at least the meta-medical. I was granting to its specific pain the presumed reimbursement of revelation, the power to ennoble, instruct and certify the sufferer. By contrast, I'd never insist that my endocrinologist suffer my autoimmune disorder before treating me or talking publicly about autoimmunity; or that my endodontist, before extracting my infected dental pulp, first be ''enlightened'' with a few root canals of his own. That Kramer has not been depressed may in fact allow him to resist doing what depressives, and those who love them, too readily do, which is romanticize and totemize and finally trivialize the illness. Instead, Kramer, who is a clinical professor of psychiatry at Brown University, sees depression for what it is. ''It is fragility, brittleness, lack of resilience, a failure to heal,'' he writes. It is sadness, hopelessness, chronic exhaustion allied with corrosive anxiety, a loss of any emotion but guilt, of any desire but to stop, please stop, and to stay stopped, forever. ''Depression is a disease of extraordinary magnitude,'' he says, and ''the major scourge of humankind.'' Found by the World Health Organization to be the single most disabling disease, depression afflicts people of every age, class, race, creed and calling: as many as 25 percent of us will be caught in its vise at least once in our lives. The disease blights careers, shatters families and costs billions of dollars in lost workdays a year. 
Kramer cites studies putting the annual workplace cost in this country alone at $40 billion -- the equivalent of 3 percent of the gross national product. Depression also kills, through suicide, heart disease, pneumonia, accidents. Forget the persistent myth of depression as a source of artistry, soulfulness and rebellion. Depression doesn't fan creative flames. It is photophobic and anhedonic and would rather just drool in the dark. Kramer wrote ''Against Depression'' to dispel what he sees as the lingering charisma of the disease. And yes, people talk about it now as a biological disease rather than a moral or spiritual failing. The stigma of mental illness has mainly faded, and antidepressants are among the most widely prescribed of all medications. Nevertheless, in the dozen years since the publication of ''Listening to Prozac,'' Kramer has seen plenty of resistance to the idea that depression, like cancer, AIDS or malaria, is a disease without redeeming value, best annihilated entirely. He has read stacks of depression memoirs, and though most have parroted the party line that depression is a disease like any other, ''hints of pride almost invariably showed through, as if affliction with depression might after all be more enriching than, say . . . kidney failure.'' The writers couldn't help conveying the message: ''Depression gave me my soul.'' Moreover, whenever Kramer gives a talk, sooner or later an audience member invariably asks The Question. So, Dr. Kramer, what would have happened if van Gogh had taken Prozac? Or Kierkegaard? Or Virginia Woolf? The implication of the question is obvious. Throw out the depression bath water and, whoops, there go ''Starry Night'' and ''Mrs. Dalloway'' with it.
Kramer presents a sustained case that depression, far from enhancing cognitive or emotional powers, essentially pokes holes in the brain, killing neurons and causing key regions of the prefrontal cortex -- the advanced part of the brain, located just behind the forehead -- to shrink measurably in size. He lucidly explains a wealth of recent research on the disease, citing work in genetics, biochemistry, brain imaging, the biology of stress, studies of identical twins. He compares the brain damage from depression with that caused by strokes. As a result of diminished blood flow to the brain, he says, many elderly stroke patients suffer crippling depressions. Is stroke-induced depression a form of ''heroic melancholy''? If not, then why pin merit badges on any expression of the disease? Rallying his extensive familiarity with art and literature, Kramer argues that history's depressive luminaries were creative not because of but despite their struggles with mental illness -- as a result of their underlying resilience, a quality he admires. Kramer envisions a utopian future in which neuro-resilience and neuro-regeneration may be easily induced with drugs or gene therapy. How much more intellectually and emotionally courageous might we be, he asks, how much more readily might we venture out on limbs and high wires, if we knew a private trampoline would always break our fall? KRAMER'S narrative is not seamless. He argues that depression has long been very much among us, and he rightly discounts pat evolutionary hypotheses about the disease's ''adaptive value,'' but he doesn't offer much of an explanation himself for how a condition so devastating has come to be so common. Kramer can also sound defensive and willfully dour.
To counter possible charges of superficiality or a fondness for smiley-face fixes, he presents his ''bona fides as a person who can appreciate alienation, both the social and existential varieties,'' among them being a New York-born German Jew who lost many relatives in the Holocaust. He rejects our habitual conflation of tragedy with depth and joy with shallowness, yet when A. L. Kennedy, author of the memoir ''On Bullfighting,'' struggles to find some lightness by recalling how her suicidal fantasies clashed with her fear of public embarrassment, Kramer dismisses her attempts as an author's version of ''meeting cute.'' Ah, but self-mockery can be a small source of joy, even redemption, which is why, whenever I lapse into hand-wringing, I recall Ezra Pound's ode to misery, a parody of A. E. Housman: ''O woe, woe, / People are born and die, / We also shall be dead pretty soon / Therefore let us act as if we were dead already.'' Now that's what I call cute. Natalie Angier, who contributes science articles to The Times, is completing a book about the scientific canon. From checker at panix.com Sun May 22 01:14:20 2005 From: checker at panix.com (Premise Checker) Date: Sat, 21 May 2005 21:14:20 -0400 (EDT) Subject: [Paleopsych] NYTBR: 'Everything Bad Is Good for You': The Couch Potato Path to a Higher I.Q. Message-ID: 'Everything Bad Is Good for You': The Couch Potato Path to a Higher I.Q. http://www.nytimes.com/2005/05/22/books/review/22KIRNL.html EVERYTHING BAD IS GOOD FOR YOU How Today's Popular Culture Is Actually Making Us Smarter. By Steven Johnson. 238 pp. Riverhead Books. $23.95. By WALTER KIRN To people over a certain age the idea that popular culture is in decline is a comforting one, which may explain its deep appeal. If today's TV shows are worse than yesterday's, and if new diversions like video games are inferior to their earlier counterparts, whatever those might be (Scrabble? Monopoly?), then there's no harm in paying them no attention. 
To a 40-year-old who's busy with work and family, the belief that he isn't missing anything by not mastering ''SimCity'' or by letting his 10-year-old program the new iPod is a blessed solace. If the new tricks are stupid tricks, then old dogs don't need to learn them. They can go on comfortably sleeping by the fire. The old dogs won't be able to rest as easily, though, once they've read ''Everything Bad Is Good for You,'' Steven Johnson's elegant polemic about the supposed mental benefits of everything from watching reality television to whiling the night away playing ''Grand Theft Auto.'' Johnson, a cross-disciplinary thinker who has written about neuroscience, media studies and computer technology, wants to convince us that pop culture is not the intellectual tranquilizer that its sound-alike critics have made it out to be but a potent promoter of cerebral fitness. The Xbox and ''The Apprentice,'' he contends, are pumping up their audiences' brains by accustoming them to ever-increasing levels of complexity, nuance and ambiguity that work on brain cells much as crunches do on the abdominal muscles. The depressing corollary to his thesis is that if a person isn't doing these exercises -- perhaps because he's too busy raising children who engage in them compulsively -- he's getting flabby from the neck up. Johnson's argument isn't strictly scientific, relying on hypotheses and tests, but more observational and impressionistic. It's persuasive anyhow. When he compares contemporary hit crime dramas like ''The Sopranos'' and ''24'' -- with their elaborate, multilevel plotlines, teeming casts of characters and open-ended narrative structures -- with popular numbskull clunkers of yore like ''Starsky and Hutch,'' which were mostly about cool cars and pretty hair, it's almost impossible not to agree with him that television drama has grown up and perhaps even achieved a kind of brilliance that probably rubs off on its viewers. 
About the fact-filled dialogue on shows like ''E.R.'' and ''The West Wing,'' he writes: ''It rushes by, the words accelerating in sync with the high speed tracking shots . . . but the truly remarkable thing about the dialogue is not purely a matter of speed; it's the willingness to immerse the audience in information that most viewers won't understand.'' As a child of ''Kojak'' -- a series that taught me nothing except a peculiar, tough-sounding ethnic accent with which I could entertain my buddies -- I don't like to think that merely by watching TV today's teenagers are boosting their I.Q.'s (for that, I had to plow through ''Moby-Dick''; or so my teachers told me). I'm afraid I have to cede the point, however, because I've seen ''24'' as well, which is to ''Kojak'' what playing the video game ''Doom'' is to zoning out in front of ''Captain Kangaroo.'' Johnson posits a number of mental mechanisms that are toned and strengthened by the labor of figuring out the rules of high-end video games and parsing the story structures of subtle TV shows. Playing physiologist, he asserts that the games address the dopamine system by doling out neurochemical rewards whenever a player advances to a new level or deciphers a new puzzle. These little squirts of feel-good brain juice aggravate a craving for further challenges, until the Baby Einstein at the joystick has worked himself into an ecstasy of problem-solving that, Johnson tells us, will serve him well in later life (though he's vague about exactly how). Johnson calls the relevant intellectual skills ''probing'' and ''telescoping,'' and defines them as the ability to find order in bewildering symbolic territory. Wandering through labyrinths full of monsters keeps a person on his toes, that is, and this is good preparation for modern life -- perhaps because modern life so closely resembles a labyrinth full of monsters. So far, so good. 
But when Johnson purports to discern a silver lining in programs like ''Joe Millionaire'' and ''The Apprentice,'' he has to resort to trickier tactics. After reminding us that his argument doesn't depend on the content of the shows being particularly interesting but relates instead to the intricacies of their formats, he suggests that reality TV engages viewers' ''emotional intelligence'' by confronting them with a staged array of rapidly shifting social situations and densely interlocking human relationships. When an ''Apprentice'' team leader chews out an underling who is well liked by other contestants, it lights up an ancient corner of our brains responsible for assuring our survival as members of communities and tribes. Our minds run a series of lightning calculations having to do with tones of voice, facial expressions, ethical principles and psychological verities as we weigh the chances that the team leader will implode or the underling will revolt. And what will the Donald think? That's a factor, too. Though I side with Johnson in his contention that emotional intelligence is an authentic, important competency, and while I'll admit that ''The Apprentice'' delivers up enough half-baked strife and intrigue to absorb our inner office-politicians, I'm not sure why such a regimen is good for people except in the sense that it isn't actually harmful. As elsewhere in the book, Johnson's contrarian contempt for the knee-jerk vilification of pop culture seems to push him further than may be warranted into defending and elevating artifacts that are neither here nor there. My grandmother's love of lurid true-crime magazines, with their blow-by-blow re-creations of small-town rapes, roused her emotional intelligence, too, telling her to avoid dark parking lots and pockmarked men with certain styles of mustaches, but, really, what of it? Stimulation is not a virtue all by itself. Johnson seems to feel it is, though. In temperament, he's like a cerebral Jack La Lanne.
He admires exertion for its own sake -- in this case, neurological exertion. The faster the synapses fire the better, no matter to what end, even if the body supporting them is growing sluggish and obese and the spirit animating them is chronically neglecting its family members in order to TiVo ''The Simpsons.'' Johnson is a cool and neutral thinker, concerned with process rather than purpose, but the provocative title of his book, by alluding to some unheralded moral dimension in the consumption of today's pop culture, is mischievously misleading -- a way to snag the attention of the squares who refuse to acknowledge the benefits of doing anything other than reading the Holy Bible by candlelight. Considered purely on its own terms, Johnson's thesis holds up despite these quibbles. Our own internal computers are indeed speeding up, and part of the credit for this must surely go to the brute sophistication of our new entertainments, which tax the brain as ''Kojak'' never did. The old dogs may grump about cultural illiteracy and the erosion of traditional values, but the new dogs have talents, aptitudes and skills that we, as we drowse by the fire, can only dream of. Their sheer agility may not bring them wisdom, but our plodding didn't either, let's be fair. Walter Kirn, whose most recent novel is ''Up in the Air,'' is a regular contributor to the Book Review. From checker at panix.com Sun May 22 01:14:35 2005 From: checker at panix.com (Premise Checker) Date: Sat, 21 May 2005 21:14:35 -0400 (EDT) Subject: [Paleopsych] NYT: 'Blink' Meets 'Freakonomics' Message-ID: 'Blink' Meets 'Freakonomics' http://www.nytimes.com/2005/05/21/business/21online.html By DAN MITCHELL THE hot new book "Freakonomics" applies economic analysis to a range of human activity - from sumo wrestling to "Seinfeld" - but raises at least as many questions as it answers. Naturally, readers are drawn to the blog, which picks up where the book leaves off. 
And unlike a lot of writers who blog their books with a seeming reluctance, the authors, Steven D. Levitt, an economist, and Stephen J. Dubner, a journalist, take to it with the same zeal they applied to their book, and the blog is abuzz with activity. For example, Mr. Levitt tells of an e-mail message he recently received from a fellow trend-tracker, Malcolm Gladwell: A man approached Mr. Gladwell at the Toronto airport, asked for an autograph, and pulled out a copy of "Freakonomics" for him to sign. "We are totally co-branded!" Mr. Gladwell wrote. They already were. Mr. Gladwell's name, affixed to his blurb ("Prepare to be dazzled") appears above Mr. Levitt's and Mr. Dubner's on the cover of "Freakonomics." And Mr. Levitt heaps praise on Mr. Gladwell several times on his blog. It's no wonder they feel a kinship. The men, along with Jared Diamond, the evolutionary biologist and author of "Guns, Germs, and Steel," share a recognition that people are hungry for books that reveal the hidden processes that underlie everything from the tides of world history to how a [2]7-Eleven patron decides between Coke and Pepsi. Mr. Gladwell has two books at or near the top of The [3]New York Times paperback ("The Tipping Point") and hardcover ("Blink") best-seller lists, but "Freakonomics" is this spring's "it" book. In it, the authors explain the economics behind, to take two examples, the crack market and the Ku Klux Klan. Why do so many crack dealers live with their mothers? Why did the Klan's popularity fade? The answers will surprise. The blog, at [4]freakonomics.com/blog.php, often taking cues from participants, goes even further afield. One particularly amusing post, by Mr. Dubner, tells of an ill-fated meal in a New York restaurant and incorporates theories of psychology and behavioral economics in answering the question "Why pay $36.09 for rancid chicken?" 
This weekend, public radio wraps up "Think Global," ([5]thinkglobal2005.org) an ambitious, weeklong examination of globalization that includes contributions from public radio stations and independent producers. Regular programs, like "Talk of the Nation" on National Public Radio, were devoted to the topic, and several documentaries were also produced. One of those was "Global 3.0," by American Public Media (the national public broadcaster that didn't receive $200 million from the estate of Joan B. Kroc, widow of the [6]McDonald's founder). The program starts from the should-be-obvious premise that globalization creates both winners and losers, and it's not always easy to tell which is which. Chris Farrell of "Sound Money" and Robert Krulwich, a correspondent for ABC News, the hosts, present deeply reported material with a breezy familiarity that never strays too far. The Web component is packed with useful links, as well as audio that can be downloaded and the program transcript. The Birmingham News features an exhaustive collection of material relating to the accounting-fraud trial of Richard M. Scrushy, the ousted chief executive of HealthSouth. The jury is deliberating. The paper's dedicated Web page offers up-to-the-minute news, a comprehensive archive, background materials and, most impressively, a complete set of the paper's graphs and charts. The Supreme Court ruled in favor of mail-order (which really means online) wine sales. But before you whip out your platinum card to order that case of Boone's Farm Strawberry Hill, take a gander at Stephen Bainbridge's deconstruction of the ruling at his blog, [7]professorbainbridge.com. Just for starters, he points out, any or all of the 24 states that now prohibit ordering wine through the mail might simply change their laws to conform to the ruling while still maintaining the ban. Mr. Bainbridge, a wine connoisseur who also teaches corporate law at the University of California, Los Angeles, explains how. 
For complete links, go to nytimes.com/business. E-mail: online at nytimes.com. References 1. http://www.nytimes.com/services/xml/rss/nyt/Business.xml 2. http://www.nytimes.com/redirect/marketwatch/redirect.ctx?MW=http://custom.marketwatch.com/custom/nyt-com/html-companyprofile.asp&symb=SE 3. http://www.nytimes.com/redirect/marketwatch/redirect.ctx?MW=http://custom.marketwatch.com/custom/nyt-com/html-companyprofile.asp&symb=NYT 4. http://freakonomics.com/blog.php 5. http://thinkglobal2005.org/ 6. http://www.nytimes.com/redirect/marketwatch/redirect.ctx?MW=http://custom.marketwatch.com/custom/nyt-com/html-companyprofile.asp&symb=MCD 7. http://professorbainbridge.com/ From checker at panix.com Sun May 22 01:14:47 2005 From: checker at panix.com (Premise Checker) Date: Sat, 21 May 2005 21:14:47 -0400 (EDT) Subject: [Paleopsych] NYTBR: 'The Friend Who Got Away': A Girl's Best Friend Message-ID: 'The Friend Who Got Away': A Girl's Best Friend http://www.nytimes.com/2005/05/22/books/review/22HANSENL.html By SUZY HANSEN Women, especially girls, aren't always nice to one another, and writers and movie directors have tried to document this pathology as if it were a sociological ill to be cured. The catty and bullying few were recast as Queen Bees and Mean Girls and Tyra Banks; even the feminist Phyllis Chesler published a book called ''Woman's Inhumanity to Woman.'' Of course, woman-to-woman cruelty has always existed (we all have mothers, don't we?), and it certainly wasn't Margaret Atwood who broke the news that women could be sociopathic misogynists (though her women may be the freakiest). Still, it's all a little hysterical, isn't it? So it's a relief to report that ''The Friend Who Got Away,'' an anthology in the ''Bitch in the House'' tradition, reveals women to be thoughtful and kind, sometimes callous and neglectful, like all humans. There are more slow fades than blowups here, and (sorry, guys) there's nary a catfight in sight. 
That might not be the best thing for an essay collection. In a book about fighting dames, one naturally expects lots of cheating lovers, hair pulling or face slapping (or both), friendship necklaces flushed down toilets and perhaps a husband or two ensnared at the neighborhood barbecue. ''The Friend Who Got Away'' doesn't serve up such banal treasures, and at first, that's disappointing. Instead, we get, out of 20, a handful of painful, resonant essays and a handful of painfully boring ones. The editors, Jenny Offill and Elissa Schappell, are themselves serious writers, and their endeavor reflects that. Wholesale nastiness is sacrificed for the nuances and generally flat rhythms of real life. The truth is most friendships do in fact break down slowly. Perhaps two friends each start to dislike how the other has changed over time. Problems fester, and then the slightest insult or missed meeting dissolves the strongest of bonds. Admittedly, this complexity makes for richer reading, but why must there be so many tame essays? There's one about a 12-year-old confused by her friend's Christianity (Offill's humdrum ''End Days''); one about a breakup over the women's movement (Beverly Gologorsky's predictable ''In a Whirlwind''); one by Katie Roiphe (in which she admits she's self-absorbed, yet no wisdom is imparted). Some writers don't seem all that troubled they've lost their comrades; others weren't really friends with them at all. Some essays also fail because of the writers' odd inability to conjure up their friends. The editors' premise is that female friendships curiously resemble romantic relationships; in that case, the writer should describe the friend as passionately as she would a lover. What's interesting, always, is why another human being captivates us and has the power to wreck us. The heartbreakers should come to life more vividly than the vexed narrators, who, especially in the case of destroyed friendships, are fundamentally unreliable. 
That's why the clever pairing of Heather Abel and Emily Chenoweth -- friends who tell different sides of the same story, the death of Chenoweth's mother -- provides the book with a much needed core. If ''The Friend Who Got Away'' has a common theme, it's that when tragedy strikes one friend, the other falls apart. The miseries include a child's death (Ann Hood's devastating ''How I Lost Her''), multiple miscarriages (Kate Bernheimer's acute study of insensitivity, ''Other Women'') and life-altering disease (Jennifer Gilmore's magnificent and Atwood-like ''Kindness of Strangers''). Each time, the desertion is inexcusable; the more it happens, the more inevitable it seems. The Abel and Chenoweth essays succeed because both writers have uncanny memories for the tenderest details of teenage emotions, mixed with impossibly sharp but sympathetic powers of retrospection. Abel explains that early on she learned to ''compete without competing.'' (She writes, for example: ''Apartheid was mine. Athleticism was not mine.'') One thing that she did not have was suffering, and she enjoyed taking care of her best friend, whose mother fell ill during their freshman year of college. But Chenoweth's subsequent withdrawal and transformation baffled Abel, and her essay burns with confusion. Chenoweth's essay -- wisely printed after Abel's -- begins not with Abel but with, of course, the real tragedy at hand, her mother. ''My mother was my first best friend,'' she begins, and it stings. Even when Abel was the problem, she was not the problem. Friends pride themselves on being able to understand everything about their counterparts, but with tragedy, the one friend becomes aware that no matter how many tears are shed, she will never know the other's pain. Being shut out is almost like being killed off -- for both parties. Chenoweth's essay is the only one in the book that captures the severity of loss. 
YET sometimes tragedy ruins relationships for an even uglier reason: a personal loss makes someone special. Outsiders want to share in its totality of emotion, or flee for fear of feeling average in comparison. If the friendship already had a tinge of rivalry to it, as almost all do, then a friend's sudden tragic glow, as well as her much admired recovery and quickly forgiven mistakes, weighs heavily on the more ordinary friend. The Abel and Chenoweth essays are so obviously winning it's surprising this book didn't turn into an anthology of companion essays subtitled ''Twenty Women Tell Their Sides of the Story: You Be the Judge.'' The results, though, would probably be the same each time: both writers would plead their cases, only to reveal that they're sad simply because the friendship no longer exists. On the page, at least, the end of a friendship is one tragedy two women can share, without the tiniest bit of envy. Suzy Hansen is an editor at The New York Observer. From checker at panix.com Sun May 22 01:15:03 2005 From: checker at panix.com (Premise Checker) Date: Sat, 21 May 2005 21:15:03 -0400 (EDT) Subject: [Paleopsych] NYT: The B School Blues: A Case Study Message-ID: The B School Blues: A Case Study http://www.nytimes.com/2005/05/21/business/21offline.html By PAUL B. BROWN IN a scorching indictment, two widely known professors say the nation's business schools are too focused on research and are failing to prepare graduates to work in corporations. "Business schools are on the wrong track," Warren G. Bennis and James O'Toole, professors at the University of Southern California, conclude in this month's Harvard Business Review. "During the past several decades, many leading B schools have quietly adopted an inappropriate - and ultimately self-defeating - model of academic excellence," they write. 
"Instead of measuring themselves in terms of the competence of their graduates, or by how well their faculties understand important drivers of business performance, they measure themselves almost solely by the rigor of their scientific research." Business schools have gone that route in part to make sure they are seen as more than trade schools, the authors say, but also because they seem to suffer from "physics envy," and are bent on demonstrating that the study of business can be as disciplined and quantitative as the science of black holes, quarks and leptons. That decision feeds on itself, the authors write. Professors trained in quantitative research hire faculty members with the same inclination and make being published in technical journals a condition for continued employment. "Today it is possible to find tenured professors of management who have never set foot inside a real business, except as customers," they write. "By allowing the scientific research model to drive out all others, business schools are institutionalizing their own irrelevance," Mr. Bennis and Mr. O'Toole contend, adding that the concerns of the real world must - to some degree - find their way into the classroom. "Going back to the trade school paradigm would be a disaster," they write. "Still, we believe it is necessary to strike a new balance between scientific rigor and practical relevance." Of course, many of the executives - especially chief financial officers - enmeshed in the current corporate scandals are business school graduates. But ignoring the fact that some financial wizards have recently cooked the books with a creativity that would put Emeril Lagasse and Mario Batali to shame, the authors of "Not Your Father's C.F.O." concentrate on the good that chief financial officers can accomplish. 
In their article in strategy+business, a management magazine published by the consulting firm Booz Allen Hamilton, three Booz Allen vice presidents contend, after interviewing the top financial executives at [2]Pfizer, [3]FedEx and [4]Procter & Gamble, that the role of the C.F.O. has changed drastically - for the better. "The classic model - the C.F.O. as chief accountant and technical expert focused narrowly on the firm's financial statements and capital structure - has been passé for a decade or more," wrote the authors, Vinay Couto, Irmgard Heinz and Mark J. Moran. "The C.F.O. has long since operated as more of a business partner with the C.E.O., closely involved in designing and overseeing strategy, operations and performance." The authors say the pace of the evolution has accelerated. Many companies are eliminating the position of chief operating officer, whose duties are then assigned to the C.F.O. But the best of the bunch, the authors contend, will go beyond simply picking up the slack. They will help their companies adhere to stricter financial reporting standards - regulations often imposed in light of sins committed by top financial executives - and manage the pressures caused by increasing globalization and technology changes. Indeed, the authors argue, these executives have no choice but to do more. And will do so, presumably, without manipulating the numbers. Final Take In the current (April/May/June) issue of Inventors' Digest, Paul Niemann asks, "Is Irony the Step-Monster of Innovation?" He points out that Mickey Mouse's creator, Walt Disney, was afraid of mice; that the inventor of dynamite, Alfred Nobel, established the Nobel Peace Prize; and that Rube Goldberg, whose name is synonymous with inventions, at least cartoon ones, never invented anything at all. References 1. http://www.nytimes.com/services/xml/rss/nyt/Business.xml 2.
http://www.nytimes.com/redirect/marketwatch/redirect.ctx?MW=http://custom.marketwatch.com/custom/nyt-com/html-companyprofile.asp&symb=PFE 3. http://www.nytimes.com/redirect/marketwatch/redirect.ctx?MW=http://custom.marketwatch.com/custom/nyt-com/html-companyprofile.asp&symb=FDX 4. http://www.nytimes.com/redirect/marketwatch/redirect.ctx?MW=http://custom.marketwatch.com/custom/nyt-com/html-companyprofile.asp&symb=PG From checker at panix.com Sun May 22 01:15:16 2005 From: checker at panix.com (Premise Checker) Date: Sat, 21 May 2005 21:15:16 -0400 (EDT) Subject: [Paleopsych] NYT: A Place for Grandparents Who Are Parents Again Message-ID: A Place for Grandparents Who Are Parents Again http://www.nytimes.com/2005/05/21/nyregion/21grandparents.html By [2]TIMOTHY WILLIAMS Eleven years ago, Annie Barnes, 62, found herself raising her two grandchildren after their father was murdered and their mother disappeared. Both children had been born premature and with serious health problems - the younger, a girl, weighed two and a half pounds; the other, a boy, was born with syphilis and addicted to heroin and crack. But the little boy, Alonzo Poinsett, now 12, and his sister, Shakela, 10, are doing well today, and they will soon join their grandmother in an ambitious new housing experiment - a 51-unit apartment building in the South Bronx that is the first public development in the United States designed and built exclusively for grandparents raising grandchildren. The six-story project, called the GrandParentFamily Apartments, will open within the next few weeks; it already has a waiting list of more than 100 families. The development, on Prospect Avenue, will have three full-time social workers, support groups, parenting classes and, for the children, tutoring, a full-time youth coordinator and organized activities in the afternoons and evenings. 
The generational skip in the population means that the units will have some unusual features: emergency pull cords in the bedrooms and bathrooms, shower thermostats to keep the water from getting too hot, a community center for the older residents and their friends and a youth lounge. It is an attempt to better serve a growing population that is often thrown together by bad luck and usually lacks a strong support system. The $12.8 million project was financed by Presbyterian Senior Services, the West Side Federation for Senior and Supportive Housing and the city's Housing Authority. "This is an important group of people that often gets ignored," said David Taylor, executive director of Presbyterian Senior Services, "and what I hope is that this becomes a demonstration project, a place that encourages other places to do the same thing." While it is not uncommon for grandparents to raise grandchildren, advocates for the elderly who study the issue say the number of households headed by grandparents is growing. A 1991 study by the city estimated that there were 14,000 grandparent-headed homes in New York City. By 2000 there were 84,000 such families in the city, according to the United States Census. The national trend is difficult to track because the Census Bureau only began recording information about households headed by grandparents in the 2000 census; it identified 2.4 million families with 4.4 million children in their households. Because these families are often impoverished, the GrandParentFamily Apartments, in the poorest Congressional district in the country, are reserved for families with a median income of about $25,100; a typical monthly rent is about $300. Nearly all the adults moving into the building are grandmothers. The children moving in with them have often lost their parents to illness, murder, prison, drug abuse or mental illness. Some children never knew their parents or scarcely remember them because they have been gone so long. 
The grandparents must have legal custody of their grandchildren to be eligible for an apartment. Gaining custody can be a long and trying process in Family Court. Typically, judges take custody from parents only when the parent has a history of abusing or neglecting a child or abusing drugs, or is in jail so often that the child goes uncared for. For the grandparents, raising children again can be as traumatic as it is for the children to be without their parents. Some wonder where they went wrong with their own children. Many are depressed and struggling financially because they did not expect to have to raise a second generation. And all have found that the dreams they had for old age had to be abandoned. But a home at the GrandParentFamily Apartments at least helps. "I've always had the feeling I was alone in the world, and for once there's some help," said Sarah Saddler, 73, who is taking care of three of her youngest daughter's four children, Ashlee, 17; Courtney, 15; and Kerry, 12, who is autistic. Ms. Saddler had three of her own children, and raised her eldest daughter's three children after she died from complications of diabetes in 1990 at age 37. Ms. Saddler would not say why her youngest daughter could no longer care for the children, but she alluded to mental illness. For more than a year, Ms. Saddler has lived with her daughter and her four children in a two-bedroom, one-bathroom apartment in the Bronx. Clothes are stored in plastic bags and in suitcases. "That," said Ms. Saddler, raising an eyebrow as her granddaughters giggled, "is not good at all." The children say being raised by their father is not an option. Asked where he is, the girls say in unison, "Who cares?" LaVaida Thomas, 67, who will also live in the new project, is raising two of her daughter's five children, Aaron Cousins, 14, and Terrence, 12. Ms. Thomas's daughter lost custody of the children after her youngest was born in 2003 with traces of cocaine in her bloodstream. 
She also wants to adopt the three other children, who are in foster care. "I've always said I would keep my kids together if I can," said Ms. Thomas, who has problems with her heart and has had three strokes. "The same goes for the grandkids. So I said, 'Let me live until I can see them half-way grown.' " She said living in her new home with people her own age would provide the support system she has lacked since her husband died five years ago. "We'll have things in common," said Ms. Thomas, who has six children of her own. "You'll talk a little bit." Annie Barnes, who got custody of her two grandchildren after her son was fatally stabbed in 1994, has been looking forward to the move for months. "These are my son's children," said Ms. Barnes, who added that she had envisioned her retirement as being filled with travel. She did not know, she said, what happened to her son's girlfriend, the children's mother. Her grandchildren know little about their parents and as Ms. Barnes speaks they listen intently, though Shakela covers her ears with her hands when the talk becomes too graphic. For years, Shakela, who is in fourth grade, has shared a room with Alonzo, who is a year ahead of her in school, but stands nearly a foot taller. They recently visited their new three-bedroom apartment for the first time. The children dashed through the front door past their grandmother, running from room to room. They tested the bathroom faucets, the light switches and surveyed the views from the bedroom windows. "I'm going to put my bed in there," Alonzo said excitedly. "I'm going to hook up my PlayStation 2. Put the clothes in the closet. Put the computer in my room." Shakela said she wanted to decorate her room with her dolls and cutouts of Sponge Bob Squarepants and Bratz. Ms. Barnes looked from one child to the other, and smiled. "They are mines now," she said. Shakela glanced at her grandmother and smiled back. "We're hers." 
From checker at panix.com Sun May 22 01:15:32 2005 From: checker at panix.com (Premise Checker) Date: Sat, 21 May 2005 21:15:32 -0400 (EDT) Subject: [Paleopsych] NYTBR: (MacKinnon) 'Women's Lives, Men's Laws': Down by Law Message-ID: 'Women's Lives, Men's Laws': Down by Law http://www.nytimes.com/2005/05/22/books/review/22HECHTL.html WOMEN'S LIVES, MEN'S LAWS By Catharine A. MacKinnon. 558 pp. The Belknap Press/Harvard University Press. $39.95. By JENNIFER MICHAEL HECHT MOST adults probably have an opinion on pornography. They may also have an opinion on Catharine A. MacKinnon, one of the country's most prominent antipornography activists. In the 1980's, she and Andrea Dworkin wrote an ordinance banning pornography that some feminists applaud but a great many of us reject as censorship. The battle lines are clear enough to support invective shorthand. Those for them are sometimes called MacDworkinites. They give as good as they get: MacKinnon refers to the American Civil Liberties Union as ''the pro-pimp lobby.'' The well-worn debate notwithstanding, ''Women's Lives, Men's Laws,'' MacKinnon's first collection of essays since 1987, is absorbing and important despite its polemical blinders. Perhaps more than anyone else, MacKinnon has changed the way we use the law in America today. The essays here on rape, abortion, prostitution and harassment law make clear how, and it's intellectually exciting stuff. As MacKinnon, who teaches law at the University of Michigan and the University of Chicago, memorably suggests, ''sexual assault in the United States today resembles lynching prior to its recognition as a civil rights violation.'' Instead of arresting prostitutes, she says, we should be using the antislavery 13th Amendment to fight pimps. True, her claims are often too large for their footnote. Worse, she's so maddeningly certain about everything that readers are often forced to disagree. Yet much here is persuasive. 
''Women's Lives, Men's Laws'' is a compelling vision of how our use of law shapes inequality and how we might rethink it. As MacKinnon gracefully explains, issues of sexual equality used to be dealt with in Aristotelian terms: fairness is when equals, and equals only, are treated equally. So, since only women get pregnant, nothing to do with pregnancy in the workplace was actionable as sex discrimination. Thanks to MacKinnon the concept of equality now incorporates circumstances that promote second-class citizenship. She is responsible for many such conceptual changes: Sexual harassment isn't merely harmless flirting because it can easily be shown to have harmful consequences. Domestic abuse often creates such passivity that different standards are needed for ''consent.'' Unfortunately, the book's driving thesis is that pornography causes violence, and it's not convincing. MacKinnon asserts that ''because the aggressors have won, it is hard to believe them wrong.'' But she doesn't usefully respond to the substantive reasons people disagree with her. To many of us it is evident that abuse, not images, causes abuse; rape has existed in the absence of pornography; and some pornography is enjoyed by peaceful women and men. Then there's the question of strategy: MacKinnon and Dworkin's approach was adopted in Canada and the law was used to ban their own books as well as literature with gay themes. Some feminists have celebrated sex work as empowering. Here too MacKinnon is surely right that many women in porn would leave if they had any real choice, and many of them get hurt. Jenna Jameson may be mainstream now, but her book is subtitled ''A Cautionary Tale,'' and she's not kidding. Pornography does some harm to some users too. 
But MacKinnon goes farther: ''Every day the pornography industry gets bigger and penetrates more deeply and more broadly into social life, conditioning mass sexual responses to make fortunes for men and to end lives and life chances for women and children. . . . The age of first pornography consumption is younger and probably dropping, and the age of the average rapist is ever younger. The acceptable level of sexual force climbs ever higher; women's real status drops ever lower.'' One study, she tells us, held that one out of three American men would commit rape if assured escape, and that ''the figure climbs following exposure to commonly available aggressive pornography.'' MacKinnon's world is utterly recognizable as our own, but there's something off. The women here are bruised, raped, poverty-stricken rag dolls. Utterly recognizable, as I said, but incomplete. The men in this world have no barriers between their fantasies and their behaviors. Women should look at pornography ''to find out what men really think of them.'' Why did Anita Hill remain on speaking terms with Clarence Thomas? ''If women refused to talk with every man who ever said vile sexual things to us, we would be talking mostly to women.'' The women I know are on speaking terms with only one or two people who have said vile sexual things to them. MacKinnon notes that when the equal rights amendment ''expired unratified,'' in 1982, women did not riot in the streets. Instead: they do menial labor in offices, ''fight for their lives as fist met face'' and ''lay their lives down as penis sliced in and out and in and out.'' Why that last image? Remember the old joke where a guy taking a Rorschach test sees people having sex on every card, then shouts at the shrink, ''How dare you show me all these disgusting pictures?!'' MacKinnon's fury is appropriate to the subject, and enlivened by a poison wit, but is often out of control. 
Sometimes an argument is merely burdened with a too bilious turn of phrase, but sometimes her one-sidedness is fundamental. MacKinnon wants a rape law that assesses ''consensual'' sex in terms of all power differences -- not only age. It's an interesting way of thinking about law and social equality, but who else really believes adult women need or want such protection? She several times mentions men with or without ''weapons other than the penis.'' Being human means negotiating different kinds of power, and sometimes a penis is just a penis. Jennifer Michael Hecht is the author of ''Doubt: A History'' and ''The End of the Soul.'' From checker at panix.com Sun May 22 01:15:41 2005 From: checker at panix.com (Premise Checker) Date: Sat, 21 May 2005 21:15:41 -0400 (EDT) Subject: [Paleopsych] Voice Literary Supplement: We Can Build You Message-ID: We Can Build You: A field guide to genetic engineering and bodily enhancement http://www.villagevoice.com/vls/0520,vls10,64041,21.html by Jenny Davidson Call it pollution, call it enhancement, but genetic engineering is here to stay. Nobody stays neutral: Ramez Naam loves it, Pete Shanks hates it, and their books provide comically extreme elaborations of their respective positions. Don't be misled by the subtitle of Human Genetic Engineering: A Guide for Activists, Skeptics, and the Very Perplexed. The only activists and skeptics Shanks cares for are the ones who want to stop genetic engineering right now, and he writes in a mode of monitory condemnation that makes even the notoriously puritanical Times health writer Jane Brody sound wildly permissive. Shanks provides a well-informed account of current biotechnology, including useful bibliographies of print and online resources, but his bossy certainty damages the book. 
Of course, he's often right: Given the already unequal distribution of health care dollars, it's not hard to accept that human GE is "potentially a driver of inequality"; we risk endorsing discrimination in parents' quest for the "genetically perfect" baby (and in reality, no such thing exists). But Shanks may alienate readers with his dogmatism. "Calling twins 'natural clones' or clones 'delayed twins' is either simplistic or propaganda designed to make the process seem familiar and thus acceptable," he opines. This and other similar statements ("The mindset that considers cloning appropriate is precisely the mindset that embraces the idea of human GE," he says dismissively) feel like intellectual bullying, and his boldface type and bullet-point lists reveal a desire to indoctrinate rather than to persuade. Shanks accepts without question the conservative bioethicist Leon Kass's "wisdom of repugnance," which makes the gut feelings of ordinary people a better basis for judgment than the knowledge of scientists and ethicists. "We have a common genetic heritage," Shanks states, "our gene pool is a genetic commons, and no individual has the right to pollute it." But many Americans used to reject interracial marriage on similar grounds, and there are good reasons not to rely on repugnance. Less precise in its science, goofier yet far more likable than Human Genetic Engineering, Naam's More Than Human: Embracing the Promise of Biological Enhancement celebrates biotechnology with evangelical fervor, arguing that "rather than prohibiting the exploration of new technologies, society ought to focus on spreading the power to alter our own minds and bodies to as many people as possible." Naam likes science fiction even more than science fact. Will identifying each gene their unborn child carries really "give parents an idea of a prospective child's appearance, intelligence and personality"? 
Even assuming that neural prosthetics will shortly allow us to pipe information directly from the emotional centers of our loved ones' brains straight to our own "empathy centers," are we likely to want to know all the thoughts and feelings of the people around us? (Naam's portrait of marital sex enhanced by neural implants devolves into soft-focus porn, but surely not all sexual couplings would benefit from mind reading.) And whether or not the human life span can be doubled, it seems singularly unlikely that life extension will cause people to "reach states of mental and emotional capacity for growth that simply can't be satisfied in one human lifetime." But this cheerful gullibility makes Naam's book an enjoyable and stimulating read. What others merely imagine, the Australian performance artist known as Stelarc makes flesh. Suspended from cables running through the hooks that pierce his skin, a naked Stelarc hangs poised as if in flight over East 11th Street: It's Superman! The photographs in Stelarc: The Monograph (out this fall from MIT) document his use of robotics, surgery, biotechnology, prosthetics, and computer software to reconfigure his own body as a cyborg. Stelarc believes the human body's on its way to obsolescence, but not all this volume's contributors agree with him. A demonstration of his prosthetic Extended Arm brings tears to one essayist's eyes, reminding her of the persistence and pathos of bodily attachments. Whether it's pierced and suspended from cables, scaffolded in metal prostheses, or penetrated by a miniature camera that broadcasts from inside his intestinal tract, Stelarc's body remains "wet, unpredictable, emotively disorderly, itself a technological marvel." A real marvel is Michael Chorost, author of Rebuilt: How Becoming Part Computer Made Me More Human. Born hard of hearing after his mother contracted rubella during pregnancy, Chorost lost the rest of his hearing one random terrifying day in adulthood. 
Rebuilt tells the story of his choice to undergo surgery for a cochlear implant, a complex apparatus of chips and electrodes and processors that triggers the auditory nerves in a pattern the brain learns to interpret as sound. With a background in computer programming and literary study, Chorost is better suited than Stelarc to explore what it means to become a kind of cyborg, and he brings to his fascinating subject great intellectual clarity, the habit of self-examination, and a willingness to expose himself. It turns out that surgery is just the beginning, and Chorost calls for a more systematic training program for patients struggling to integrate technology into their bodies in order to become "more human." He is particularly compelling on the inner workings of the code that sorts out sounds into different frequencies and the disorienting effects of the two different software programs that control the implant's electrode array: "One new version of the world would be unsettling enough," says Chorost, who in the aftermath of the surgery felt less like a hearing person than "the receptor of a flood of data, which I was constantly stitching into meaningful language a half-second or so after I actually heard it." Stranger and more unsettling than Stelarc's body art, Chorost's celebration of the technology that allowed him to hear again shows the futility of drawing a line between man and machine. Jenny Davidson teaches 18th-century British literature at Columbia. She is the author of two books: a novel, Heredity; and a monograph, Hypocrisy and the Politics of Politeness: Manners and Morals from Locke to Austen (Cambridge). She blogs at Light Reading. http://jennydavidson.blogspot.com/ From shovland at mindspring.com Sun May 22 04:49:39 2005 From: shovland at mindspring.com (Steve Hovland) Date: Sat, 21 May 2005 21:49:39 -0700 Subject: [Paleopsych] America Message-ID: <01C55E4E.FD8837B0.shovland@mindspring.com> America America is more than a place. 
America is more than the people who live here. America is an idea, an ideal, a dream, a hope, a promise kept. We do not own America. We are charged with living up to America's standards. These are high standards of honesty and integrity, hard work, and caring, and sometimes they are a heavy burden. Yet we know that having this burden is a very great blessing. Many of our ancestors shed their blood for this thing we call freedom. It is our duty to preserve it, not discard it. Steve Hovland 5/21/05 From HowlBloom at aol.com Sun May 22 04:51:03 2005 From: HowlBloom at aol.com (HowlBloom at aol.com) Date: Sun, 22 May 2005 00:51:03 EDT Subject: [Paleopsych] Paul--is this yours? Message-ID: <193.404e4571.2fc169b7@aol.com> I ran across the following today while doing research for a national radio show--Coast to Coast--I'm about to do in 70 minutes (2am to 5am EST the night of 5/21 and morning of 5/22). It's exciting and sounds like the space-borne solar array you were talking about in an email several days ago. Is it one of the projects you've helped along? 
Howard Modules of the USC School of Engineering's Information Sciences Institute (ISI) proposed half-mile-long Space Solar Power System satellite self-assemble with what the researchers call "hormonal" software, funded by a consortium including NASA, the NSF, and the Electric Power Research Institute (EPRI) ---------- Howard Bloom Author of The Lucifer Principle: A Scientific Expedition Into the Forces of History and Global Brain: The Evolution of Mass Mind From The Big Bang to the 21st Century Visiting Scholar-Graduate Psychology Department, New York University; Core Faculty Member, The Graduate Institute www.howardbloom.net www.bigbangtango.net Founder: International Paleopsychology Project; founding board member: Epic of Evolution Society; founding board member, The Darwin Project; founder: The Big Bang Tango Media Lab; member: New York Academy of Sciences, American Association for the Advancement of Science, American Psychological Society, Academy of Political Science, Human Behavior and Evolution Society, International Society for Human Ethology; advisory board member: Youthactivism.org; executive editor -- New Paradigm book series. For information on The International Paleopsychology Project, see: www.paleopsych.org for two chapters from The Lucifer Principle: A Scientific Expedition Into the Forces of History, see www.howardbloom.net/lucifer For information on Global Brain: The Evolution of Mass Mind from the Big Bang to the 21st Century, see www.howardbloom.net -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From checker at panix.com Sun May 22 17:02:50 2005 From: checker at panix.com (Premise Checker) Date: Sun, 22 May 2005 13:02:50 -0400 (EDT) Subject: [Paleopsych] NYT: Class Matters: Social Class and Religion in the United States of America Message-ID: Class Matters: Social Class and Religion in the United States of America http://www.nytimes.com/2005/05/22/national/class/EVANGELICALS-FINAL.html [Fourth in a series] On a Christian Mission to the Top By [2]LAURIE GOODSTEIN and [3]DAVID D. KIRKPATRICK For a while last winter, Tim Havens, a recent graduate of Brown University and now an evangelical missionary there, had to lead his morning prayer group in a stairwell of the campus chapel. That was because workers were clattering in to remake the lower floor for a display of American Indian art, and a Buddhist student group was chanting in the small sanctuary upstairs. Like most of the Ivy League universities, Brown was founded by Protestant ministers as an expressly Christian college. But over the years it gradually shed its religious affiliation and became a secular institution, as did the other Ivies. In addition to Buddhists, the Brown chaplain's office now recognizes "heathen/pagan" as a "faith community." But these days evangelical students like those in Mr. Havens's prayer group are becoming a conspicuous presence at Brown. Of a student body of 5,700, about 400 participate in one of three evangelical student groups - more than the number of active mainline Protestants, the campus chaplain says. And these students are in the vanguard of a larger social shift not just on campuses but also at golf resorts and in boardrooms; they are part of an expanding beachhead of evangelicals in the American elite. The growing power and influence of evangelical Christians is manifest everywhere these days, from the best-seller lists to the White House, but in fact their share of the general population has not changed much in half a century. 
Most pollsters agree that people who identify themselves as white evangelical Christians make up about a quarter of the population, just as they have for decades. What has changed is the class status of evangelicals. In 1929, the theologian [4]H. Richard Niebuhr described born-again Christianity as the "religion of the disinherited." But over the last 40 years, evangelicals have pulled steadily closer in income and education to mainline Protestants in the historically affluent establishment denominations. In the process they have overturned the old social pecking order in which "Episcopalian," for example, was a code word for upper class, and "fundamentalist" or "evangelical" shorthand for lower. Evangelical Christians are now increasingly likely to be college graduates and in the top income brackets. Evangelical C.E.O.'s pray together on monthly conference calls, evangelical investment bankers study the Bible over lunch on Wall Street and deep-pocketed evangelical donors gather at golf courses for conferences restricted to those who give more than $200,000 annually to Christian causes. Their growing wealth and education help explain the new influence of evangelicals in American culture and politics. Their buying power fuels the booming market for Christian books, music and films. Their rising income has paid for construction of vast mega-churches in suburbs across the country. Their charitable contributions finance dozens of mission agencies, religious broadcasters and international service groups. On [5]The Chronicle of Philanthropy's latest list of the 400 top charities, [6]Campus Crusade for Christ, an evangelical student group, raised more from private donors than the Boy Scouts of America, the Public Broadcasting Service and Easter Seals. Now a few affluent evangelicals are directing their attention and money at some of the tallest citadels of the secular elite: Ivy League universities. 
Three years ago a group of evangelical Ivy League alumni formed [7]the Christian Union, an organization intended to "reclaim the Ivy League for Christ," according to its fund-raising materials, and to "shape the hearts and minds of many thousands who graduate from these schools and who become the elites in other American cultural institutions." The Christian Union has bought and maintains new evangelical student centers at Brown, Princeton and Cornell, and has plans to establish a center on every Ivy League campus. In April, 450 students, alumni and supporters met in Princeton for an "Ivy League Congress on Faith and Action." A keynote speaker was Charles W. Colson, the born-again Watergate felon turned evangelical thinker. [8]Matt Bennett, founder of the Christian Union, told the conference, "I love these universities - Princeton and all the others, my alma mater, Cornell - but it really grieves me and really hurts me to think of where they are now." The Christian Union's immediate goal, he said, was to recruit campus missionaries. "What is happening now is good," Mr. Bennett said, "but it is like a finger in the dike of keeping back the flood of immorality." And trends in the Ivy League today could shape the culture for decades to come, he said. "So many leaders come out of these campuses. Seven of the nine Supreme Court justices are Ivy League grads; four of the seven Massachusetts Supreme Court justices; Christian ministry leaders; so many presidents, as you know; leaders of business - they are everywhere." He added, "If we are going to change the world, we have got, by God's power, to see these campuses radically changed." An Outsider on Campus Mr. Havens, who graduated from Brown last year, is the kind of missionary the Christian Union hopes to enlist. An evangelical from what he calls a "solidly middle class" family in the Midwest, he would have been an anomaly at Brown a couple of generations ago. 
He applied there, he said, out of a sense of "nonconformity" and despite his mother's preference that he attend a Christian college. "She just was nervous about, and rightfully so, what was going to happen to me freshman year," Mr. Havens recalled. When he arrived at Brown, in Providence, R.I., Mr. Havens was astounded to find that the biggest campus social event of the fall was the annual SexPowerGod dance, sponsored by the [9]Lesbian Gay Bisexual Transgender Queer Alliance and advertised with dining-hall displays depicting pairs of naked men or women. "Why do they have to put God in the name?" he said. "It seems kind of disrespectful." Mr. Havens found himself a double outsider of sorts. In addition to being devoted to his faith, he was a scholarship student at a university where half the students can afford $45,000 in tuition and fees without recourse to financial aid and where, he said, many tend to "spend money like water." But his modest means did not stand out as much as his efforts to guard his morals. He did not drink, and he almost never cursed. And he was determined to stay "pure" until marriage, though he did not lack for attention from female students. Just as his mother feared, Mr. Havens, a broad-shouldered former wrestler with tousled brown hair and a guileless smile, wavered some his freshman year and dated several classmates. "I was just like, 'Oh, I can get this girl to like me,' " he recalled. " 'Oh, she likes me; she's cute.' And so it was a lot of fairly short and meaningless relationships. It was pretty destructive." In his sophomore year, though, his evangelical a cappella singing group, a Christian twist on an old Ivy League tradition, interceded. With its support, he rededicated himself to serving God, and by his senior year he was running his own Bible-study group, hoping to inoculate first-year students against the temptations he had faced. They challenged one another, Mr. 
Havens said, "committing to remain sexually pure, both in a physical sense and in avoiding pornography and ogling women and like that." Mr. Havens is now living in a house owned and supported by the Christian Union and is trying to reach not just other evangelicals but nonbelievers as well. Prayers in the Boardrooms The Christian Union is the brainchild of Matt Bennett, 40, who earned bachelor's and master's degrees at Cornell and later directed the Campus Crusade for Christ at Princeton. Mr. Bennett, tall and soft-spoken, with a Texas drawl that waxes and wanes depending on the company he is in, said he got the idea during a 40-day water-and-juice fast, when he heard God speaking to him one night in a dream. "He was speaking to me very strongly that he wanted to see an increasing and dramatic spiritual revival in a place like Princeton," Mr. Bennett said. While working for Campus Crusade, Mr. Bennett had discovered that it was hard to recruit evangelicals to minister to the elite colleges of the Northeast because the environment was alien to them and the campuses often far from their homes. He also found that the evangelical ministries were hobbled without adequate salaries to attract professional staff members and without centers of their own where students could gather, socialize and study the Bible. Jews had [10]Hillel Houses, and Roman Catholics had [11]Newman Centers. He thought evangelicals should have their own houses, too, and began a furious round of fund-raising to buy or build some. An early benefactor was his twin brother, Monty, who had taken over the [12]Dallas hotel empire their father built from a single Holiday Inn and who had donated a three-story Victorian in a neighborhood near Brown. To raise more money, Matt Bennett has followed a grapevine of affluent evangelicals around the country, winding up even in places where evangelicals would have been a rarity just a few decades ago. 
In Manhattan, for example, he visited Wall Street boardrooms and met with the founder of Socrates in the City, a roundtable for religious intellectuals that gathers monthly at places like the Algonquin Hotel and the Metropolitan Club. Those meetings introduced him to an even more promising pool of like-minded Christians, the New Canaan Group, a Friday morning prayer breakfast typically attended by more than a hundred investment bankers and other professionals. The breakfasts started in the Connecticut home of a partner in Goldman, Sachs but grew so large that they had to move to a local church. Like many other evangelicals, some members attend churches that adhere to evangelical doctrine but that remain affiliated with mainline denominations. Other donors to the Christian Union are members of local elites across the Bible Belt. Not long ago, for example, Mr. Bennett paid a visit to Montgomery, Ala., for lunch with Julian L. McPhillips Jr., a wealthy Princeton alumnus and the managing partner of a local law firm. Mr. Bennett, wearing an orange Princeton tie, said he wanted to raise enough money for the Christian Union to hire someone to run a "healing ministry" for students with depression, eating disorders or drug or alcohol addiction. Mr. McPhillips, who shares Mr. Bennett's belief in the potential of faith healing, remarked that he had once cured an employee's migraine headaches just by praying for him. "We joke in my office that we don't need health insurance," he told Mr. Bennett before writing a check for $1,000. Mr. Bennett's database has so far grown to about 5,000 names gathered by word of mouth alone. They are mostly Ivy League graduates whose regular alumni contributions he hopes to channel into the Christian Union. And these Ivy League evangelicals, in turn, are just a small fraction of the large number of their affluent fellow believers. 
Gaining on the Mainline Their commitment to their faith is confounding a long-held assumption that, like earlier generations of Baptists or Pentecostals, prosperous evangelicals would abandon their religious ties or trade them for membership in establishment churches. Instead, they have kept their traditionalist beliefs, and their churches have even attracted new members from among the well-off. Meanwhile, evangelical Protestants are pulling closer to their mainline counterparts in class and education. As late as 1965, for example, a white mainline Protestant was two and a half times as likely to have a college degree as a white evangelical, according to an analysis by [13]Prof. Corwin E. Smidt, a political scientist at [14]Calvin College, an evangelical institution in Grand Rapids, Mich. But by 2000, a mainline Protestant was only 65 percent more likely to have the same degree. And since 1985, the percentage of incoming freshmen at highly selective private universities who said they were born-again also rose by half, to 11 or 12 percent each year from 7.3 percent, according to the [15]Higher Education Research Institute at the University of California, Los Angeles. To many evangelical Christians, the reason for their increasing worldly success and cultural influence is obvious: God's will at work. Some also credit leaders like the midcentury intellectual Carl F. H. Henry, who helped to found a [16]large and influential seminary, a [17]glossy evangelical Christian magazine and the [18]National Association of Evangelicals, a powerful umbrella group that now includes 51 denominations. Dr. Henry and his followers implored believers to look beyond their churches and fight for a place in the American mainstream. There were also demographic forces at work, beginning with the G.I. Bill, which sent a pioneering generation of evangelicals to college. 
Probably the greatest boost to the prosperity of evangelicals as a group came with the Sun Belt expansion of the 1970's and the Texas oil boom, which brought new wealth and businesses to the regions where evangelical churches had been most heavily concentrated. The most striking example of change in how evangelicals see themselves and their place in the world may be the [19]Assemblies of God, a Pentecostal denomination. It was founded in Hot Springs, Ark., in 1914 by rural and working-class Christians who believed that the Holy Spirit had moved them to speak in tongues. Shunned by established churches, they became a sect of outsiders, and their preachers condemned worldly temptations like dancing, movies, jewelry and swimming in public pools. But like the Southern Baptists and other conservative denominations, the Assemblies gradually dropped their separatist strictures as their membership prospered and spread. As the denomination grew, Assemblies preachers began speaking not only of heavenly rewards but also of the material blessings God might provide in this world. The notion was controversial in some evangelical circles but became widespread nonetheless, and it made the Assemblies' faith more compatible with an upwardly mobile middle class. By the 1970's, Assemblies churches were sprouting up in affluent suburbs across the country. Recent surveys by [20]Margaret Poloma, a sociologist at the University of Akron in Ohio, found Assemblies members more educated and better off than the general public. As they flourished, evangelical entrepreneurs and strivers built a distinctly evangelical business culture of prayer meetings, self-help books and business associations. In some cities outside the Northeast, evangelical business owners list their names in Christian yellow pages. The rise of evangelicals has also coincided with the gradual shift of most of them from the Democratic Party to the Republican and their growing political activism. 
The conservative Christian political movement seldom developed in poor, rural Bible Belt towns. Instead, its wellsprings were places like the Rev. Ed Young's booming mega-church in suburban Houston or the [21]Rev. Timothy LaHaye's in Orange County, Calif., where evangelical professionals and businessmen had the wherewithal to push back against the secular culture by organizing boycotts, electing school board members and lobbying for conservative judicial appointments.
'A Bunch of Heathens'
Mr. Havens, the Brown missionary, is part of the upsurge of well-educated born-again Christians. He grew up in one of the few white households in a poor black neighborhood of St. Louis, where his parents had moved to start a church, which failed to take off. Mr. Havens's father never graduated from college. After being laid off from his job at a marketing company two years ago, he now works in an insurance company's software and systems department. Tim Havens's mother home-schooled the family's six children for at least a few years each. Mr. Havens got through Brown on scholarships and loans, and at graduation was $25,000 in debt. To return to campus for his missionary year and pay his expenses, he needed to raise an additional $36,000, and on the advice of Geoff Freeman, the head of the Brown branch of Campus Crusade, he did his fund-raising in St. Louis. "It is easy to sell New England in the Midwest," as Mr. Freeman put it later. Midwesterners, he said, see New Englanders as "a bunch of heathens." So Mr. Havens drove home each day from a summer job at a stone supply warehouse to work the phone from his cluttered childhood bedroom. He told potential donors that many of the American-born students at Brown had never even been to church, to say nothing of the students from Asia or the Middle East. "In a sense, it is pre-Christian," he explained. Among his family's friends, however, encouragement was easier to come by than cash. As the summer came to a close, Mr. 
Havens was still $6,000 short. He decided to give himself a pay cut and go back to Brown with what he had raised, trusting God to take care of his needs just as he always had when money seemed scarce during college. "God owns the cattle on a thousand hills," he often told himself. "God has plenty of money." Thanks to the Christian Union, Mr. Havens's present quarters as a ministry intern at Brown are actually more upscale than his home in St. Louis. On Friday nights, he hosts a Bible study and dinner party for 70 or 80 Christian students, who serve themselves heaping plates of pasta before breaking into study groups. Afterward, they regroup in the living room for board games and goofy improvisation contests, all free of profanity and even double entendre. Lately, though, Mr. Havens has been contemplating steps that would take him away from Brown and campus ministry. After a chaste romance - "I didn't kiss her until I asked her to marry me," he said - he recently became engaged to a missionary colleague, Liz Chalmers. He has been thinking about how to support the children they hope to have. And he has been considering the example of his future father-in-law, Daniel Chalmers, a Baptist missionary to the Philippines who ended up building power plants there and making a small fortune. Mr. Chalmers has been a steady donor to Christian causes, and he bought a plot of land in Oregon, where he plans to build a retreat center. "God has always used wealthy people to help the church," Mr. Havens said. He pointed out that in the Bible, rich believers helped support the apostles, just as donors to the Christian Union are investing strategically in the Ivy League today. With those examples and his own father in mind, Mr. Havens chose medicine over campus ministry. He scored well on his medical school entrance exams and, after another year at Brown, he will head to St. Louis University School of Medicine. 
At the Christian Union conference in April, he was pleased to hear doctors talk about praying with their patients and traveling as medical missionaries. He is looking forward to having the money a medical degree can bring, and especially to putting his children through college without the scholarships and part-time jobs he needed. But whether he becomes rich, he said, "will depend on how much I keep." Like other evangelicals of his generation, he means to take his faith with him as he makes his way in the world. He said his roommates at Brown had always predicted that he would "sell out"- loosen up about his faith and adopt their taste for new cars, new clothes and the other trappings of the upper class. He didn't at Brown and he thinks he never will. "So far so good," he said. But he admitted, "I don't have any money yet." References 2. http://query.nytimes.com/search/query?ppds=bylL&v1=LAURIE%20GOODSTEIN&fdq=19960101&td=sysdate&sort=newest&ac=LAURIE%20GOODSTEIN&inline=nyt-per 3. http://query.nytimes.com/search/query?ppds=bylL&v1=DAVID%20D.%20KIRKPATRICK&fdq=19960101&td=sysdate&sort=newest&ac=DAVID%20D.%20KIRKPATRICK&inline=nyt-per 4. http://www.hds.harvard.edu/library/bms/bms00630.html 5. http://philanthropy.com/ 6. http://www.ccci.org/ 7. http://involve.christian-union.org/site/PageServer 8. http://involve.christian-union.org/site/PageServer?pagename=CUStaffINfo 9. http://queer.brown.edu/ 10. http://www.hillel.org/ 11. http://www.catholiclinks.org/newmanunitedstates.htm 12. http://www.remingtonhotels.com/ 13. http://www.calvin.edu/admin/csr/faculty/smidt_c/ 14. http://www.calvin.edu/ 15. http://www.gseis.ucla.edu/heri/heri.html 16. http://www.fuller.edu/ 17. http://www.christianitytoday.com/ 18. http://www.nae.net/ 19. http://ag.org/ 20. http://www3.uakron.edu/sociology/poloma.htm 21. 
http://www.timlahaye.com/index2.php3 From checker at panix.com Sun May 22 17:03:02 2005 From: checker at panix.com (Premise Checker) Date: Sun, 22 May 2005 13:03:02 -0400 (EDT) Subject: [Paleopsych] NYT: Daily Lesson Plan for "Up from the Holler" Message-ID: Daily Lesson Plan http://www.nytimes.com/learning/teachers/lessons/20050520friday.html 5.5.20 Related Article [30]Up From the Holler: Living in Two Worlds, at Home in Neither By TAMAR LEWIN Social Motion Examining Social Mobility Through Personal Interviews Author(s) [37]Michelle Sale, The New York Times Learning Network [38]Andrea Perelman, The Bank Street College of Education in New York City Grades: 6-8, 9-12 Subjects: Language Arts, Social Studies [39]Interdisciplinary Connections Overview of Lesson Plan: In this lesson, students consider the difficulties associated with social mobility, then interview an adult and write about his or her personal experiences. Review the [40]Academic Content Standards related to this lesson. Suggested Time Allowance: 1 hour Objectives: Students will: 1. Examine aspects of their own lives to develop a sense of social class and consider their comfort levels. 2. Consider the cultural uneasiness of a woman who changed social classes by reading and discussing the article "Up From the Holler: Living in Two Worlds, At Home in Neither." 3. Create interview questions about social mobility. 4. Conduct an interview with an adult about his or her experiences with class and social mobility. Resources / Materials: -student journals -pens/pencils -paper -classroom board -construction paper or unlined copy paper (one sheet per student) -markers or colored pencils (enough for students to share) -copies of the article "Up From the Holler: Living in Two Worlds, At Home in Neither," found online at [41]http://www.nytimes.com/learning/teachers/featured_articles/20050520friday.html (one per student) Activities / Procedures: 1. 
WARM-UP/DO NOW: Students will be creating picture webs, which are similar to word webs but use simple drawings to illustrate their thoughts and ideas. (An example of a picture web can be found at http://www.eduplace.com/rdg/hmsv/4/handson/page124.html.) Before class begins, place a sheet of construction paper or unlined paper on each student's desk. Upon entering class, students respond to the following prompt (written on the board prior to class): "Nearly everyone has felt like an outsider at some point. Consider various aspects of your everyday life that are related to social class. Create a picture web illustrating your social class by including these aspects of your life, family and community: -food served at family and/or social parties you attend -newspapers and/or books you and others read -recreational activities/vacations -education levels of adults in your community -jobs held by adults in your community -neighborhood and types of housing -cars or types of transportation usually used" After a few minutes, allow students to share some of their images. Then note that for most people, these aspects of our worlds are part of our social class "comfort zone" - allowing us to feel comfortable when we are in familiar circumstances and uncomfortable when we move into a sphere dominated by another class. Ask students: How would you feel if you were in a situation that looked very different from the images on your picture web? Have you ever been to a party at a friend's house, or in another situation, where everything was very different from what you are used to? How did you feel? How did you react? 
Explain that The New York Times is currently running a series of articles examining social class in the United States, and read this statement, published in The Times: "A team of reporters spent more than a year exploring ways that class - defined as a combination of income, education, wealth and occupation - influences destiny in a society that likes to think of itself as a land of unbounded opportunity." Then tell them that the article they are about to read deals with the discomfort that can be felt in moving from one social class to another. 2. As a class, read and discuss the article "Up From the Holler: Living in Two Worlds, At Home in Neither" ([42]http://www.nytimes.com/learning/teachers/featured_articles/20050520friday.html), focusing on the following questions: a. What clues tell the reader that Della Mae Justice grew up in a low social class? b. How did Ms. Justice change her social class? c. Why is Ms. Justice uncomfortable with her new status? d. According to Ms. Justice, what don't people with low socioeconomic backgrounds have? e. Why did Joe Justice rescue Ms. Justice from foster care? f. How did the opportunities Mr. Justice provided for Ms. Justice change her chances to become part of the middle class? g. Why was Berea College a comfortable place for Ms. Justice to continue her education? h. How has her sense of family obligation affected Ms. Justice's life? i. Why did Ms. Justice move back to Pikeville? j. What type of law does Ms. Justice practice? k. According to Ms. Justice, how does socioeconomic status affect how far a person can go in life? l. What things is Ms. Justice doing to ensure Will and Anna have middle class childhoods? m. What types of knowledge or experiences does Ms. Justice feel are common for middle-class people to have? n. According to sociologists, what distinguishes middle-class children from working-class children? 3. 
Explain that for homework, students will interview the adult of their choice to investigate his or her experiences with social mobility and social class comfort zones. They will create their own portrait of the person's experience with class from childhood through adulthood, similar to how Tamar Lewin chronicled Della Mae Justice's story in the article "Up From the Holler: Living in Two Worlds, At Home in Neither." Divide students into pairs. Explain that although this is an individual activity, they may work together to brainstorm questions for their portrait projects. Since the topic of class may be a sensitive one, encourage students to develop tactful yet probing questions. Suggest they analyze the article to see what types of questions the author, Tamar Lewin, may have asked Ms. Justice. Students should develop at least ten questions, focusing on the experiences, opportunities and choices presented in a particular person's life. Students should also keep in mind that aspects of class can create a comfort zone in which to live. Additionally, students should explore their subject's desired and real experiences with class mobility. Encourage students to create open-ended questions that require answers beyond simple "yes" and "no." They should ask questions that fit into the following categories: -neighborhood and home life -educational experiences -family structure -childhood aspirations or goals -employment history -concepts of the American dream and class mobility Each student should leave class with a list of questions and an idea of whom to interview. 4. WRAP-UP/HOMEWORK: Individually, students interview a chosen adult about his or her experiences with class from childhood to adulthood and then write a narrative portrait about the person's life seen through the lens of social class, inspired by the Times article. In a future class, students should share their findings and discuss the difficulties and benefits associated with social mobility. 
Further Questions for Discussion: -Does social class define one's identity? If so, how? If not, why not? -Why is it difficult to change social class? -In the article, Ms. Justice mentions that not having certain types of knowledge makes her uncomfortable in the middle class. Why does common knowledge matter? How is the body of knowledge common in any given social class determined? -What role does the American dream play in achieving socioeconomic success? Evaluation / Assessment: Students will be evaluated based on completion of picture webs, participation in class discussion, thoughtfully created interview questions, thoroughly conducted interviews and well-written narratives. Vocabulary: holler, Appalachian, flank, rural, transformed, vantage, indicate, conventional, imploded, ratcheted, contemporary, abhorrent, sundering, octagonal, mingle, institution, pursue, fellowship, intolerable, custody, winces, bristle, continuum, niche, affluent, ruefully, hogan, extracurricular, negotiate, prospect, lucrative Extension Activities: 1. There have been many different interpretations of the American dream. Examine the Library of Congress's page about the American dream at [43]http://memory.loc.gov/ammem/ndlpedu/lessons/97/dream/thedream.html . Do you hold an "American dream"? Create a poster illustrating your ideas. 2. Research Berea College ([44]http://www.berea.edu) and prepare a presentation for a high school college fair. On a poster, highlight the notable and unique aspects of this institution and illustrate what makes it different from other higher education institutions. 3. Read "Bloomability" by Sharon Creech or "Deliver Us From Normal" by Kate Klise, novels which address issues of comfort and class. Then write a book review exploring the plausibility of the plot based on what you have learned and experienced with social class and social class comfort zones. 4. 
With your classmates, use information from your interviews to create illustrated and annotated timelines and then display them to show the variety of paths people have traveled within and between social classes. Interdisciplinary Connections: Journalism - Create a poll that asks members of your community about their comfort zone. Include questions about locations where people may feel like an insider or an outsider, as well as questions about general knowledge people in the community have. (Consider the fact that Ms. Justice felt left out when her contemporaries talked about Che Guevara and Mt. Vesuvius.) Compile and analyze your findings. Write an article for your school's newspaper. Media Studies - Watch a movie where the main character is clearly outside of his or her class-based comfort zone, such as "Breakfast at Tiffany's" (1961), "My Fair Lady" (1964), "Pretty Woman" (1990) or "The Princess Diaries" (2001). Then write a paper analyzing the main character's comfort zone and her discomfort with a change in social class. Did gender play any role at all in the issues as dramatized in the film? How might the class issues play out differently if the gender roles were reversed? Teaching with The Times - Read the entire class series. Write a pitch letter suggesting a topic for a future article in the series that relates to children or teenagers and social class. Include your rationale for why such an article should be included in the series. To order The New York Times for your classroom, [45]click here. Other Information on the Web The Learning Network's Class Matters special section ([46]http://www.nytimes.com//learning/issues_in_depth/20050515.html) provides the articles from this series, lesson plans, interactive graphics and more for use in your classroom. PBS sponsored a film entitled "People Like Us," which deals with social mobility and class: [47]http://www.pbs.org/peoplelikeus/film/index.html. 
Academic Content Standards: McREL This lesson plan may be used to address the academic standards listed below. These standards are drawn from [48]Content Knowledge: A Compendium of Standards and Benchmarks for K-12 Education: 3rd and 4th Editions and have been provided courtesy of the [49]Mid-continent Research for Education and Learning in Aurora, Colorado. Grades 6-8 Behavioral Studies Standard 2 - Understands various meanings of social group, general implications of group membership, and different ways that groups function. Benchmark: Understands that a large society may be made up of many groups, and these groups may contain many distinctly different subcultures (e.g., associated with region, ethnic origin, social class, interests, values) Behavioral Studies Standard 4 - Understands conflict, cooperation, and interdependence among individuals, groups, and institutions. Benchmark: Understands how role, status, and social class may affect interactions of individuals and social groups Civics Standard 9 - Understands the importance of Americans sharing and supporting certain values, beliefs, and principles of American constitutional democracy. Benchmark: Knows how an American's identity stems from belief in and allegiance to shared political values and principles, and how this identity differs from that of most other nations, which often base their identity on such things as ethnicity, race, religion, class, language, gender, or national origin Language Arts Standard 1 - Uses the general skills and strategies of the writing process. Benchmarks: Uses content, style, and structure (e.g., formal or informal language, genre, organization) appropriate for specific audiences (e.g., public, private) and purposes (e.g., to entertain, to influence, to inform); Writes persuasive compositions; Writes compositions that address problems/solutions Language Arts Standard 8 - Uses listening and speaking strategies for different purposes. 
Benchmarks: Plays a variety of roles in group discussions; Asks questions to seek elaboration and clarification of ideas; Uses strategies to enhance listening comprehension Grades 9-12 Behavioral Studies Standard 1 - Understands that group and cultural influences contribute to human development, identity, and behavior. Benchmarks: Understands that social distinctions are a part of every culture, but they take many different forms (e.g., rigid classes based solely on parentage, gradations based on the acquisition of skill, wealth, and/or education); Understands that people often take differences (e.g., in speech, dress, behavior, physical features) to be signs of social class; Understands that the difficulty of moving from one social class to another varies greatly with time, place, and economic circumstances Civics Standard 13 - Understands the character of American political and social conflict and factors that tend to prevent or lower its intensity. Benchmarks: Knows how universal public education and the existence of a popular culture that crosses class boundaries have tended to reduce the intensity of political conflict (e.g., by creating common ground among diverse groups) Language Arts Standard 1 - Uses the general skills and strategies of the writing process. Benchmarks: Uses strategies to address writing to different audiences; Uses strategies to adapt writing for different purposes; Writes fictional, biographical, autobiographical, and observational narrative compositions; Writes persuasive compositions that address problems/solutions or causes/effects; Writes reflective compositions Language Arts Standard 8 - Uses listening and speaking strategies for different purposes. 
Benchmarks: Asks questions as a way to broaden and enrich classroom discussions; Uses a variety of strategies to enhance listening comprehension; Adjusts message wording and delivery to particular audiences and for particular purposes (e.g., to defend a position, to entertain, to inform, to persuade) References 30. http://www.nytimes.com/learning/teachers/featured_articles/20050520friday.html 37. http://www.nytimes.com/learning/teachers/lessons/bio.html#MichelleSale 38. http://www.nytimes.com/learning/teachers/lessons/bio.html#AndreaPerelman 39. http://www.nytimes.com/learning/teachers/lessons/20050520friday.html#ic 40. http://www.nytimes.com/learning/teachers/lessons/20050520friday.html#standards 41. http://www.nytimes.com/learning/teachers/featured_articles/20050520friday.html 42. http://www.nytimes.com/learning/teachers/featured_articles/20050520friday.html 43. http://memory.loc.gov/ammem/ndlpedu/lessons/97/dream/thedream.html 44. http://www.berea.edu/ 45. http://www.nytimes.com/learning/teachers/NIE/index.html 46. http://www.nytimes.com//learning/issues_in_depth/20050515.html 47. http://www.pbs.org/peoplelikeus/film/index.html 48. http://www.mcrel.org/standards-benchmarks/ 49. http://www.mcrel.org/ From checker at panix.com Sun May 22 17:03:15 2005 From: checker at panix.com (Premise Checker) Date: Sun, 22 May 2005 13:03:15 -0400 (EDT) Subject: [Paleopsych] NYT: Seriously, the Joke Is Dead Message-ID: Seriously, the Joke Is Dead http://www.nytimes.com/2005/05/22/fashion/sundaystyles/22joke.html By WARREN ST. JOHN IN case you missed its obituary, the joke died recently after a long illness, of, oh, 30 years. Its passing was barely noticed, drowned out, perhaps, by the din of ironic one-liners, snark and detached bons mots that pass for humor these days. The joke died a lonely death. There was no next of kin to notify, the comedy skit, the hand-buzzer and Bob Newhart's imaginary telephone monologues having passed on long before. 
But when people reminisce about it, they always say the same thing: the joke knew how to make an entrance. "Two guys walked into a bar"; "So this lady goes to the doctor"; "Did you hear the one about the talking parrot?" The new humor sneaks by on little cat feet, all punch line and no setup, and if it bombs, you barely notice. The joke insisted on everyone's attention, and when it bombed - wow. "A joke is a way to say, 'I'm going to do something funny now,' " said Penn Jillette, the talking half of the comedy and magic duo Penn & Teller and a producer of "The Aristocrats," a new documentary about an old dirty joke of the same name. "If I don't get a laugh at the end, I'm a failure." It's a matter of faith among professional comics that jokes - the kind that involve a narrative setup, some ridiculous details and a punch line - have been displaced by observational humor and one-liners. Lisa Lampanelli, who describes herself as the world's only female insult comic, said that in the business, straight jokes were considered "the kiss of death." "You don't tell joke jokes onstage ever," she said. "Because then you're a big hack." But out in the real world, the joke hung on for a while, lurking in backwaters of male camaraderie like bachelor parties and trading floors and in monthly installments of Playboy's "Party Jokes" page. Then jokes practically vanished. To tell a joke at the office or a party these days is to pronounce oneself a cornball, an attention hog, and of course to risk offending someone, a high social crime. "I can't remember the last time I was sitting around and heard someone tell a good joke," Ms. Lampanelli said. While many in the world of humor and comedy agree that the joke is dead, there is little consensus on who or what killed it or exactly when it croaked. Theories abound: the atomic bomb, A.D.D., the Internet, even the feminization of American culture, have all been cited as possible causes. 
In the academic world scholars have been engaged in a lengthy postmortem of the joke for some time, but still no grand unifying theory has emerged. "There isn't a lot of agreement," said Don L. F. Nilsen, the executive secretary of the International Society for Humor Studies and a professor of linguistics at Arizona State University. Among comics, the most cited culprit in the death of the joke is so-called "political correctness" or, at least, a heightened sensitivity to offending people. Mr. Jillette said he believed most of the best jokes have a mean-spirited component, and that mean-spiritedness is out. "You used to feel safer telling jokes," he said. "Since all your best material is mean-spirited, you feel less safe. You're worried some might think that you really have this point of view." Older comics tend to put the blame on the failings of younger generations. Robert Orben, 78, a former speechwriter for President Gerald R. Ford and the author of several manuals for comedians, said he believed a combination of shortened attention spans and lack of backbone among today's youth made them ill-suited for joke telling. "A young person today has a nanosecond attention span, so whatever you do in humor has to be short," he said. "Younger people do not wait for anything that takes time to develop. We're going totally to one-liners." "Telling a joke is risk taking," Mr. Orben added. "Younger people are more insecure and not willing to put themselves on the line, so a quick one-liner is much safer." (Asked if he had a favorite joke, Mr. Orben said, "The Washington Redskins," suggesting that even veteran joke tellers might have abandoned the form.) Scholars say that while humor has always been around - in ancient Athens, for example, a comedians' club called the Group of 60 met regularly in the temple of Herakles - the joke has gone in and out of fashion. 
In modern times its heyday was probably the 1950's, but the joke's demise began soon after, a result of several seismic cultural shifts. The first of those, Mr. Nilsen said, was the threat of nuclear annihilation. "Before the atomic bomb everyone had a sense that there was a future," Mr. Nilsen said. "Now we're at the hands of fate. We could go up at any moment. In order to deal with something as horrendous as that, we've become a little cynical." Gallows humor and irony, Mr. Nilsen said, were more suited to this dire condition than absurd stories about talking kangaroos, tumescent parrots and bears that sodomize hunters. (Don't know that one? Ask your granddad.) Around the same time, said John Morreall, a religion professor and humor scholar at the College of William and Mary, the roles of men and women began to change, which had implications for the joke. Telling old-style jokes, he said, was a masculine pursuit because it allowed men to communicate with one another without actually revealing anything about themselves. Historically, women's humor was based on personal experience, and conveyed a sense of the teller's likes and dislikes, foibles and capacity for self-deprecation. The golden age of joke telling corresponded with a time when men were especially loath to reveal anything about their inner lives, Mr. Morreall said. But over time men let down their guard, and comics like Lenny Bruce, George Carlin and later Jerry Seinfeld embraced the personal, observational style. "A very common quip was, 'Women can't tell jokes,' " Mr. Morreall said. "I found that women can't remember jokes. That's because they don't give a damn. Their humor is observational humor about the people around that they care about. Women virtually never do that old-style stuff." "Women's-style humor was ahead of the curve," he said. "In the last 30 years all humor has caught up with women's humor." 
The mingling of the sexes in the workplace and in social situations wasn't particularly good for the joke either, as jokes that played well in the locker room didn't translate to the conference room or the co-ed dinner party. And in any event, scholars say, in a social situation wit plays better than old-style joke telling. Witty remarks push the conversation along and enliven it, encouraging others to contribute. Jokes, on the other hand, cause conversation to screech to a halt and require everyone to focus on the joke teller, which can be awkward. Whatever tenuous hold the joke had left by the 1990's may have been broken by the Internet, Mr. Nilsen said. The torrent of e-mail jokes in the late 1990's and joke Web sites made every joke available at once, essentially diluting the effect of what had been a spoken form. While getting up and telling a joke requires courage, forwarding a joke by e-mail takes hardly any effort at all. So everyone did it, until it wasn't funny anymore. "The Aristocrats," the documentary produced by Mr. Jillette and the comic Paul Provenza, says a lot about what the straight-up joke once was, and what it isn't any longer. The film, which was shown at Sundance in January and will be released in theaters this summer, features dozens of comics talking about and performing an over-the-top vaudeville standard about a family that shows up at a talent agency, looking for representation. The talent agent agrees to watch them perform, at which point the family goes into a crazed fit of orgiastic and scatological mayhem, the exact details of which vary from comic to comic. The punch line comes when the agent asks the family what they call their bizarre act. The answer: "The Aristocrats!" Much of the humor in the documentary comes not from the joke, which nearly everyone in the film concedes is lousy, but from watching modern-day observational comics like Mr. 
Carlin, Paul Reiser and Gilbert Gottfried perform in the anachronistic mode of Buddy Hackett, Milton Berle and Red Skelton. Imagine watching a documentary of contemporary rock guitarists doing their teenage versions of the solo in "Free Bird" and you'll get the idea; with each rendition it becomes more and more clear why people don't do it anymore. "Part of the joke is that it's even more inappropriate because we don't do that anymore," Mr. Nilsen said. One paradox about the death of the joke: It may result in more laughs. Joke tellers, after all, are limited by the number of jokes they can memorize, while observational wits never run out of material. And Mr. Morreall said that because wits make no promise to be funny, the threshold for getting a laugh is lower for them than for joke tellers, who always battle high expectations. "Jon Stewart just has to twist his eyebrows a little bit, and people laugh," he said. "It's a much easier medium." Some comics who grew up in the age of the joke say they are often amazed at how easy crowds are in the era of observational humor. Shelley Berman, 79, a comic whose career took off on "The Ed Sullivan Show" and who now plays Larry David's father on the HBO show "Curb Your Enthusiasm," said these days even the most banal remark seemed to get a response. "I don't tell jokes in my act," he said. "But if I tell an audience I don't tell jokes, I'll get people laughing at that line." 
From checker at panix.com Sun May 22 17:03:34 2005 From: checker at panix.com (Premise Checker) Date: Sun, 22 May 2005 13:03:34 -0400 (EDT) Subject: [Paleopsych] NYT Idiotorial: A Surprising Leap on Cloning Message-ID: A Surprising Leap on Cloning http://www.nytimes.com/2005/05/22/opinion/22sun2.html South Korean scientists stunned their rivals around the world last week by announcing that they had produced the first human embryos that were genetic matches for diseased or injured patients, and had done so by a highly efficient method that could bring further rapid advances in cloning. It was sobering evidence that leadership in "therapeutic cloning" has shifted abroad while American scientists, hamstrung by political and religious opposition, make do with private or state funds in the absence of federal support. The Korean achievement, published in the online edition of the journal Science, makes the current debate in Congress over federal financing of stem cell research look pathetically behind the times. Under current restrictions imposed by President Bush, federal money can be used for research on 20 stem cell lines that were derived from surplus embryos at fertility clinics years ago, but not on any newly derived lines. A House bill that may come up for a vote soon would expand the number of surplus embryos that could be studied but would not allow federal funding for therapeutic cloning, the most promising avenue for stem cell research. That would be a missed opportunity. Stem cells derived from cloned human embryos that are genetically matched to sick patients are potentially much more useful than stem cells derived from surplus embryos at fertility clinics, both for research and for potential treatments. Since cloned embryos carry the genetic makeup of patients with known diseases, scientists can study how those diseases develop from the earliest stages and can perhaps find drug treatments to interrupt the process. 
And if scientists ultimately succeed in converting the stem cells themselves into replacement tissues to repair damaged organs, those tissues would have the best chance of avoiding rejection by a patient's immune system if they were genetically matched to the patient through therapeutic cloning. Unfortunately, the House has twice passed bills to ban therapeutic cloning outright, not just restrict federal financing, and President Bush remains "dead set against human cloning," according to a White House spokesman. The president threatened to veto even the modest proposals to use more surplus embryos from fertility clinics. In the upcoming struggles over stem cell legislation, supporters of sound science must ensure that no ban is imposed on therapeutic cloning that would further shackle American researchers while scientists in Asia and Britain forge ahead. From checker at panix.com Sun May 22 17:03:44 2005 From: checker at panix.com (Premise Checker) Date: Sun, 22 May 2005 13:03:44 -0400 (EDT) Subject: [Paleopsych] NYT: Kristof: China, the World's Capital Message-ID: China, the World's Capital New York Times opinion column by Nicholas D. Kristof, 5. http://www.nytimes.com/2005/05/22/opinion/22kristof.html KAIFENG, China As this millennium dawns, New York City is the most important city in the world, the unofficial capital of planet Earth. But before we New Yorkers become too full of ourselves, it might be worthwhile to glance at dilapidated Kaifeng in central China. Kaifeng, an ancient city along the mud-clogged Yellow River, was by far the most important place in the world in 1000. And if you've never heard of it, that's a useful warning for Americans - as the Chinese headline above puts it, in a language of the future that many more Americans should start learning, "glory is as ephemeral as smoke and clouds." As the world's only superpower, America may look today as if global domination is an entitlement. 
But if you look back at the sweep of history, it's striking how fleeting supremacy is, particularly for individual cities. My vote for most important city in the world in the period leading up to 2000 B.C. would be Ur, Iraq. In 1500 B.C., perhaps Thebes, Egypt. There was no dominant player in 1000 B.C., though one could make a case for Sidon, Lebanon. In 500 B.C., it would be Persepolis, Persia; in the year 1, Rome; around A.D. 500, maybe Changan, China; in 1000, Kaifeng, China; in 1500, probably Florence, Italy; in 2000, New York City; and in 2500, probably none of the above. Today Kaifeng is grimy and poor, not even the provincial capital and so minor it lacks even an airport. Its sad state only underscores how fortunes change. In the 11th century, when it was the capital of Song Dynasty China, its population was more than one million. In contrast, London's population then was about 15,000. An ancient 17-foot painted scroll, now in the Palace Museum in Beijing, shows the bustle and prosperity of ancient Kaifeng. Hundreds of pedestrians jostle each other on the streets, camels carry merchandise in from the Silk Road, and teahouses and restaurants do a thriving business. Kaifeng's stature attracted people from all over the world, including hundreds of Jews. Even today, there are some people in Kaifeng who look like other Chinese but who consider themselves Jewish and do not eat pork. As I roamed the Kaifeng area, asking local people why such an international center had sunk so low, I encountered plenty of envy of New York. One man said he was arranging to be smuggled into the U.S. illegally, by paying a gang $25,000, but many local people insisted that China is on course to bounce back and recover its historic role as world leader. "China is booming now," said Wang Ruina, a young peasant woman on the outskirts of town. "Give us a few decades and we'll catch up with the U.S., even pass it." She's right. The U.S. 
has had the biggest economy in the world for more than a century, but most projections show that China will surpass us in about 15 years, as measured by purchasing power parity. So what can New York learn from a city like Kaifeng? One lesson is the importance of sustaining a technological edge and sound economic policies. Ancient China flourished partly because of pro-growth, pro-trade policies and technological innovations like curved iron plows, printing and paper money. But then China came to scorn trade and commerce, and per capita income stagnated for 600 years. A second lesson is the danger of hubris, for China concluded it had nothing to learn from the rest of the world - and that was the beginning of the end. I worry about the U.S. in both regards. Our economic management is so lax that we can't confront farm subsidies or long-term budget deficits. Our technology is strong, but American public schools are second-rate in math and science. And Americans' lack of interest in the world contrasts with the restlessness, drive and determination that are again pushing China to the forefront. Beside the Yellow River I met a 70-year-old peasant named Hao Wang, who had never gone to a day of school. He couldn't even write his name - and yet his progeny were different. "Two of my grandsons are now in university," he boasted, and then he started talking about the computer in his home. Thinking of Kaifeng should stimulate us to struggle to improve our high-tech edge, educational strengths and pro-growth policies. For if we rest on our laurels, even a city as great as New York may end up as Kaifeng-on-the-Hudson. 
E-mail: nicholas at nytimes.com From checker at panix.com Sun May 22 17:03:53 2005 From: checker at panix.com (Premise Checker) Date: Sun, 22 May 2005 13:03:53 -0400 (EDT) Subject: [Paleopsych] NYT: Taking Care of the Koran Message-ID: Taking Care of the Koran http://www.nytimes.com/2005/05/22/weekinreview/22reading.html By PETER EDIDIN In the furor last week over a Newsweek item, since retracted, saying that Americans at Guantánamo Bay, Cuba, flushed the Koran down a toilet while interrogating Muslim detainees, it was noted that the Defense Department had issued instructions on the proper treatment of the Koran. The instructions were contained in a memo released by the Joint Detention Operation Group - JDOG - to all its personnel at Guantánamo in January 2003. Excerpts follow, with the spelling and grammar of the military's text. Intent To ensure the safety of the detainees and MP's while respecting the cultural dignity of the Korans thereby reducing the friction over the searching the Korans. JTF-GTMO personnel directly working with detainees will avoid handling or touching the detainee's Koran whenever possible. When military necessity does require the Koran to be search , the subsequent procedures will be followed. Inspection a. The MP informs the detainee that the chaplain or a Muslim interpreter will inspect Koran. If the detainee refuses the inspection at any time, the noncompliance is reported to the Detention Operations Center (DOC) and logged appropriately by the block NCO. b. The Koran will not be touched or handled by the MP. c. The chaplain or Muslim interpreter will give instructions to the detainee who will handle the Koran. He may or may not require a language specific interpreter. d. The inspector is examining so as to notice an unauthorized items, markings, or any indicators that raises suspicion about the contents of the Koran. e. 
The inspector will instruct the detainee to first open the one cover with one hand while holding the Koran in the other thus exposing the inside cover completely. f. The inspector instructs the detainee to open pages in an upright manner (as if reading the Koran). This is a random page search and not every page is to be turned. Pages will be turned slowly enough to clearly see the pages. g. The inspector has the detainee show the inside of the back cover of the Koran. h. The detainee is instructed to show both ends of the Koran while the book is closed so that inspector can note the binding while closed paying attention to abnormal contours or protrusions associated with the binding. The intent is to deduce if anything may be in the binding without forcing the detainee to expose the binding, which may be construed as culturally insensitive or offensive given the significance of the Koran. i. How the detainee reacted, observation by other detainees, and other potentially relevant observations will be annotated appropriately on the block significant activities sheet as well as staff journal. Handling a. Clean gloves will be put on in full view of the detainees prior to handling. b. Two hands will be used at all times when handling the Koran in manner signaling respect and reverence. Care should be used so that the right hand is the primary one used to manipulate any part of the Koran due to the cultural association with the left hand. Handle the Koran as if it were a fragile piece of delicate art. c. Ensure that the Koran is not placed in offensive areas such as the floor, near the toilet or sink, near the feet, or dirty/wet areas. Removal a. Korans should be left in the cell as a general rule ..., even when a detainee is moved to another cell or block. In principal, every cell ... will have a Koran "assigned" to it. b. 
If a Koran must be removed at the direction the CJDOG, the detainee library personnel or chaplain will be contacted to retrieve and properly store the Koran in the detainee library. c. If the chaplain, librarian or Muslim interpreter ... cannot remove the Koran, then the MP may remove the Koran after approved by the DOC ... Place a clean, dry, detainee towel on the detainee bed and then place the Koran on top of the clean towel in a manner, which allows it to be wrapped without turning the Koran over at any time in a reverent manner. ... From checker at panix.com Sun May 22 17:04:28 2005 From: checker at panix.com (Premise Checker) Date: Sun, 22 May 2005 13:04:28 -0400 (EDT) Subject: [Paleopsych] NYT: Enron Offers an Unlikely Boost to E-Mail Surveillance Message-ID: This is about finding patterns. Frank --------------- Enron Offers an Unlikely Boost to E-Mail Surveillance http://www.nytimes.com/2005/05/22/weekinreview/22kola.html [Click the URL to find a great graphic and a not so great one.] By GINA KOLATA AS an object of modern surveillance, e-mail is both reassuring and troubling. It is a potential treasure trove for investigators monitoring suspected terrorists and other criminals, but it also creates the potential for abuse, by giving businesses and government agencies an efficient means of monitoring the attitudes and activities of employees and citizens. Now the science of e-mail tracking and analysis has been given an unlikely boost by a bitter chapter in the history of corporate malfeasance - the Enron scandal. In 2003, the Federal Energy Regulatory Commission posted the company's e-mail on its Web site, about 1.5 million messages. After duplicates were weeded out, a half-million e-mails were left from about 150 accounts, including those of the company's top executives. 
Most were sent from 1999 to 2001, a period when Enron executives were manipulating financial data, making false public statements, engaging in insider trading, and the company was coming under scrutiny by regulators. Because of privacy concerns, large e-mail collections had not previously been made publicly available, so this marked the first time scientists had a sizable e-mail network to experiment with. "While it's sad for the people at Enron that this happened, it's a gold mine for researchers," said Dr. David Skillicorn, a computer scientist at Queen's University in Canada. Scientists had long theorized that tracking the e-mailing and word usage patterns within a group over time - without ever actually reading a single e-mail - could reveal a lot about what that group was up to. The Enron material gave Mr. Skillicorn's group and a handful of others a chance to test that theory, by seeing, first of all, if they could spot sudden changes. For example, would they be able to find the moment when someone's memos, which were routinely read by a long list of people who never responded, suddenly began generating private responses from some recipients? Could they spot when a new person entered a communications chain, or if old ones were suddenly shut out, and correlate it with something significant? There may be commercial uses for the same techniques. For example, they may enable advertisers to do word searches on individual e-mail accounts and direct pitches based on word frequency. "Will you let your e-mail be mined so some car dealer can send information to you on car deals because you are talking to your friends about cars?" asks Dr. Michael Berry, a computer scientist at the University of Tennessee who has been analyzing the data. 
Working with the Enron e-mail messages, about a half-dozen research groups can report that after just a few months of study they have already learned that they can glean telling information and are refining their ability to sort and analyze it. Dr. Kathleen Carley, a professor of computer science at Carnegie Mellon University, has been trying to figure out who were the important people at Enron by the patterns of who e-mailed whom, and when and whether these people began changing their e-mail communications when the company was being investigated. Companies have organizational charts, but they reveal little about how things really work, Dr. Carley said. Companies actually operate through informal networks, which can be revealed by analyzing "who spends time talking to whom, who are the power brokers, who are the hidden individuals who have to know what's going on," she said. With the Enron data, Dr. Carley continued, "what you see is that prior to the investigation there is this surge in activity among the people at the top of the corporate ladder." But she adds, "as soon as the investigation starts, they stop communicating with each other and start communicating with lawyers." It showed, she says, "that they were becoming very nervous." The analyses also found someone so junior she did not show up on organization charts but who, whichever way the e-mail data was mined, "shows up as a person of interest," Dr. Skillicorn said, in the language of intelligence analysts. In the investigation of a terror network, pinpointing such a person could be of enormous significance. Dr. Berry said the e-mail traffic patterns tracked major events, like the manipulation of California energy prices. "We could see how things built up right before the bankruptcy," he said. There were e-mail surges with each crisis, pointing to a problem that was consuming Enron employees. 
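The informal-network idea described here can be sketched with a simple degree-centrality count over sender-recipient pairs, without reading a single message body. This is a hypothetical illustration: the names, the message list, and the single centrality measure are invented stand-ins, not Dr. Carley's actual data or methods (which involve far richer network metrics).

```python
# Rank people by how many distinct correspondents they have in the
# who-e-mailed-whom graph. All names and message pairs are invented.

# (sender, recipient) pairs, as might be parsed from message headers
messages = [
    ("lay", "skilling"), ("skilling", "lay"), ("fastow", "skilling"),
    ("junior_analyst", "lay"), ("junior_analyst", "skilling"),
    ("junior_analyst", "fastow"), ("trader1", "junior_analyst"),
]

# Build an undirected contact graph: person -> set of correspondents.
neighbors = {}
for sender, recipient in messages:
    neighbors.setdefault(sender, set()).add(recipient)
    neighbors.setdefault(recipient, set()).add(sender)

# Degree centrality: people with many correspondents are candidate
# "power brokers", even if they sit low on the formal org chart.
ranked = sorted(neighbors, key=lambda p: len(neighbors[p]), reverse=True)
for person in ranked:
    print(person, len(neighbors[person]))
```

In this toy graph the hypothetical junior analyst comes out best connected, mirroring the "person of interest" effect described above: someone absent from the organization chart can still dominate the communication network.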
And in each crisis, there were features of certain e-mail messages - word choices, routing patterns - that allowed the computer scientists to isolate them from the morass of irrelevant personal or business messages. One thing that didn't show up when the researchers screened for changes in word use was guardedness, said Dr. Skillicorn, a failure that was revealing in itself. Ordinarily, he said, when people are being deceptive they are more self-conscious, and their word use becomes simpler, as though they are trying too hard to sound natural. But that apparently never occurred at Enron because its employees remained unconcerned while they engaged in illegal activity. "It wasn't a case of keeping a low profile," Dr. Skillicorn said. "They didn't worry about the story they were telling." The scientists who are studying the Enron data said they assumed intelligence agencies are doing similar classified analyses on international e-mail traffic. Since World War II, a five-nation consortium of the United States, Canada, Britain, Australia and New Zealand has cooperated in a vast communications collection and analysis program called Echelon, for example, one that has assumed increasing importance since the terror attacks of Sept. 11, 2001. No one in the unclassified world knows precisely what is being done with the Echelon data. But, Dr. Berry said, surveillance in the civilian world could one day have troubling consequences. It could allow companies, without ever actually infringing on e-mail conversations, to track employee attitudes and activities closely and easily. "They can monitor discussions without actually isolating individuals," Dr. Berry said. "They can assess morale. If they make a cut in salaries, how long does the unhappiness go on? You could track topics and get a sense of how people are responding to policies and flag potential hot spots." Or, he said, managers might be able to learn which people have too much time on their hands. And, as Dr. 
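The guardedness screen described here rests on the observation that deceptive writing tends toward simpler word use. A minimal sketch, with invented sample texts and two crude lexical statistics (average word length and type-token ratio) standing in as assumed proxies for whatever measures the researchers actually used:

```python
import re

def lexical_stats(text):
    """Return (average word length, type-token ratio) for a text."""
    words = re.findall(r"[a-z']+", text.lower())
    avg_len = sum(len(w) for w in words) / len(words)
    type_token = len(set(words)) / len(words)  # vocabulary richness
    return avg_len, type_token

# Invented examples of "normal" and "simplified" business prose.
period_a = "the quarterly derivative positions require immediate restructuring"
period_b = "we need to fix this now it is bad we need to move fast"

len_a, ttr_a = lexical_stats(period_a)
len_b, ttr_b = lexical_stats(period_b)

# Guardedness heuristic (an assumption, not the published method):
# flag a period whose words get markedly shorter and vocabulary shrinks.
if len_b < 0.8 * len_a and ttr_b < ttr_a:
    print("period B shows simpler word use - possible guardedness")
```

Run over time windows of a real corpus, a drop in both statistics for one account would be the kind of signal the screen looks for; at Enron, tellingly, no such drop appeared.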
Skillicorn notes, if you try to write bland e-mail messages with hidden communications, chances are the programs will pick those out, too. "It's clearly Orwellian," Dr. Berry said. "And I know that freaks people out." From checker at panix.com Sun May 22 17:04:40 2005 From: checker at panix.com (Premise Checker) Date: Sun, 22 May 2005 13:04:40 -0400 (EDT) Subject: [Paleopsych] Book World: In the Beginning Message-ID: In the Beginning http://www.washingtonpost.com/wp-dyn/content/article/2005/05/19/AR2005051901405_pf.html Reviewed by Mark Oppenheimer Sunday, May 22, 2005; BW08 WHOSE BIBLE IS IT? A History of the Scriptures Through the Ages By Jaroslav Pelikan. Viking. 274 pp. $24.95 WHY THE JEWS REJECTED JESUS The Turning Point in Western History By David Klinghoffer. Doubleday, 247 pp. $24.95 AFTER THE APPLE Women in the Bible -- Timeless Stories of Love, Lust, and Longing By Naomi Harris Rosenblatt. Miramax. 264 pp. $23.95 Many authors who write about the Bible are so tendentious that their books are worthless; other writers are thoughtful and well-meaning but nonetheless argue as much from faith as from evidence. Which is why any syllabus of religion reading should begin with a book that teaches humility, reminding us how difficult it is even for the faithful to get at God's words. After all, God is perfect, but translators and scribes are not. One such book is Whose Bible Is It ?, a new history of how the Bible was written, redacted and translated into its present editions, written by the esteemed church historian Jaroslav Pelikan. The book is far from perfect, but fortunately Pelikan is at his best where most readers will be at their worst: in antiquity. He begins with lucid, succinct explanations of the Hebrew Bible's translation into its first Greek edition, known as the Septuagint, then into Jerome's Latin version, the Vulgate. 
A fluent reader of Hebrew, Greek and Latin (and, for what it's worth, German, Italian, French, Russian, Slavonic and Czech), Pelikan is good at unsettling our notions of what the Bible really says. By the end of the 4th century, there were competing Hebrew, Latin and Greek versions of every major book of the Bible, and almost nobody could read them all and compare. Few Greeks would know, as Pelikan does, that what they read as "They have pierced my hands and feet," a line from the 22nd psalm that Jesus cries on the cross in the New Testament, was originally rendered by Hebrew scribes as "Like lions [they maul] my hands and feet" -- which, lacking the "piercing," seems much less like an Old Testament foreshadowing of the crucifixion. But as Pelikan moves beyond antiquity, what promised to be a handy history of Bible translations becomes less thorough and more eccentric. He spends too little time, for example, on the numerous Bible translations published during the Reformation and the Renaissance, and he pays almost no attention to the 20th-century English versions of the Bible. Many of these are not new translations but editions of a standard translation annotated for particular niche audiences; the Christian publishing house Zondervan offers, for example, a Mom's Devotional Bible , a Recovery Devotional Bible , a Sports Devotional Bible and dozens more like them. While Pelikan may find these editions tangential to a narrative focused on figures like Luther, Gutenberg and Calvin, they are immediately relevant to many Christians' experience of the Bible today. Pelikan essays some pet scholarly theories, and many readers may not realize when he is moving from a recitation of acknowledged fact to an assertion of opinion. 
It is by no means obvious, for example, that the Babylonian Talmud -- the systematic compendium of Jewish law and teachings completed around the year 600 -- is some sort of analogue to the New Testament, just because they are both extensions of the Old Testament. This is Pelikan's most original point, one he repeats several times -- for example, "According to Judaism, the written Torah is made complete and fulfilled in the oral Torah, so that the Talmud is in many ways the Jewish counterpart to the New Testament." He is at his queerest in noting that New Testament expositor Martin Luther King Jr. marched for civil rights alongside a "scion of the Talmud," Rabbi Abraham Joshua Heschel -- a great Jewish teacher, to be sure, but not known as a Talmudist, except in the sense that all rabbis are "scions of the Talmud." Pelikan's juxtaposition is appealing -- look at the variety of wonders the Old Testament has made possible! -- but ultimately silly. The Talmud can be contemptuous of Christianity, and its main purpose is to adumbrate rules for living that -- it so happens -- Christians dismiss as legalistic, outmoded and unnecessary. Pelikan's commitment to this pairing surely derives from his well-meaning ecumenism: He is rightly lauded for his promotion of religious tolerance and his opposition to anti-Semitism, and this book concludes with a charge to read the Old and New Testaments and "to interpret them and reinterpret them over and over again -- and ever more studiously to do so together." Whose Bible Is It? will surely aid in that project. But it would be a more bracing, intellectually gratifying read if Pelikan were a tad less earnest -- and if he were franker, with himself and with us, about his agenda. What Pelikan does for the history of Bible, David Klinghoffer has done better for the history of Jews' resistance to Christianity. 
Why the Jews Rejected Jesus is an ambitious survey of a big topic, but Klinghoffer's frank conviction lends his material urgency and narrative verve; it's fun to read the words of someone so sure that he's right. An observant and politically conservative Jew, Klinghoffer is not one to wear his learning lightly; his column in the Jewish weekly the Forward is by turns smart and annoying, and he is never caught in the embarrassing position of giving his opponents the benefit of the doubt. But perhaps that makes him the perfect man to write a book on a topic so difficult. Given the influence of Mel Gibson, the medieval but still lingering "blood libel" that Jews use the blood of Christian infants to bake their Passover matzahs, and the notorious fraud known as The Protocols of the Elders of Zion that purported to expose a sinister Jewish cabal, Klinghoffer has decided that the Jews need to re-learn the ancient art of the disputation, the debate between Christianity and Judaism. Roman emperors used to force Jews to debate Christians in public, as a kind of sport. Klinghoffer also takes obvious pleasure in the mere unsheathing of his sword, but he has other, more pressing goals. He is tired of Jews who don't know their history and so say things like, "Jews have never proselytized." He is disappointed in Jews' ignorance of their own scripture, which Christians have for millennia twisted to what he considers heretical ends. And he is fed up with Jews ("Reform Jews," one can almost hear him hiss) ill-equipped to refute the apologia of Christian evangelists. Klinghoffer is a lively historian, and he's much fairer than his right-wing journalism would lead one to expect. His book is at once a primer on scripture, a vivid picture of the ancient and medieval dialogue between Jews and Christians, and a theological explication of why Christians' messianic claims have made so little sense to Jews. 
The most salient reason Jews didn't believe Jesus was the messiah, Klinghoffer persuasively argues, is that the Hebrew Bible, in books like Ezekiel, makes it quite clear what the reign of the messiah will look like -- and Jesus has accomplished nothing of the kind. There has been no ingathering of the exiles, no eternal peace, no rebuilding of the Temple -- none of the things the messiah was supposed to bring. And no interpretive trickery by Christians can get around that fact. Klinghoffer's principal appeal is not to the intellect but the gut. He has the courage to say what many Jews silently believe: This whole Christian thing just doesn't make much sense. It doesn't feel like the messiah has come; it's unlikely God would ever become human; and, above all, we like our religion and see no good reason to abandon it. The book's major weakness, though, is that in summing up all the reasons that Jews reject Jesus, Klinghoffer fails to include the most important reasons of all: simple and profound faith, emotions like loyalty, love, nostalgia and guilt, and cherished cultural traditions like Passover Seders and latkes at Hanukkah time. Klinghoffer's intellectual pugnacity leads him to miss these far homelier reasons that Jews don't choose apostasy. And these affective Jews, as we could call them, are the ones most likely to enjoy Naomi Harris Rosenblatt's After the Apple . Rosenblatt's common-sense explications of Bible stories involving women are not meant for scholars or amateur disputationists, but they may be just the thing for spiritually curious women (or men) seeking role models or inspiration. It's never a bad time to re-visit Sarah's jealousy of Hagar, Ruth's loyalty to Naomi or Esther's resourcefulness in facing the genocidal Haman. Moreover, Rosenblatt, a Washington-based psychotherapist, avidly looks for contemporary lessons in these old stories. 
Although her therapeutic style can rob the Bible of its grandeur and mystery -- Sarah is "a role model for women . . . fortunate to live more than a third of their lives after childbearing age" -- she speaks to a kind of religious person more common than the rationalist readers that Pelikan and Klinghoffer seem to be after. Rosenblatt writes for people who want comfort and guidance from God. She is telling us that the Hebrew Bible is a joy to read. She's assuring us of something Pelikan and Klinghoffer surely believe but never come right out and say: The reason we translate the Bible -- and the reason we fight over it -- is that its wisdom persists. Mark Oppenheimer is the author of "Thirteen and a Day: The Bar and Bat Mitzvah Across America." From kendulf at shaw.ca Sun May 22 21:38:58 2005 From: kendulf at shaw.ca (Val Geist) Date: Sun, 22 May 2005 14:38:58 -0700 Subject: [Paleopsych] Re: Paleopsych References: <193.404e4571.2fc169b7@aol.com> Message-ID: <001201c55f16$aa30eee0$873e4346@yourjqn2mvdn7x> Dear Howard, Steve in a recent posting pointed to Dr. Bruce Lipton's work and book on the Biology of Belief. And Stephen is right in judging this an important development. I have not read the book, but opened up Lipton's website, read an essay and heard an interview, and I think I understand what Lipton is all about. Three cheers for his emerging - however, I found absolutely nothing new in principle, only in particular. He is in a line of brilliant "epigeneticists" to emerge, and he has apparently gotten much the same treatment eminent epigeneticists before him have received. And that is what I want to deal with first. I was disappointed looking into his list of readings that none of these eminent men were cited be they Richard Goldschmidt, C. H. Waddington (who coined the word epigenetics over half a century ago) or more recently Soren Lovtrup. 
All challenged the conventional wisdom that heredity is deterministic, all challenged the conventional Neo-Darwinian wisdom - most shamelessly championed currently by Richard Dawkins - and all suffered severe rejection by Evolution's elite, none more than Richard Goldschmidt (1942). When I looked for that book, having heard all along how terribly mistaken Goldschmidt was, I could not find it in my university's library. I discovered that his books had been systematically removed. That this should happen to one of the century's truly great cytogeneticists was puzzling. At that time I suddenly ran into a fresh edition of Goldschmidt's book, reprinted at the behest of Stephen J. Gould. It is highly instructive to read from his pen just how the great Richard Goldschmidt was suddenly shunned. It boils down to three young turks going after him and destroying him: Ernst Mayr, George Gaylord Simpson and Bernhard Rensch. The means of destruction was ridicule using phrases out of context. There was little attention paid to the possibility that a great mind using his enormously rich experience and background at the end of a long life and prolific publication record was struggling trying to tell us something that he considers vitally important, but which clashes with conventional wisdom. To expedite matters: Goldschmidt's denigration robbed all but a few of us of the idea of flexible gene expression controlled by environment as fundamental to understanding speciation and thus evolution. And without that, there is no way that flexible environmentally controlled gene expression could enter into other fields - such as medicine! Next in line to be denigrated was the great embryologist, scholar of evolution and during WWII father of operations research on behalf of the British defense establishment C. H. Waddington. That's the father of Epigenetics. Note the title of one of his books: 1957 The Strategy of the Genes. 
If my memory serves me right, he was haughty, did not mince words, and was openly contemptuous of Neo-Darwinism where Goldschmidt tried to be polite. When during a conversation with Ernst Mayr I mentioned Waddington, Mayr became irritated, in fact noticeably so, and dismissed Waddington as someone who would not fit in. That is remarkable, as Mayr cites Waddington repeatedly in his own magnum opus, but in a remarkably limited way. The huge irony is that Mayr then writes in one of his last books on the Great Synthesis that, tragically, the embryologists were left out of this great evolutionary synthesis. Despite having read Waddington, Ernst Mayr - a mind not to be trifled with - but also Dawkins and Stephen Jay Gould, could not see what Waddington saw, namely that phenotype plasticity was profoundly important to explain much of evolution that neo-Darwinism did not. I am totally puzzled by that conceptual blindness. Next in line is the great Swedish epigeneticist Soren Lovtrup. He wrote Epigenetics: A Treatise on Theoretical Biology (1974), The Phylogeny of Vertebrata (1977), and Darwinism: The Refutation of a Myth (Croom Helm, 1987, ISBN 0-7099-4153-6, 469 pages). He is a vociferous but highly substantive opponent of Neo-Darwinism, and is totally ignored! I had the pleasure of meeting him a number of times and spending nearly a fortnight together in South Africa and Botswana's Okavango Delta. He has given up trying to communicate with the older generation, pinning his hope on the younger - which is also ignoring his excellent scholarship - see Bruce Lipton! Unfortunately, Lovtrup is not the best communicator, as well as being a very angry old man. However, taking the time and effort to read him is rewarding. Lipton is a re-discoverer of epigenetics, as was I.
He is dead on, but his discoveries are in principle not new, and it's a pity he does not integrate them with earlier findings, strengthening his case and giving due credit where credit is due. As I age I see again and again the same wheels being reinvented under different names - and Lipton is a case in point. I was delighted to hear that he, in the area of health, identified some of the same processes that I did and reported on in my 1978 Life Strategies... book. After all, the subtitle of that book is: Towards a biological theory of health. Independent re-discovery greatly strengthens the case for epigenetics, as this concept is played out with different facts. And I am delighted that Lipton pulled this concept into the area of health. In short: bring epigenetics into our understanding of Paleopsychology and quality of life. ----- Original Message ----- From: HowlBloom at aol.com To: paul.werbos at verizon.net ; paleopsych at paleopsych.org Sent: Saturday, May 21, 2005 9:51 PM Subject: [Paleopsych] Paul--is this yours? I ran across the following today while doing research for a national radio show--Coast to Coast--I'm about to do in 70 minutes (2am to 5am EST the night of 5/21 and morning of 5/22). It's exciting and sounds like the space-borne solar array you were talking about in an email several days ago. Is it one of the projects you've helped along?
Howard Modules of the USC School of Engineering's Information Sciences Institute's (ISI) proposed half-mile-long Space Solar Power System satellite self-assemble with what the researchers call "hormonal" software, funded by a consortium including NASA, the NSF, and the Electric Power Research Institute (EPRI) ---------- Howard Bloom Author of The Lucifer Principle: A Scientific Expedition Into the Forces of History and Global Brain: The Evolution of Mass Mind From The Big Bang to the 21st Century Visiting Scholar-Graduate Psychology Department, New York University; Core Faculty Member, The Graduate Institute www.howardbloom.net www.bigbangtango.net Founder: International Paleopsychology Project; founding board member: Epic of Evolution Society; founding board member, The Darwin Project; founder: The Big Bang Tango Media Lab; member: New York Academy of Sciences, American Association for the Advancement of Science, American Psychological Society, Academy of Political Science, Human Behavior and Evolution Society, International Society for Human Ethology; advisory board member: Youthactivism.org; executive editor -- New Paradigm book series. For information on The International Paleopsychology Project, see: www.paleopsych.org for two chapters from The Lucifer Principle: A Scientific Expedition Into the Forces of History, see www.howardbloom.net/lucifer For information on Global Brain: The Evolution of Mass Mind from the Big Bang to the 21st Century, see www.howardbloom.net ------------------------------------------------------------------------------ _______________________________________________ paleopsych mailing list paleopsych at paleopsych.org http://lists.paleopsych.org/mailman/listinfo/paleopsych ------------------------------------------------------------------------------
From checker at panix.com Sun May 22 22:04:30 2005 From: checker at panix.com (Premise Checker) Date: Sun, 22 May 2005 18:04:30 -0400 (EDT) Subject: [Paleopsych] NYT: Safire: Chimera Message-ID: Chimera On Language by William Safire, New York Times Magazine, 5.5.22 http://www.nytimes.com/2005/05/22/magazine/22ONLANGUAGE.html Hank (Don't Call Me Henry) Greely, professor of law and of genetics at Stanford University, created a stir in the scientific world, not to mention in the zoological fraternity, when he told Sharon Begley of The Wall Street Journal, ''The centaur has left the barn.'' A centaur is the mythical beast dreamed up by the Greeks with the head and arms and torso of a man and the body and legs of a horse. It is one example of a chimera, best known as a fire-breathing she-monster mixing a lion's head, a goat's body and a serpent's tail that gave ancient Greek children nightmares. (It's always described as a ''she-monster''; you never hear about chimerical ''he-monsters.'') Because we'll be hearing more of chimeras, let's first get the pronunciation straight: it's ky-MEER-uh or ke-MAIR-uh (take your pick) and not SHIMM-a-ruh, which sounds like an activity of one's sister Kate.
Until recently, the word meant ''crazy idea,'' expressed in dictionarese as ''fanciful notion, departure from reality,'' or in current pooh-poohing, ''bugaboo, scary illusion.'' Now, however, chimera's postmythic scientific meaning is coming to the fore: ''a combination of tissues of different genetic origin,'' or as defined by Jamie Shreeve in a prescient Times magazine article last month, ''an organism assembled out of living parts taken from more than one biological species.'' The old adjective chimerical is a modifier that goes to the early meaning of ''figment of imagination''; the newer chimeric is applied to genetic manipulation. What brought this into the public eye recently was an admonition to researchers by a committee of the National Academy of Sciences not to cross-breed species involving the human animal. This followed the rejection by the U.S. Patent Office of an application for making a humanzee, a proposed mixture of chimpanzee and human being (a name evidently preferred to chimpanbeing). The headline over the Wall Street Journal article was ''Now that Chimeras Exist, What if Some Turn Out Too Human?'' Medical researchers can have a serious purpose in implanting human cells in animals. For example, by using human cells to create a human immune system in a mouse, scientists can conduct experiments to enhance human immunity that would be unethical to try on human patients. The Stanford biologist Irving Weissman asked Greely's committee to come up with ethical guidelines for putting human nerve cells in a mouse brain to study diseases like Parkinson's and Alzheimer's -- but without ''humanizing'' the mouse. ''You don't want a human brain in a mouse with a person saying, 'Let me out,' '' Greely says. 
In a Library of Congress presentation this month with Michael Gazzaniga, the Dartmouth professor who pioneered cognitive neuroscience and is the author of ''The Ethical Brain,'' Greely observed, ''We care more about our brains and gonads than about our gallbladders.'' I immoderated that discussion about neuroethics and had a chance afterward to ask Greely how he came to the choice of words in his catchy comment, ''The centaur has left the barn.'' Wouldn't it have been more accurate to say ''is out of the barn?'' ''It's rooted in the old saying 'the horse is out of the barn,' of course,'' the lawyer-geneticist-ethicist replied. ''But to give it a modern feeling, I combined it with 'Elvis has left the building.' '' This guy knows how to fuse a chimeric phrase. BIG WET KISS Early in March, Senator Byron Dorgan, Democrat of North Dakota, proclaimed President Bush's Social Security ideas to be ''a big wet kiss to Wall Street.'' A couple of months later, when Bill Frist, the Senate majority leader, suggested what he considered a filibuster compromise, the Senate's Democratic leader, Harry Reid of Nevada, picked up the derisive phrase and gave it a slightly broader ideological scope: he called the Republican proposal ''a big wet kiss to the far right.'' I have been following this popular phrase closely (it's had more than 27,000 citations since Google started counting it) because of an interest in ''emoticons,'' a word coined in 1987. These punctupix use combinations of keyboard symbols -- asterisks, directional carets, hyphens, pound signs and the curvaceous tilde -- to signify emotional states. 
That symbol-art is more creative than simple initialese, like HAND for ''have a nice day'' or LOL for ''laughing out loud.'' (I am not so scornful of the secretive POS, which means ''ignore this message, or be most circumspect in your reply''; POS stands for ''parent over shoulder.'') The gradations of osculation include the soul kiss, also called the French kiss, in which the tongue is inserted into the partner's mouth (leading to the term tonsil hockey). There is also the butterfly kiss, seductively fluttering the eyelashes against the partner's cheek; the upside-down kiss, which should be self-explanatory; the passionate neck nuzzle, resulting in a bruise called a ''hickey'' or ''love bite'' and necessitating the wearing of a scarf for days; the air kiss, often blown by a ''walker,'' in which no physical contact is made; and the enthusiastic, juicy eyesucker, which I used to dutifully receive from my beloved grandma. A big wet kiss, however, is not a real kiss at all. The meaning of the phrase is ''fulsome praise,'' in its precise definition of ''lavish, excessive, immoderate, overweening.'' In its political usage, the attack phrase is intended to leave the recipient with a big red hickey. Send comments and suggestions to: safireonlanguage at nytimes.com. From checker at panix.com Sun May 22 22:04:39 2005 From: checker at panix.com (Premise Checker) Date: Sun, 22 May 2005 18:04:39 -0400 (EDT) Subject: [Paleopsych] NYT Mag: Can You Catch Obsessive-Compulsive Disorder? Message-ID: Can You Catch Obsessive-Compulsive Disorder? New York Times Magazine, 5.5.22 http://www.nytimes.com/2005/05/22/magazine/22OCD.html?pagewanted=print By LISA BELKIN To suffer from obsessive-compulsive disorder, many patients say, is to ''know you are crazy.'' Other forms of psychosis may envelop the sufferers until they inhabit the delusion. Part of the torture of O.C.D.
is, as patients describe it, watching as if from the outside as they act out their obsessions -- knowing that they are being irrational, but not being able to stop. They describe thoughts crowding their minds, nattering at them incessantly -- anxious thoughts, sexual thoughts, violent thoughts, sometimes all at the same time. Is the front door locked? Are there germs on my hands? Am I a murderer if I step on an ant? And they describe increasingly elaborate rituals to assuage those thoughts -- checking and rechecking door locks, washing and rewashing hands, walking carefully, slowly and in bizarre patterns to avoid stepping on anything. They feel driven to do things they know make no sense. There are researchers who believe that some of this disturbing cacophony -- specifically a subset found only in children -- is caused by something familiar and common. They call it Pediatric Autoimmune Neuropsychiatric Disorders Associated With Streptococcal Infection, or, because every disease needs an acronym, Pandas. And they are certain it is brought on by strep throat -- or more specifically, by the antibodies created to fight strep throat. If they are right, it is a compelling breakthrough, a map of the link between bacteria and at least one subcategory of mental illness. And if bacteria can cause O.C.D., then an antibiotic might mitigate or prevent it -- a Promised Land of a concept to parents who have watched their children change overnight from exuberant, confident and familiar to doubt-ridden, fear-laden strangers. Child psychiatrists have long known that sometimes O.C.D. in children can be like that, that it can come on fast, out of the blue, like a plague, and then last anywhere from days to months. If the typical graph of O.C.D. symptoms is a sine curve -- with episodes that ramp up slowly, peak gradually, then abate just as slowly -- the graph of rapid-onset O.C.D. is saw-toothed -- flat, then a sudden spike, followed by a relatively sharp drop, then flat again. 
The patterns certainly look as if they could be two separate disorders, with similar symptoms but different causes. Across the country, many doctors are convinced of this and are putting young sudden-onset O.C.D. patients on long-term doses of antibiotics. ''If I were to place bets,'' says Judith Rapoport, the child psychiatrist who first brought O.C.D. to public attention with her book ''The Boy Who Couldn't Stop Washing,'' ''that bet would be on the side of those who believe in Pandas.'' But as certain as some researchers are, there are others, just as smart, with just as many impressive publications and titles, who think the theory is wrong or, at best, that it is too early to tell. And this group is warning that the Pandas hypothesis is misguided, perhaps even dangerous. ''Equivocal, controversial, unproven,'' Dr. Stanford Shulman, chief of infectious disease at Children's Memorial Hospital in Chicago, says of the theory. Pandas stands at a familiar, necessary and utterly frustrating moment in medicine -- in the gap between what doctors think and what they know. Practically every byte of scientific knowledge passes through a moment like this, on its way to being accepted as fact or dismissed as falsehood. It has always been so, but in recent years several things about the process have changed. Science now does its thinking in public, with each incremental advance readily available online. And those waiting for answers are less patient and more involved. They don't ask their doctors; they bring their own suggestions. They don't want to wait for the results of a two-year double-blind placebo-controlled clinical trial before they act. Which means that they often find themselves acting before all the facts are in. Can strep bacteria cause obsessive-compulsive disorder? Do these children need penicillin or Prozac? Will we look back on these questions years from now and think, How could we have believed? Or, rather, How could we have doubted?
The most vocal voice in support of Pandas is Susan E. Swedo, a pediatrician and researcher at the National Institute of Mental Health. She was the first to identify the syndrome, and the one who gave it a name. She has been studying the relationship between strep and O.C.D. for her entire career. She began her work in the 80's, a time of discovery in the world of obsessive-compulsive disorder. Although the disease had long been known, it was not until 20 years ago that researchers began to understand how prevalent it was and not until a decade later that they came to see how often it occurred in children. In 1989, Rapoport published her best-selling book, taking the illness into the mainstream spotlight. When the television program ''20/20'' ran a segment about her book, it prompted 250,000 calls from worried parents who thought they recognized their children. And a good number of them, Rapoport says, were right. She estimates that more than one million children in the United States suffer from O.C.D. In fact, she argues, the disorder is one that often begins in childhood, which is why doctors should start looking for it then. Half of all adult O.C.D. patients look back and remember having repetitive thoughts and rituals when they were young, which is significantly higher than the percentage of adults with other psychiatric disorders who do. Rapoport strongly suspected that there was a medical model for at least some percentage of O.C.D. sufferers -- that the symptoms were not a result of emotional trauma (Freud's belief that it is caused by overly strict toilet training had long since fallen out of favor) but rather were caused by a biological trigger. She and her research fellows at the N.I.M.H. spent several years looking into it. Swedo was one of those fellows. Research had already shown that O.C.D. symptoms appear when there is damage to the basal ganglia, which is a cluster of neurons in the brain that acts as a gatekeeper for movement, thought and emotion. 
''So we set out to find every known condition that involved abnormalities of the basal ganglia,'' Swedo remembers. Huntington's disease was one. Parkinson's was another. Also on the list was Sydenham's chorea -- a movement disorder known to medicine since before the Middle Ages, when it was called Saint Vitus' dance. About 70 percent of patients who develop Sydenham's also develop O.C.D. Sydenham's is caused by rheumatic fever; rheumatic fever is in turn caused by Group A beta-hemolytic streptococcal bacteria. In other words, strep throat. The biological cascade from strep to Sydenham's starts when the body, thinking it is fighting the infection, begins to fight itself in a process known as molecular mimicry. The protein sheath that coats each invading bacterium cell is remarkably similar to the one that coats the native cells that form a particular part of the body. In this case, the protein code on the strep bacteria is a close match with the code on the cells in the basal ganglia. So the antibodies mistake the basal ganglia for strep and attack. This, of course, will not happen to every child who has strep throat, or even to most children, in the same way that every child who gets strep does not get rheumatic fever. ''It's the wrong germ in the wrong child at the wrong time,'' says Swedo, who suspects that some children are genetically predisposed toward Pandas. By the mid-90's, Swedo had graduated to her own research laboratory at the National Institute of Mental Health. Back then the status of her research looked like this: O.C.D., she knew, could be caused by damage to the basal ganglia. Sydenham's, too, was a result of such damage. Strep, by all accounts, was the cause of the damage in Sydenham's patients. Sydenham's patients often developed O.C.D. Given all that, the next logical question seemed obvious: Can strep cause O.C.D.? Swedo turned her attention anew to that subgroup of patients who developed their symptoms seemingly overnight. 
She and her collaborators hypothesized that this difference in onset could be the key to something important, a separate category, a differentiating wrinkle in a familiar pattern. It might not be the key to decoding the cause of all O.C.D., but it might explain some percentage of cases. Swedo and her researchers put out a request among those who treat and suffer from O.C.D., looking for subjects -- children whose symptoms had come on suddenly. They received hundreds of calls and then determined that 109 of those children could accurately be described as having had a rapid onset of symptoms. The stories the parents told, while different in their particulars, were remarkably similar at their core. The symptoms came on so quickly that most parents could tell you the exact date that their children's personalities changed. All these children woke up one morning, in the words of one parent, ''full-blown somebody else.'' The exact nature of the obsessions and compulsions differed from child to child (a fact that makes all O.C.D. tricky to diagnose). Some could not stop washing their hands or insisting they needed to use the toilet or checking to make sure that doors were closed and locked. Some developed overwhelming separation anxiety or worried that they would harm someone or do something wrong. Some had one cluster of these symptoms during their first episode and a different set of symptoms the next time around. Nearly half complained of joint pain, but not always of a sore throat. They were fidgety and moody and obstinate. They had ''bad thoughts,'' some sexual, some violent, some frightening, that they could not get out of their heads. 
The children were then tested for evidence that they had recently had strep -- either via throat culture, which would find active infection, or by a blood test that measures antibodies remaining after the actual infection is gone, or, when the episode was too long ago for either test to be effective, researchers asked about a remembered history of strep. In a striking percentage of cases, the search for strep came up positive. Disagreement is what propels all of science. Proof and disproof seem almost a requirement on the road to consensus. Copernicus's theory that the planets revolve around the sun was not fully accepted until long after his death. Pythagoras and Aristotle each suggested that the world was round, but the idea was not widely accepted for many centuries. Dr. Ignaz Semmelweis was mocked and ostracized for suggesting that by simply washing their hands, doctors could prevent women from dying during childbirth. It would be another quarter-century before Louis Pasteur and Joseph Lister confirmed that destroying germs stops the spread of disease. Much more recently, doctors were exuberant when brain surgery seemed to halt the progression of Parkinson's disease and bone-marrow transplants seemed to beat back breast cancer. But the excitement dimmed as further study found the initial data to be overly optimistic. Perhaps most significant to the discussion of Pandas, strep has been proposed as the cause of a number of conditions over the years, including Kawasaki disease, but subsequent studies have repudiated the theories. ''The history of medicine is full of these examples,'' says Dr. Barron Lerner, a medical historian at Columbia University Medical Center, describing fact later shown to be quackery, flights of fancy that turn out to be fact and many ideas that bounce for decades in the shades of gray between the two.
''What looks like it's there sometimes turns out not to be there,'' Lerner says, ''and what everybody is sure of sometimes turns out not to be certain.'' Swedo and her collaborators published several small preliminary studies during the late 90's, and their first major paper claiming that Pandas was a separate syndrome appeared in 1998 in The American Journal of Psychiatry. Called ''Pediatric Autoimmune Neuropsychiatric Disorders Associated With Streptococcal Infections: Clinical Description of the First 50 Cases,'' it is exactly that, a description of children who develop O.C.D. after exposure to Group A strep. In a way, the description is a tautology -- Pandas is classified as O.C.D. associated with strep, and therefore the only children who qualify for the diagnosis are those who have had recent strep. Swedo took the 109 rapid-onset cases and narrowed those to 50 that met her Pandas criteria, which means that 59 cases were triggered by something other than strep throat. She considers the results important, because at nearly 50 percent, the incidence of strep is far higher than would be expected in the general population and therefore statistically significant. But she agrees that her findings do not explain the cause of all O.C.D., or even all rapid-onset O.C.D. Despite the details still being up in the air, the existence of Pandas was compelling to many doctors. They saw it as inherently logical, and it gave a name to some otherwise mysterious cases that passed through their waiting rooms. ''There is no doubt in my mind,'' says Tamar Chansky, a child psychologist specializing in childhood anxiety disorders and the author of ''Freeing Your Child From Obsessive Compulsive Disorder,'' which devotes a long section to recognizing Pandas.
Not only is it real, says Chansky, who treats several patients who suffer from the disorder, but she has also noticed that each episode is often worse than the one before, creating the possibility that unless these children are treated prophylactically for strep, their O.C.D. episodes could be longer, more intense and more frequent. ''Yes, it is controversial, but I believe it is real,'' agrees Dr. Azra Sehic, a pediatrician in Kingston, Pa. One of the first times Sehic encountered Pandas was when she saw it in one of her patients, Maury Cronauer. Just before Memorial Day in 2003, when she was 6, Maury became ill with strep throat. She was treated with antibiotics and one morning soon after started acting ''odd,'' says her mother, Michelle, who is a nurse. A girl who never worried much about germs, Maury started washing her hands constantly, the most common symptom of O.C.D. By the next day she was hysterical, saying horrid thoughts were in her head. She wasn't sure she loved her parents. She thought she was going to cheat at school or steal something. She wanted the racing thoughts to go away, and at one point her parents found her curled in a ball in the laundry room, her eyes crammed shut and her hands over her ears. Sehic mentioned to Maury's parents that the strep might be the cause of her symptoms. She prescribed a longer course of antibiotics, to eliminate any lingering strep bacteria, which might signal the body to create more antibodies. The O.C.D. went away. A year and a half later, Maury got strep throat again, and the O.C.D. symptoms returned. She is now taking prophylactic penicillin, an approach that is also controversial. ''It is not proven that it will help her, but it is likely that it will, so we are trying,'' Sehic says. As Pandas was becoming widely known, and as doctors began using antibiotics as a first salvo against obsession, there was ever more research under way. Swedo was a co-author of 30 journal articles between 1998 and 2005. 
Across the country other lab groups took up the subject as well, and there are dozens more publications in which Swedo played no role. Some of these merely confirmed the existence of the subgroup Swedo had described. Other studies were designed to take knowledge of Pandas to the next level -- from description to proof. What Swedo had done was identify a group in which two things were true: O.C.D. developed suddenly, and the children had evidence of recent strep. But that does not prove that the strep caused the O.C.D. Nearly all of science is a search for cause and effect -- that A made B happen, that C made B stop. The bane of all science is coincidence. For example, a notable percentage of children develop their first signs of autism soon after a vaccination, and it is tempting to blame the shot for the symptoms. But autism as a rule tends to show itself during the years when children are also scheduled to receive fairly regular immunizations. So the odds are good that the two events will be temporally linked. Separating correlation from causation is where every research road becomes bumpy. ''It's been more complicated to follow up on this than we ever thought it was going to be,'' Rapoport says. There have been studies with results that were remarkably clear-cut -- the plasmapheresis trials, for instance. Plasmapheresis, also known as therapeutic plasma exchange, is essentially a cleansing of the blood, somewhat like dialysis. If strep antibodies were responsible for O.C.D. symptoms in Pandas patients, Swedo theorized, then clearing those antibodies from the bloodstream should prompt improvement. Because the procedure is so invasive, the only subjects enrolled were those in the worst shape. Of the 29 children in the trial, 10 received plasma exchange, 9 received intravenous immunoglobulin and 10 received a placebo. 
According to the results published in the journal Lancet in 1999, the children receiving plasma exchange became markedly better, while those receiving placebo treatment did not. Other studies had results that were somewhat murkier. One tested the theory that you could prevent Pandas by preventing strep. Simply treating strep does not prevent the onset of Pandas since the antibodies have already had a chance to form, which leaves prophylaxis as the most promising form of treatment. That is one way strep was first proved to cause rheumatic fever. When patients who had had rheumatic fever were given daily antibiotics, they did not get strep and they did not get a recurrence of rheumatic fever. Similarly, the hypothesis went, if strep causes Pandas, then preventing patients from getting strep would also prevent a recurrence of an episode of Pandas. So Swedo conducted a prophylaxis study. Half of a group of Pandas patients was put on daily doses of prophylactic antibiotics, while the other half was given a placebo. After several months, the placebo and antibiotic groups were switched. If prophylaxis works, then patients should have developed more, and more intense, episodes of O.C.D. while they were taking the placebo than while taking the antibiotics. But the antibiotic chosen for this particular study was a liquid, and unlike the case with pills, which can be counted, it was difficult for parents to keep track of whether a dose had been missed. Even one missed dose would leave a child vulnerable to strep, and some children in the antibiotic group did get sick. A percentage of those developed Pandas. At the same time, when children in the placebo group became ill, their parents figured out that what they had been dispensing was sugar water and, fearing that the sore throat would lead to a return of Pandas, went and got a prescription for penicillin. Not nearly as many of the control group got strep or Pandas as had been predicted. 
''A lot was learned about parental behavior,'' Swedo says, ''but not a lot about Pandas.'' Roger Kurlan, a professor of neurology at the University of Rochester School of Medicine and Dentistry, is not a man who minces words. ''The only thing that's a proven fact about Pandas,'' he says, ''is that children with these symptoms have been observed.'' Everything else, most specifically the role of strep in causing the symptoms, ''is nothing but speculation.'' Kurlan and his collaborator Edward L. Kaplan, an expert in strep at the University of Minnesota Medical School, have become Swedo's most vocal critics. They describe strep and O.C.D. as two things that are ''true, true and unrelated.'' Yes, it is true that some children develop rapid-onset O.C.D. And yes, it is true that a high percentage of those test positive for strep. But that does not mean that the former is caused by the latter. ''In the prior two weeks, 90 percent of these kids might also have eaten pizza,'' Kurlan says. ''Can I make an association that pizza is linked to O.C.D.?'' ''If 100 kids fall out of a tree and break their arms and we test them for strep, there's going to be a very high percentage of children who have evidence of recent infection,'' echoes Stanford Shulman of Children's Memorial Hospital in Chicago. ''That doesn't mean strep is the reason they fell out of the tree.'' A more likely explanation for the presence of strep in children with Pandas, these doctors say, is that any infection, in fact any type of stress, can cause spikes in O.C.D. behavior. And they cite as an example children with Tourette's syndrome, who frequently have O.C.D. symptoms that ebb and flow with stress. Children with neurological disorders ''are sensitive to any number of things,'' Kurlan says. ''If their dog dies. If their parents are fighting. I've seen O.C.D. get worse with a cold, with hay fever, with pneumonia. 
If there is anything special about strep, I don't think anyone has been able to find it.'' Yes, some children appear to develop symptoms more suddenly than others, he says, but that could be because they have hidden their earlier symptoms from their parents, which O.C.D. patients are known to do. And, yes, he agrees, patients often improve after a positive strep test and a regimen of antibiotics. But because O.C.D. is cyclical, odds are that they would have improved without the test and the medicine anyway. Add to that the fact that some children are strep carriers. They will test positive for the bacteria any time they happen to be cultured, further skewing the cause-and-effect relationship that Swedo is trying to prove. Kurlan says that he understands why the idea of a bacterial cause for disturbing behavior is attractive to parents. A germ can be cured. A germ is not the parents' fault. ''It's a convenient link,'' he says, ''but it's very difficult to show a connection.'' Assigning blame where none exists can be dangerous, Kurlan says. Part of the harm is that of commission -- giving unnecessary medication. Patients like Maury Cronauer, he says, who take penicillin every day to prevent strep in the first place, are making themselves vulnerable to drug allergies and are promoting antibiotic resistance. And he disagrees with Swedo's view that plasmapheresis can be the answer for the most severely affected patients. The procedure leaves children vulnerable to serious infection, he says, which he considers too high a risk given that the symptoms will arguably run their course over time. A more insidious form of harm, however, is that of omission. While turning to antibiotics to cure their child's Pandas, parents might be ignoring other treatments that could alleviate what skeptics believe the child actually has -- plain old O.C.D. 
It may come on suddenly or gradually, in the presence of strep or not; whatever the details, a child who cannot stop washing her hands needs to be treated with one of the many drugs and behavioral-therapy regimens that are successful in battling O.C.D., he says. ''If families are distracted by a simple answer and are therefore not tackling the more serious issues, that would be a disservice,'' Kurlan says. ''Worse, that would be bad medicine.'' Individuals are not statistics, and their stories are not proof. But as I met families and heard their tales, I came to more deeply understand why Swedo is so certain of her theory and Kurlan is so wary of it. One 10-year-old girl in New Jersey, for instance, illustrates the hazy, sometimes illusory, difference between Pandas and O.C.D. The girl's mother (who asked that her name not be used to protect her daughter's privacy) describes two distinct times, at age 4 and age 8, when her bubbly child became riddled with disturbing thoughts: ''My mouth is full of cavities'' or ''The waiter put poison in my soda.'' The first time, the mother says, her daughter's doctors were uncertain of the cause. But the mother, after doing her own research and suspecting that it might be Pandas, called the N.I.M.H. Someone there confirmed her suspicions. Soon after, the girl took antibiotics, and, her mother says, the symptoms went away in seven months. The second time it took almost a year. The girl has had behavioral therapy but is not taking any medication for O.C.D. because her mother does not think it is necessary. The one precaution the family takes is keeping a supply of rapid strep test kits in the house and using them regularly. Learning that her daughter had Pandas saved her own sanity, the woman says. ''It was like drowning in the middle of the ocean, and you grab onto something that will help you float.'' And yet. The second of the girl's two episodes, the mother says, was not brought on by strep but by a virus. 
By Swedo's definition, this would mean that the child did not have Pandas; that her parents think otherwise, Kurlan would argue, shows the danger of a bacterial scapegoat. The mother says that whatever caused the outbreaks -- strep infection, viral infection -- all that matters is that, at the moment, her daughter is fine. But when I ask the girl when she last had her bad thoughts, she tells me, ''Last week.'' Another story of another child, however, shows the damage that can be done if parents start with a psychological rather than a physical assumption. (These parents also didn't want their names used to protect their daughter's privacy.) This little girl was 6 last May, when according to her parents, she changed overnight, becoming clingy and asking the same question over and over and over and over again. Her mother was pregnant at the time, and a psychiatrist her parents knew suggested that their daughter feared the arrival of her new sibling and was looking for attention. So first her parents reassured her. Then they began to punish her, sending her to her room so she could ''think about her behavior and change it,'' her mother says. No one in the family, not even the girl's father, himself a doctor, linked any of this behavior to the raging strep infection she had three weeks earlier. They kept punishing her, and she kept insisting that she didn't want to act this way. ''Please stop punishing me for something I can't help,'' the mother recalls her daughter begging. The parents took her back to the pediatrician's office (they had already been there three times), where they were given a prescription for an antidepressant. Instead of having it filled, they took her to a pediatric psychiatrist, who asked, ''Has she been sick with a sore throat?'' Blood tests showed that her level of strep antibodies was twice as high as it should have been. 
Two months later, after several weeks of antibiotics and several sessions with Tamar Chansky for cognitive behavioral therapy, the little girl was acting like her old self again. From where Roger Kurlan and other doubters sit, the situation looks simple. The theory of Pandas, they say, has not been proved. Until the causal link to strep is made, these children simply have O.C.D., and anyone who thinks differently is fooling himself. From where Swedo and her supporters sit, things look equally simple. They agree that cause and effect has not yet been definitively proved. But they are adamant that what has been proved so far is too significant to be ignored and that further research is more than warranted. In the interim, they argue, logic dictates that any child who develops full-blown O.C.D. seemingly overnight should be given a throat culture or a strep-antibody test before she is sent to a psychiatrist. ''I'm all for empirical stringency,'' Chansky says, ''but in the meantime, there's something so basic that can be done. We're talking about a throat culture and maybe a blood test. What is the downside?'' The downside, Kurlan says, is that science is not supposed to guess. ''We would be testing children as if the results had meaning for their treatment,'' he says, ''and there is insufficient evidence that it does.'' Swedo is still looking for that evidence. Her most recent publication, in the April 2005 issue of Biological Psychiatry, describes a new study of prophylactic antibiotics, one in which administration of the medication was more closely controlled. The results: Those who received the antibiotics saw ''significant decreases'' in strep infections and in ''neuropsychiatric exacerbations'' over the course of a year. Kurlan, in turn, is conducting research of his own, a nationwide study of 80 patients -- half with a history of O.C.D. that meets the Pandas criteria and half with O.C.D. that does not. 
For two years, researchers have been logging the rates of strep and the episodes of O.C.D. in each group. If strep causes Pandas, then O.C.D. symptoms should be intensified in the Pandas group relative to their exposure to strep, while in the control group a variety of system-stressing triggers should cause a spike in symptoms. When the data are compiled and made public later this year, the findings may prove that Swedo is wrong. Or they may instead prove that she is right. Most likely, this latest research will simply lead to more research, as science accumulates its evidence one bit of data at a time. Lisa Belkin is a contributing writer for the magazine. Her last article was about Thomas Ellenson, a special-needs child in a mainstream school. From checker at panix.com Sun May 22 22:07:04 2005 From: checker at panix.com (Premise Checker) Date: Sun, 22 May 2005 18:07:04 -0400 (EDT) Subject: [Paleopsych] Science Week: Animal Behavior: On Social Signals in Rodents Message-ID: I've found a treasure trove of articles from Science Week, fantastic ideas and short, too. So I'll be sending ten of them a day till I've exhausted the trove, in addition to up to ten a day from all other sources. This will go on for about five days. Let me know if this is too much or if you like getting them. Frank Animal Behavior: On Social Signals in Rodents http://scienceweek.com/2005/sw050527-2.htm The following points are made by Leslie B. Vosshall (Current Biology 2005 15:R255): 1) Animals use odors to communicate precise information about themselves to other members of their species. For instance, domesticated dogs intently sample scent marks left by other dogs, allowing them to determine the age, gender, sexual receptivity, and exact identity of the animal that left the mark behind.[1,2] Social communication in rodents is equally robust.[3-5] Male hamsters efficiently choose new female sexual partners over old ones, a phenomenon known as the "Coolidge Effect". 
The onset of estrus and successful fetal implantation in female mice are both modulated by male odors. Mice have the ability to discriminate conspecifics that differ in MHC odortype and can determine whether others of their species are infected by viruses or parasites, presumably a skill of use in selecting a healthy mate. 2) Such social odors are typically produced in urine or secreted from scent glands distributed over the body. Both volatile and non-volatile cues are known to be produced. The accessory olfactory system, comprising the vomeronasal organ and the accessory olfactory bulb, responds largely to non-volatile cues, while the main olfactory system receives volatile signals. Although mammalian pheromones are classically thought to activate the accessory olfactory system, several newly described pheromones are volatile and may act through the main olfactory system. Chemical signals have a number of advantages in social communication over signals that act on other sensory modalities: they are energetically cheap to produce, often being metabolic by-products; they are volatile and can therefore be broadcast within a large territory; and they can continue to emit signal after the animal has moved to a new location. 3) What are the specific, behaviorally active chemical signals present in urine? What sensory neurons respond to these cues? Can a single such compound be behaviorally active? A recent paper by Lin et al [6] succeeds spectacularly in answering all three questions. The authors applied chemistry, electrophysiology and behavior to this problem, and identified biologically active volatiles in male urine that activate both male and female main olfactory bulb mitral cells. They have elucidated the chemical identity of a single such male-specific urine component that both activates olfactory bulb mitral cells and elicits behaviors in female mice. 
The new study builds on earlier work from other laboratories that described regions in the olfactory bulb activated upon exposure to whole mouse urine. References (abridged): 1. Bekoff, M. (2001). Observations of scent-marking and discriminating self from others by a domestic dog (Canis familiaris): tales of displaced yellow snow. Behav. Processes 55, 75-79 2. Mekosh-Rosenbaum, V., Carr, W.J., Goodwin, J.L., Thomas, P.L., D'Ver, A., and Wysocki, C.J. (1994). Age-dependent responses to chemosensory cues mediating kin recognition in dogs (Canis familiaris). Physiol. Behav. 55, 495-499 3. Dulac, C. and Torello, A.T. (2003). Molecular detection of pheromone signals in mammals: from genes to behavior. Nat. Rev. Neurosci. 4, 551-562 4. Novotny, M.V. (2003). Pheromones, binding proteins and receptor responses in rodents. Biochem. Soc. Trans. 31, 117-122 5. Restrepo, D., Arellano, J., Oliva, A.M., Schaefer, M.L., and Lin, W. (2004). Emerging views on the distinct but related roles of the main and accessory olfactory systems in responsiveness to chemosensory signals in mice. Horm. Behav. 46, 247-256 6. Lin, D.Y., Zhang, S.Z., Block, E., and Katz, L.C. (2005). Encoding social signals in the mouse main olfactory bulb. Nature 2005 Feb 20[Epub ahead of print] PMID: 15724148 Current Biology http://www.current-biology.com -------------------------------- Related Material: ANIMAL BEHAVIOR: ON ANIMAL PERSONALITIES The following points are made by S.R. Dall (Current Biology 2004 14:R470): 1) Psychologists recognize that individual humans can be classified according to how they differ in behavioral tendencies [1]. Furthermore, anyone who spends time watching non-human animals will be struck by how, even within well-established groups of the same species, individuals can be distinguished readily by their behavioral predispositions. 
Evolutionary biologists have traditionally assumed that individual behavioral differences within populations are non-adaptive "noise" around (possibly) adaptive average behavior, though since the 1970s it has been considered that such differences may stem from competition for scarce resources [2]. 2) It is becoming increasingly evident, however, that across a range of taxa -- including primates and other mammals as well as birds, fish, insects and cephalopod molluscs -- behavior varies non-randomly among individuals along particular axes [3]. Comparative psychologists and behavioral biologists [3-5] are documenting that individual animals differ consistently in their aggressiveness, activity, exploration, risk-taking, fearfulness and reactivity, suggesting that such variation is likely to have significant ecological and evolutionary consequences [4,5] and hence be a focus for selection. From evolutionary and ecological viewpoints, non-random individual behavioral specializations are coming to define animal personalities [3], although they are also referred to as behavioral syndromes, coping styles, strategies, axes and constructs [3-5]. 3) The evolution of animal personality differences is poorly understood. Ostensibly, it makes sense for animals to adjust their behavior to current conditions, including their own physiological condition, which can result in behavioral differences if local conditions vary between individuals. It is unclear, however, why such differences should persist when circumstances change. In fact, even in homogenous environments interactions between individuals can favor the adoption of alternative tactics. For instance, competition for parental attention in human families may encourage later-born children to distinguish themselves by rebelling. 
In the classic Hawk-Dove game model of animal conflicts over resources, if getting into escalated fights costs more than the resource is worth, a stable mix of pacifist (dove) and aggressive (hawk) tactics can evolve. This is because, as hawks become common, it pays to avoid fighting and play dove, and vice versa. 4) There are, however, two ways in which evolutionarily stable mixtures of tactics can be maintained by such frequency-dependent payoffs: individuals can adopt tactics randomly with a fixed probability that generates the predicted mix in a large population; alternatively, fixed proportions of individuals can play tactics consistently. Only the latter would account for animal personality differences. It turns out that consistent hawks and doves can be favored if the outcomes of fights are observed by future opponents and influence their decisions -- being persistently aggressive will then discourage fights, as potential opponents will expect to face a costly contest if they challenge for access to the resource. At least in theory, therefore, personality differences can evolve when the fitness consequences of behavior depend both on an individual's behavioral history and the behavior of other animals. References (abridged): 1. Pervin, L. and John, O.P. (1999). Handbook of Personality. (Guilford Press) 2. Wilson, D.S. (1998). Adaptive individual differences within single populations. Philos. Trans. R. Soc. Lond. B Biol. Sci 353, 199-205 3. Gosling, S.D. (2001). From mice to men: what can we learn about personality from animal research. Psychol. Bull. 127, 45-86 4. Sih, A., Bell, A.M., Johnson, J.C. and Ziemba, R.E. (2004). Behavioral syndromes: an integrative review. Q. Rev. Biol. in press. 5. Sih, A., Bell, A.M. and Johnson, J.C. (2004). Behavioral syndromes: an ecological and evolutionary overview. Trends Ecol. Evol. in press. 
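The mixed equilibrium in the Hawk-Dove game described above has a simple closed form: with resource value V and fight cost C > V, the payoffs to hawk and dove play equalize when hawk is played a fraction V/C of the time. A minimal sketch, using illustrative payoff values only:

```python
# Hawk-Dove game: expected payoffs at hawk frequency p (C > V):
#   W_hawk = p*(V-C)/2 + (1-p)*V    (hawks split costly fights, beat doves)
#   W_dove = (1-p)*(V/2)            (doves share with doves, flee hawks)
# Setting W_hawk == W_dove gives the stable mix p* = V/C.

def ess_hawk_fraction(V, C):
    """Closed-form evolutionarily stable fraction of hawk play (C > V)."""
    return V / C

def replicator(V, C, p=0.1, steps=2000, dt=0.01):
    """Iterate replicator dynamics; p converges toward V/C when C > V."""
    for _ in range(steps):
        w_hawk = p * (V - C) / 2 + (1 - p) * V
        w_dove = (1 - p) * V / 2
        w_bar = p * w_hawk + (1 - p) * w_dove    # population mean fitness
        p += dt * p * (w_hawk - w_bar)           # replicator equation
        p = min(max(p, 0.0), 1.0)
    return p

V, C = 2.0, 4.0                       # illustrative values, fight cost > prize
print(ess_hawk_fraction(V, C))        # -> 0.5
print(replicator(V, C))
```

Note that the dynamics alone cannot distinguish the two mechanisms the article contrasts: a population in which everyone mixes randomly with probability V/C and one in which a fraction V/C are consistent hawks produce the same aggregate frequencies.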
Current Biology http://www.current-biology.com -------------------------------- Related Material: ANIMAL BEHAVIOR: ON ANTHROPOMORPHISM The following points are made by Clive D. Wynne (Nature 2004 428:606): 1) The complexity of animal behavior naturally prompts us to use terms that are familiar from everyday descriptions of our own actions. Charles Darwin (1809-1882) used mentalistic terms freely when describing, for example, pleasure and disappointment in dogs; the cunning of a cobra; and sympathy in crows. Darwin's careful anthropomorphism, when combined with meticulous description, provided a scientific basis for obvious resemblances between the behavior and psychology of humans and other animals. It raised few objections. 2) The 1890s saw a strong reaction against ascribing conscious thoughts to animals. In the UK, the canon of Conwy Lloyd Morgan (1852-1936) forbade the explanation of animal behavior with "a higher psychical faculty" than demanded by the data. In the US, Edward Thorndike (1874-1949) advocated replacing the use of anecdotes in the study of animal behavior with controlled experiments. He argued that when studied in controlled and reproducible environments, animal behavior revealed simple mechanical laws that made mentalistic explanations unnecessary. 3) This rejection of anthropomorphism was one of the few founding principles of behaviorism that survived the rise of ethological and cognitive approaches to studying animal behavior. But after a century of silence, recent decades have seen a resurgence of anthropomorphism. This movement was led by ethologist Donald Griffin, famous for his discovery of bat sonar. Griffin argued that the complexity of animal behavior implies conscious beliefs and desires, and that an anthropomorphic explanation can be more parsimonious than one built solely on behavioral laws. Griffin postulated, "Insofar as animals have conscious experiences, this is a significant fact about their nature and their lives." 
Animal communication particularly impressed Griffin as implying animal consciousness. 4) Griffin has inspired several researchers to develop ways of making anthropomorphism into a constructive tool for understanding animal behavior. Gordon Burghardt was keen to distinguish the impulse that prompts children to engage in conversations with the family dog (naive anthropomorphism) from "critical anthropomorphism", which uses the assumption of animal consciousness as a "heuristic method to formulate research agendas that result in publicly verifiable data that move our understanding of behavior forward." Burghardt points to the death-feigning behavior of snakes and possums as examples of complex and apparently deceitful behaviors that can best be understood by assuming that animals have conscious states. 5) But anthropomorphism is not a well-developed scientific system. On the contrary, its hypotheses are generally nothing more than informal folk psychology, and may be of no more use to the scientific psychologist than folk physics to a trained physicist. Although anthropomorphism may on occasion be a source of useful hypotheses about animal behavior, acknowledging this does not concede the general utility of an anthropomorphic approach to animal behavior.(1-4) References: 1. Blumberg, M. S. & Wasserman, E. A. Am. Psychol. 50, 133-144 (1995) 2. De Waal, F. B. M. Phil. Top. 27, 255-280 (1999) 3. Mitchell, R. W. et al. Anthropomorphism, Anecdotes and Animals (State Univ. New York Press, New York, 1997) 4. Wynne, C. D. L. Do Animals Think? (Princeton Univ. 
Press, Princeton, New Jersey, 2004) Nature http://www.nature.com/nature From checker at panix.com Sun May 22 22:07:26 2005 From: checker at panix.com (Premise Checker) Date: Sun, 22 May 2005 18:07:26 -0400 (EDT) Subject: [Paleopsych] Science Week: Evolution: On Life on Early Earth Message-ID: Evolution: On Life on Early Earth http://scienceweek.com/2005/sw050513-1.htm The following points are made by Frances Westall (Science 2005 308:366): 1) Fifty years after the discovery of fossil microorganisms in 2-billion-year-old rocks from the Gunflint Formation in Ontario, Canada [1], there is renewed interest in the history of life on the early Earth. This resurgence of interest is in part due to the burgeoning field of astrobiology and the recognition of the importance of studying Earth's early environment and seeking life on other planets. Such renewed interest necessitates a better understanding of the problems surrounding the identification of very ancient traces of life and the development of more sophisticated methods of investigation. Resolution of such problems is crucial if we are to obtain reliable evidence for traces of life on other planets, such as Mars, with any reasonable degree of certainty. 2) When searching for evidence of past life, sedimentary environments are considered the most suitable because they are often formed in association with water, a fundamental requirement for life. There are only three known locations that host exposures of ancient sediments: Isua and Akilia in southwest Greenland, which are 3.8 to 3.7 billion years old (Ga), the Pilbara in northwestern Australia (3.5 to 3.3 Ga), and Barberton in eastern South Africa (3.5 to 3.3 Ga). These sediments, however, formed almost 1 billion years after the formation of the Earth (4.56 Ga). Any older sedimentary deposits, and with them any potential information on the origin of life and its initial evolution, have been destroyed by tectonic activity. 
Of the existing three exposures of ancient sediments, the Isua and Akilia rocks have been so altered by metamorphic changes over the past 3.8 billion years that they are no longer useful for microfossil studies. In contrast, large parts of the Pilbara and Barberton ancient terrains are exquisitely preserved, representing veritable goldmines for microfossil hunters. 3) Until recently, investigations of early life on Earth concentrated on the search for fossils of cyanobacteria [2]. These relatively large microorganisms (from a few to tens of micrometers in size) evolved a sophisticated metabolism for obtaining energy from sunlight (photosynthesis), producing oxygen as a by-product (oxygenic photosynthesis). The attention lavished on these microorganisms stems from early discoveries of fossil cyanobacteria [1], but since then the study of early life has moved into a more contentious, if more realistic, sphere. New questions are being raised: (i) What characteristics of life (structural and biogeochemical) also are produced by abiogenic processes and, consequently, how can we distinguish between signatures of past life and signatures of nonlife? (ii) What is the nature of the earliest preserved microorganisms, and (iii) what environments did they inhabit? The first question is a particularly thorny one -- and is especially pertinent to the search for life on other planets -- because we have no examples of the transition from nonlife to life. The life forms preserved in the oldest terrestrial sediments were already highly evolved compared with the earliest cell and with LUCA (last universal common ancestor). 4) Owing to the difficulties in distinguishing between life and nonlife, no one signature of life -- for example, the fractionated isotopic ratio, the molecular carbon composition, or an isolated microfossil -- should be considered unequivocal evidence for traces of past life. 
Hence, the most realistic approach to identifying evidence of past life is a global strategy that includes relevant environmental (habitat), structural (morphology), and biogeochemical (chemical composition, isotopic fractionation) information. Analyzing the geological context of rocks that potentially contain microfossils is crucial. Also needed is a better understanding of the differences in the spatial and temporal scale of interactions between microbes and their environment, and geological processes. For example, some microenvironments (such as volcanic rocks or hydrothermal veins) provide habitats suitable for microbes even though the overall environment may be inhospitable to life. [3-5] References (abridged): 1. S. A. Tyler, E. S. Barghoorn, Science 119, 606 (1954) 2. J. W. Schopf, Science 260, 640 (1993) 3. M. Brasier et al., Nature 416, 76 (2002) 4. J. M. Garcia-Ruiz et al., Science 302, 1194 (2003) 5. M. Van Zuilen, A. Lepland, G. Arrhenius, Nature 418, 627 (2002) Science http://www.sciencemag.org -------------------------------- Related Material: ORIGIN OF LIFE: IN SEARCH OF THE SIMPLEST CELL The following points are made by Eörs Szathmáry (Nature 2005 433:469): 1) In investigating the origin of life and the simplest possible life forms, one needs to enquire about the composition and working of a minimal cell that has some form of metabolism, genetic replication from a template, and boundary (membrane) production. 2) Identifying the necessary and sufficient features of life has a long tradition in theoretical biology. But living systems are products of evolution, and an answer in very general terms, even if possible, is likely to remain purely phenomenological. Going deeper into mechanisms means having to account for the organization of various processes, and such organization has been realized in several different ways by evolution. 
Eukaryotic cells (such as those from which we are made) are much more complicated than prokaryotes (such as bacteria), and eukaryotes harbor organelles that were once free-living bacteria. A further complication is that multicellular organisms consist of building blocks -- cells -- that are also alive. So aiming for a general model of all kinds of living beings would be fruitless; instead, such models have to be tied to particular levels of biological organization. 3) Basically, there are two approaches to the "minimal cell": the top-down and the bottom-up. The top-down approach aims at simplifying existing small organisms, possibly arriving at a minimal genome. Some research to this end takes Buchnera, a symbiotic bacterium that lives inside aphids, as a rewarding example. This analysis is complemented by an investigation of the duplication and divergence of genes. Remarkably, these approaches converged on the conclusion that genes dealing with RNA biosynthesis are absolutely indispensable in this framework. This may be linked to the idea of life's origins in an "RNA world", although such an inference is far from immediate. 4) Top-down approaches seem to point to a minimum genome size of slightly more than 200 genes. Care should be taken, however, in blindly accepting such a figure. For example, although some gene set A and gene set B may not be common to all bacteria, that does not mean that (A and B) are dispensable. It may well mean that A or B is essential, because the cell has to solve a problem by using either A or B. Only experiments can have the final word on these issues. 5) A top-down approach will not take us quite to the bottom, to the minimal possible cells in chemical terms. All putative cells, however small, will have a genetic code and a means of transcribing and translating that code. 
From checker at panix.com Sun May 22 22:08:07 2005 From: checker at panix.com (Premise Checker) Date: Sun, 22 May 2005 18:08:07 -0400 (EDT) Subject: [Paleopsych] Science Week: Origin of Life: In Search of the Simplest Cell Message-ID: Origin of Life: In Search of the Simplest Cell http://scienceweek.com/2005/sw050325-1.htm The following points are made by Eörs Szathmáry (Nature 2005 433:469): 1) In investigating the origin of life and the simplest possible life forms, one needs to enquire about the composition and working of a minimal cell that has some form of metabolism, genetic replication from a template, and boundary (membrane) production. 2) Identifying the necessary and sufficient features of life has a long tradition in theoretical biology. But living systems are products of evolution, and an answer in very general terms, even if possible, is likely to remain purely phenomenological. Going deeper into mechanisms means having to account for the organization of various processes, and such organization has been realized in several different ways by evolution. Eukaryotic cells (such as those from which we are made) are much more complicated than prokaryotes (such as bacteria), and eukaryotes harbor organelles that were once free-living bacteria. A further complication is that multicellular organisms consist of building blocks -- cells -- that are also alive. So aiming for a general model of all kinds of living beings would be fruitless; instead, such models have to be tied to particular levels of biological organization. 3) Basically, there are two approaches to the "minimal cell": the top-down and the bottom-up. The top-down approach aims at simplifying existing small organisms, possibly arriving at a minimal genome. Some research to this end takes Buchnera, a symbiotic bacterium that lives inside aphids, as a rewarding example.
This analysis is complemented by an investigation of the duplication and divergence of genes. Remarkably, these approaches converged on the conclusion that genes dealing with RNA biosynthesis are absolutely indispensable in this framework. This may be linked to the idea of life's origins in an "RNA world", although such an inference is far from immediate. 4) Top-down approaches seem to point to a minimum genome size of slightly more than 200 genes. Care should be taken, however, in blindly accepting such a figure. For example, although some gene set A and gene set B may not be common to all bacteria, that does not mean that (A and B) are dispensable. It may well mean that A or B is essential, because the cell has to solve a problem by using either A or B. Only experiments can have the final word on these issues. 5) A top-down approach will not take us quite to the bottom, to the minimal possible cells in chemical terms. All putative cells, however small, will have a genetic code and a means of transcribing and translating that code. Given the complexity of this system, it is difficult to believe, either logically or historically, that the simplest living chemical system could have had these components. 6) The bottom-up approach aims at constructing artificial chemical supersystems that could be considered alive. No such experimental system exists yet; at least one component is always missing. Metabolism seems to be the stepchild in the family: what most researchers in the field used to call metabolism is usually a trivial outcome of the fact that both template replication and membrane growth need some material input. This input is usually simplified to a conversion reaction from precursors to products. Nature http://www.nature.com/nature -------------------------------- Related Material: ORIGIN OF LIFE: ON TRANSITIONS FROM NONLIVING TO LIVING MATTER The following points are made by S. 
Rasmussen et al (Science 2004 303:963): 1) All life forms are composed of molecules that are not themselves alive. But in what ways do living and nonliving matter differ? How could a primitive life form arise from a collection of nonliving molecules? The transition from nonliving to living matter is usually raised in the context of the origin of life. But some researchers(1) have recently taken a broader view and asked how simple life forms could be synthesized in the laboratory. The resulting artificial cells (sometimes called protocells) might be quite different from any extant or extinct form of life, perhaps orders of magnitude smaller than the smallest bacterium, and their synthesis need not recapitulate life's actual origins. A number of complementary studies have been steadily progressing toward the chemical construction of artificial cells (2-5). 2) There are two approaches to synthesizing artificial cells. The top-down approach aims to create them by simplifying and genetically reprogramming existing cells with simple genomes. The more general and more challenging bottom-up approach aims to assemble artificial cells from scratch using nonliving organic and inorganic materials. 3) Although the definition of life is notoriously controversial, there is general agreement that a localized molecular assemblage should be considered alive if it continually regenerates itself, replicates itself, and is capable of evolving. Regeneration and replication involve transforming molecules and energy from the environment into cellular aggregations, and evolution requires heritable variation in cellular processes. The current consensus is that the simplest way to achieve these characteristics is to house informational polymers (such as DNA and RNA) and a metabolic system that chemically regulates and regenerates cellular components within a physical container (such as a lipid vesicle). 
4) Two recent workshops(1) reviewed the state of the art in artificial cell research, much of which focuses on self-replicating lipid vesicles. David Deamer (Univ. of California, Santa Cruz) and Pier Luigi Luisi (ETH Zurich) each described the production of lipids using light energy, and the template-directed self-replication of RNA within a lipid vesicle. In addition, Luisi demonstrated the polymerization of amino acids into proteins on the vesicle surface, which acts as a catalyst for the polymerization process. The principal hurdle remains the synthesis of efficient RNA replicases and related enzymes entirely within an artificial cell. Martin Hanczyc (Harvard Univ.) showed how the formation of lipid vesicles can be catalyzed by encapsulated clay particles with RNA adsorbed on their surfaces. This suggests that encapsulated clay could catalyze both the formation of lipid vesicles and the polymerization of RNA. References (abridged): 1. http://www.ees.lanl.gov/protocells 2. C. Hutchinson et al., Science 286, 2165 (1999) 3. M. Bedau et al., Artif. Life 6, 363 (2000) 4. J. Szostak et al., Nature 409, 387 (2001) 5. A. Pohorille, D. Deamer, Trends Biotechnol. 20, 123 (2002) Science http://www.sciencemag.org -------------------------------- ORIGIN OF LIFE: MODELS OF PRIMITIVE CELLULAR COMPARTMENTS The following points are made by M.M. Hanczyc et al (Science 2003 302:618): 1) The bilayer membranes that surround all present-day cells and act as boundaries are thought to have originated in the spontaneous self-assembly of amphiphilic molecules into membrane vesicles (1-5). Simple amphiphilic molecules have been found in meteorites and have been generated under a wide variety of conditions in the laboratory, ranging from simulated ultraviolet irradiation of interstellar ice particles to hydrothermal processing under simulated early Earth conditions. 
2) Molecules such as simple fatty acids can form membranes when the pH is close to the pK[sub-a] (K[sub-a] is the acid dissociation equilibrium constant) of the fatty acid carboxylate group in the membrane (3). Hydrogen bonding between protonated and ionized carboxylates may confer some of the properties of more complex lipids with two acyl chains, thus allowing the formation of a stable bilayer phase. Fatty acid vesicles may be further stabilized (to a wider range of pH and even to the presence of divalent cations) by the admixture of other simple amphiphiles such as fatty alcohols and fatty acid glycerol esters. Recent studies have shown that saturated fatty acid/fatty alcohol mixtures with carbon chain lengths as short as 9 can form vesicles capable of retaining ionic fluorescent dyes, DNA, and proteins (4). 3) Vesicles consisting of simple amphiphilic molecules could have existed under plausible prebiotic conditions on the early Earth, where they may have produced distinct chemical micro-environments that could retain and protect primitive oligonucleotides while potentially allowing small molecules such as activated mononucleotides to diffuse in and out of the vesicle. Furthermore, compartmentalization of replicating nucleic acids (or some other form of localization) is required to enable Darwinian evolution by preventing the random mixing of genetic polymers, thus coupling genotype and phenotype. If primordial nucleic acids assembled on mineral surfaces, the question arises as to how they eventually came to reside within membrane vesicles. Although dissociation from the mineral surface followed by encapsulation within newly forming vesicles (perhaps in a different location under different environmental conditions) is certainly a possibility, a direct route would be more satisfying and perhaps more efficient. 4) In summary: The clay montmorillonite is known to catalyze the polymerization of RNA from activated ribonucleotides. 
The authors report that montmorillonite accelerates the spontaneous conversion of fatty acid micelles into vesicles. Clay particles often become encapsulated in these vesicles, thus providing a pathway for the prebiotic encapsulation of catalytically active surfaces within membrane vesicles. In addition, RNA adsorbed to clay can be encapsulated within vesicles. Once formed, such vesicles can grow by incorporating fatty acid supplied as micelles and can divide without dilution of their contents by extrusion through small pores. These processes mediate vesicle replication through cycles of growth and division. The authors suggest the formation, growth, and division of the earliest cells may have occurred in response to similar interactions with mineral particles and inputs of material and energy. References (abridged): 1. J. M. Gebicki, M. Hicks, Nature 243, 232 (1973) 2. J. M. Gebicki, M. Hicks, Chem. Phys. Lipids 16, 142 (1976) 3. W. R. Hargreaves, D. W. Deamer, Biochemistry 17, 3759 (1978) 4. C. L. Apel, D. W. Deamer, M. N. Mautner, Biochim. Biophys. Acta 1559, 1 (2002) 5. P.-A. Monnard, C. L. Apel, A. Kanavarioti, D. W. Deamer, Astrobiology 2, 139 (2002) Science http://www.sciencemag.org From checker at panix.com Sun May 22 22:08:27 2005 From: checker at panix.com (Premise Checker) Date: Sun, 22 May 2005 18:08:27 -0400 (EDT) Subject: [Paleopsych] Science Week: Mathematics: On Random Numbers Message-ID: Mathematics: On Random Numbers http://scienceweek.com/2005/sw050527-6.htm The following points are made by Gianpietro Malescio (Nature 2005 434:1073): 1) Making predictions is one of the main goals of science. Traditionally this implies writing down, and solving, the equations governing the system under investigation. When this method proves impossible we often turn to a stochastic approach. The term "stochastic" encompasses a variety of techniques that are based on a common feature: using unpredictable entities --random numbers -- to make predictions possible. 
2) The origins of stochastic simulation can be traced to an experiment performed in the 18th century by Georges Louis Leclerc, Comte de Buffon (1707-1788). Leclerc repeatedly tossed a needle at random on to a board ruled with parallel straight lines. From his observations, he derived the probability that the needle would intersect a line. Subsequently, Pierre Simon de Laplace (1749-1827) saw in this experiment a way to obtain a statistical estimate of pi. 3) Later, the advent of mechanical calculating machines allowed numerical "experiments" such as that performed in 1901 by William Thomson (Lord Kelvin) (1824-1907) to demonstrate the equipartition theorem of the internal energy of a gas. Enrico Fermi (1901-1954) was probably the first to apply statistical sampling to research problems, while studying neutron diffusion in the early 1930s. During their work on the Manhattan Project, Stanislaw Ulam (1909-1984), John von Neumann (1903-1957) and Nicholas Metropolis (1915-1999) rediscovered Fermi's method. They established the use of random numbers as a formal methodology, generating the "Monte Carlo method" -- named after the city famous for its gambling facilities. Today, stochastic simulation is used to study a wide variety of problems (many of which are not at all probabilistic), ranging from the economy to medicine, and from traffic flow to biochemistry or the physics of matter. 4) When the temporal evolution of a system cannot be studied by traditional means, random numbers can be used to generate an "alternative" evolution. Starting with a possible configuration, small, random changes are introduced to generate a new arrangement: whenever this is more stable than the previous one, it replaces it, usually until the most stable configuration is reached. Randomness cannot tell us where the system likes to go, but allows the next best thing: exploration of the space of the configurations while avoiding any bias that might exclude the region of the possible solution.
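Both procedures sketched above -- Buffon's needle experiment (point 2) and the accept-a-change-only-if-more-stable random search (point 4) -- fit in a few lines of code. The sketch below is an illustrative reconstruction, not from the article; for the needle, length is set equal to the line spacing, so the classic crossing probability is 2/pi, and the search example minimizes a toy one-dimensional "energy":

```python
import math
import random

def buffon_pi(trials: int, seed: int = 0) -> float:
    """Estimate pi by tossing a unit-length needle onto lines spaced one unit apart."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        centre = rng.uniform(0.0, 0.5)         # distance from needle centre to nearest line
        theta = rng.uniform(0.0, math.pi / 2)  # needle angle relative to the lines
        if centre <= 0.5 * math.sin(theta):    # half-projection reaches the line: a crossing
            hits += 1
    # P(cross) = 2/pi when length equals spacing, so pi ~ 2 * trials / hits.
    return 2.0 * trials / hits

def random_search(energy, x0, steps=5000, scale=0.1, seed=0):
    """Point 4's scheme: keep a small random change whenever it is more stable."""
    rng = random.Random(seed)
    x, e = x0, energy(x0)
    for _ in range(steps):
        trial = x + rng.gauss(0.0, scale)      # small random change to the configuration
        e_trial = energy(trial)
        if e_trial < e:                        # more stable: it replaces the previous one
            x, e = trial, e_trial
    return x

print(buffon_pi(1_000_000))                         # roughly 3.14
print(random_search(lambda x: (x - 2.0) ** 2, 10.0))  # close to 2.0
```

Note the limitation the article points out: the greedy search above only descends, so it can stall in a local minimum; importance sampling or annealing-style occasional uphill moves address that.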
If we are able to guess the probability distribution of the configurations, then instead of conducting a uniform random search we can perform an "importance" sampling, focusing our search on where the solution is more likely to be found. 5) Optimization problems are often solved using stochastic algorithms that mimic biological evolution. Although it may sound vaguely unpleasant, we come from a random search. In nature, new genetic variants are introduced through random changes (mutations) in the genetic pool while additional variability is provided by the random mixing of parent genes (by recombination). Randomness allows organisms to explore new "designs" which the environment checks for fitness, selecting those most suited to survival. But the optimal solution is not found once and for all. A continually changing environment means evolution is an on-going process; it does not produce the "perfect" organism, but rather a dynamic balance of myriad organisms within an ecosystem. References: 1. Metropolis, N. Los Alamos Sci. 15, 125-130 (1987) 2. Chaitin, G. J. Sci. Am. 232, 47-52 (1975) 3. Calude, C. S. & Chaitin, G. J. Nature 400, 319-320 (1999) Nature http://www.nature.com/nature From checker at panix.com Sun May 22 22:08:53 2005 From: checker at panix.com (Premise Checker) Date: Sun, 22 May 2005 18:08:53 -0400 (EDT) Subject: [Paleopsych] SW: On Advanced Ape Technology Message-ID: Animal Behavior: On Advanced Ape Technology http://scienceweek.com/2005/sc050225-1.htm [A tremendous trove here. Thanks to Eugen for forwarding Science Week's digest, though I don't recall looking at it before. I'm going into their archive, too. Lots of goodies, indeed! And the articles are short.] The following points are made by W.C. McGrew (Current Biology 2004 14:R1046): 1) The use of tools by animals to solve natural problems, especially in foraging, is well-known natural history [1].
The manufacture of tools, rather than merely using found objects, is far less common, being restricted to great apes and a few bird species [2]. Only chimpanzees and orangutans have tool-kits, that is, repertoires of different tool-types, that vary across populations [3-5]. Now comes the first report [6] of customary use of tool-sets -- two or more different types of tool used in sequence to achieve a single goal -- by a community of wild chimpanzees (Pan troglodytes) in the Congo Basin. The finding emerged from the innovative use of modern technology in field primatology: "spy-cams" to monitor shy apes in the jungle. 2) Non-human use of objects as tools is widespread: woodpecker finches prise out grubs with twigs, sea otters crack molluscs on stones, digger wasps tamp down burrow entrances with pebbles [1]. These animals, however, are "one-shot wonders", that is, each species is good at only one type of tool-use, and the objects are collected, used, and discarded on the spot, unaltered. Such simple acts require no inventiveness, being behavioral adaptations to fit certain task demands. 3) The fabrication of objects by transforming raw materials into instruments is a cognitive step forward, in that reshaping an item by reduction, combination, extension, and so on renders it more efficient [2]. Such elementary technology must be learned from others who are more proficient than the novice. Social learning probably underlies the making of palm leaf probes by New Caledonian crows, who use the resulting hooked instrument to extract insect larvae. Wild chimpanzees use twig, vine, bark, stem, or leaf-stalk probes to fish out ants, termites or honey from their arboreal or terrestrial cavities. Such flexible probes are clipped, peeled, stripped, split, etc. to the necessary specifications to do the job. Goodall first described this 40 years ago in the case of termite fishing by the chimpanzees of Gombe, Tanzania. 
4) As long-term studies of wild chimpanzees proceeded, first in East Africa, then in West Africa, it became clear that populations of apes make and use a variety of tools, in foraging, social life, and body maintenance [2]. Moreover, each population has a different repertoire of elementary technology: some types are universal, such as bed-making; others are regional, such as nut-cracking in far West Africa [5]; and still others are unique to a single population, such as pestle-pounding at Bossou, Guinea. Systematic comparison across populations shows patterns that resemble those found cross-culturally when human societies are compared, for example, in that the present distribution of a custom reflects past diffusion from a likely single invention. Thus, a population's tool-kit is a technological profile of its material culture. 5) Further field studies revealed that chimpanzees spontaneously use tool-sets: a sequence of two or more tools needed to accomplish a single task. The first, anecdotal description was of an A-B-C-D sequence, in which a stout chisel-stick (A) was used to batter the entrance to a bees' nest containing honey. After this, a more pointed chisel-stick (B) weakened the barrier at a single point; then a bodkin-like stick (C) pierced the barrier; and finally a slender, flexible probe (D) dipped out the honey. Tool-sets are cognitively demanding, in that a correct ordering is required: thus, a dipstick is only useful once the honey reservoir has been made accessible by a bodkin. Later reports of tool-sets elsewhere emerged, but behavioral data were scarce, with most information coming (quasi-archaeologically) from the tools left behind. 6) Sanz et al.[6] have now reported the use of two new tool-sets by chimpanzees in the Goualougo Triangle area of the Republic of Congo, in Central Africa. In this Congo Basin rainforest, wild chimpanzees tap termites from their earthen homes, using techniques that relate to the shape of the insects' mounds.
For emergent, castle-like mounds, the apes perforate with a twig the mound's surface to re-open exit holes used by the termites. Having gained access, they use a slender probe to fish out the insects for food. This technique was known from scanty evidence, mostly of tools found at mounds, but Sanz et al.[6] have supplied the behavioral data to go with the artifacts. References (abridged): 1. Beck, B.B. (1980). Animal Tool Behavior. (New York: Garland STPM Press) 2. McGrew, W.C. (1992). Chimpanzee Material Culture. Implications for Human Evolution. (Cambridge: Cambridge University Press) 3. Van Schaik, C.P., Ancrenaz, M., Borgen, G., Galdikas, B., Knott, C.D., Singleton, I., Suzuki, A., Utami, S.S. and Merrill, M. (2003). Orangutan cultures and the evolution of material culture. Science 299, 102-105 4. McGrew, W.C., Tutin, C.E.G. and Baldwin, P.J. (1979). Chimpanzees, tools, and termites: cross-cultural comparisons of Senegal, Tanzania, and Rio Muni. Man 14, 185-214 5. Boesch, C. and Boesch, H. (1990). Tool use and tool making in wild chimpanzees. Folia Primat. 54, 86-99 6. Sanz, C., Morgan, D. and Gulick, S. (2004). New insights into chimpanzees, tools, and termites from the Congo Basin. Am. Nat. 164, 562-581 Current Biology http://www.current-biology.com -------------------------------- Related Material: COGNITIVE SCIENCE: NUMBERS AND COUNTING IN A CHIMPANZEE Notes by ScienceWeek: In this context, let us define "animals" as all living multi-cellular creatures other than humans that are not plants. In recent decades it has become apparent that the cognitive skills of many animals, especially non-human primates, are greater than previously suspected. Part of the problem in research on cognition in animals has been the intrinsic difficulty in communicating with or testing animals, a difficulty that makes the outcome of a cognitive experiment heavily dependent on the ingenuity of the experimental approach. 
Another problem is that when investigating the non-human primates, the animals whose cognitive skills are closest to those of humans, one cannot do experiments on large populations because such populations either do not exist or are prohibitively expensive to maintain. The result is that in the area of primate cognitive research reported experiments are often "anecdotal", i.e., experiments involving only a few or even a single animal subject. But anecdotal evidence can often be of great significance and have startling implications: a report, even in a single animal, of important abstract abilities, numeric or conceptual, is worthy of attention, if only because it may destroy old myths and point to new directions in methodology. In 1985, T. Matsuzawa reported experiments with a female chimpanzee that had learned to use Arabic numerals to represent numbers of items. This animal (which is still alive and whose name is "Ai") can count from 0 to 9 items, which she demonstrates by touching the appropriate number on a touch-sensitive monitor. Ai can also order the numbers from 0 to 9 in sequence. The following points are made by N. Kawai and T. Matsuzawa (Nature 2000 403:39): 1) The authors report an investigation of Ai's memory span by testing her skill in numerical tasks. The authors point out that humans can easily memorize strings of codes such as phone numbers and postal codes if they consist of up to 7 items, but above this number of items, humans find memorization more difficult. This "magic number 7" effect, as it is known in human information processing, represents an apparent limit for the number of items that can be handled simultaneously by the human brain. 2) The authors report that the chimpanzee Ai can remember the correct sequence of any 5 numbers selected from the range 0 to 9.
3) The authors relate that in one testing session, after choosing the first correct number in a sequence (all other numbers still masked), "a fight broke out among a group of chimpanzees outside the room, accompanied by loud screaming. Ai abandoned her task and paid attention to the fight for about 20 seconds, after which she returned to the screen and completed the trial without error." 4) The authors conclude: "Ai's performance shows that chimpanzees can remember the sequence of at least 5 numbers, the same as (or even more than) preschool children. Our study and others demonstrate the rudimentary form of numerical competence in non-human primates." Nature http://www.nature.com/nature -------------------------------- Related Material: EVOLUTION: ON THE MENTALITY OF CROWS The following points are made by N.J. Emery and N.S. Clayton (Science 2004 306:1903): 1) Throughout folklore, the corvids (crows, jays, ravens, and jackdaws) have been credited with intelligence. Recent experiments investigating the cognitive abilities of corvids have begun to reveal that this reputation has a factual basis. These studies have found that some corvids are not only superior in intelligence to birds of other avian species (perhaps with the exception of some parrots), but also rival many nonhuman primates. 2) Traditionally, studies of complex cognition have focused on monkeys and apes [1]. However, there is no reason to assume that complex cognition is restricted only to the primates [2]. Indeed, the social intelligence hypothesis [3] states that intelligence evolved not to solve physical problems, but to process and use social information, such as who is allied with whom and who is related to whom, and to use this information for deception [4]. There is evidence that some other large-brained social animals, such as cetaceans, demonstrate similar levels of intelligence as primates [5]. 
Corvids also appear to meet many of the criteria for the use of social knowledge in their interactions with conspecifics. 3) The crow has a brain significantly larger than would be predicted for its body size, and it is relatively the same size as the chimpanzee brain. The relative size of the forebrain in corvids is significantly larger than in other birds (with the exception of some parrots) [2], particularly those areas thought to be analogous to the mammalian prefrontal cortex: the nidopallium and mesopallium. This enlargement of the "avian prefrontal cortex" may reflect an increase in primate-like intelligence in corvids. 4) To fully appreciate how corvid and ape psychology are similar, it is important to describe how corvids may represent their physical and social worlds, and how these forms of mental representation may be similar or dissimilar to those used by apes in solving similar problems. The authors use the term "understanding" to convey the idea that corvids and apes reason about a domain (physical or social) in a way that transcends basic associative and reinforcement processes. 5) Tool use is defined as "the use of an external object as a functional extension of mouth, beak, hand, or claw, in the attainment of an immediate goal". Although many birds, primates, and other animals use tools, it is not clear whether any of these species appreciate how tools work and the forces underlying their function. Perhaps the most convincing candidates are New Caledonian crows, who display extraordinary skills in making and using tools to acquire otherwise unobtainable foods. In the wild, they make two types of tools. Hook tools are crafted from twigs by trimming and sculpting until a functional hook has been fashioned and are used to poke out insect larvae from holes in trees using slow deliberate movements. 
6) The crows also manufacture stepped-cut Pandanus leaves, which are used to probe for prey under leaf detritus, using a series of rapid back-and-forth movements or slow deliberate movements that spear the prey onto the sharpened end or the barbs of the leaf, if the prey is located in a hole. These tools are consistently made to a standardized pattern and are carried around on foraging expeditions. The manufacture of stepped tools appears to be lateralized at the population level and tool use at the individual level. 7) There are many aspects of corvid and ape cognition that appear to use the same cognitive tool kit: causal reasoning, flexibility, imagination, and prospection. The authors suggest that nonverbal complex cognition may be constructed through a combination of these cognitive tools. Although corvids and apes may share these cognitive tools, this convergent evolution of cognition has not been built on a convergent evolution of brains. Although the ape neocortex and corvid nidopallium are both significantly enlarged, their structures are very different, with the ape neocortex having a laminar arrangement and the avian pallium having a nuclear arrangement [2]. It is unclear what implications these structural differences have. However, cognition in corvids and apes must have evolved through a process of divergent brain evolution with convergent mental evolution. The authors suggest this conclusion has important implications for understanding the evolution of intelligence, given that it can evolve in the absence of a prefrontal cortex. References (abridged): 1. M. Tomasello, J. Call, Primate Cognition (Oxford Univ. Press, New York, 1997) 2. N. J. Emery, N. S. Clayton, in Comparative Vertebrate Cognition: Are Primates Superior to Non-Primates? L. J. Rogers, G. Kaplan, Eds. (Kluwer Academic, New York, 2004), pp. 3-55 3. N. K. Humphrey, in Growing Points in Ethology, P. P. G. Bateson, R. A. Hinde, Eds. (Cambridge Univ. Press, Cambridge, 1976), pp. 303-317 4. R. 
W. Byrne, A. Whiten, Machiavellian Intelligence: Social Evolution in Monkeys, Apes and Humans (Clarendon Press, Oxford, 1988) 5. L. Marino, Brain Behav. Evol. 59, 21 (2002) Science http://www.sciencemag.org From checker at panix.com Sun May 22 22:09:13 2005 From: checker at panix.com (Premise Checker) Date: Sun, 22 May 2005 18:09:13 -0400 (EDT) Subject: [Paleopsych] SW: Dark Matter and the Early Universe Message-ID: Cosmology: Dark Matter and the Early Universe http://scienceweek.com/2005/sa050225-5.htm The following points are made by J. Diemand et al (Nature 2005 433:389): 1) The Universe was nearly smooth and homogeneous before a redshift of z = 100, about 20 million years after the Big Bang[1]. After this epoch, the tiny fluctuations imprinted upon the matter distribution during the initial expansion began to collapse because of gravity. The properties of these fluctuations depend on the unknown nature of dark matter[2-4], the determination of which is one of the biggest challenges in present-day science[5]. 2) The cosmological parameters of our Universe and initial conditions for structure formation have recently been measured via a combination of observations, including the cosmic microwave background (CMB), distant supernovae, and the large-scale distribution of galaxies. Cosmologists now face the outstanding problem of understanding the origin of structure in the Universe from its strange mix of particles and vacuum energy. 3) Most of the mass of the Universe must be made up of a kind of non-baryonic particle[1] that remains undetected in laboratory experiments. The leading candidate for this "dark matter" is the neutralino, the lightest supersymmetric particle, which is predicted to solve several key problems in the standard model for particle physics[5]. This cold dark matter (CDM) candidate is not completely collisionless. 
It can collide with baryons, thus revealing its presence in laboratory detectors, although the cross-section for this interaction is extremely small. In a cubic-meter detector containing 10^(30) baryon particles, only a few collisions per day are expected from the 10^(13) dark-matter particles that flow through the experiment as the Earth moves through the Galaxy. 4) The neutralino is its own anti-particle, and can self-annihilate, creating a shower of new particles including gamma-rays[5]. The annihilation rate increases as the density squared; the central regions of the Galaxy and its satellites will therefore give the strongest signal. However, the expected rate is very low -- the flux of photons on Earth is the same as we would receive from a single candle placed on Pluto. Numerous experiments using these effects are under way that may detect the neutralino within the next decade. Furthermore, in the next few years the Large Hadron Collider (LHC) at CERN will confirm or rule out the concepts of supersymmetry (SUSY). 5) The authors report supercomputer simulations of the concordance cosmological model, which assumes neutralino dark matter (at present the preferred candidate), and find that the first objects to form are numerous Earth-mass dark-matter haloes about as large as the Solar System. They are stable against gravitational disruption, even within the central regions of the Milky Way. The authors expect over 10^(15) to survive within the Galactic halo, with one passing through the Solar System every few thousand years. The nearest structures should be among the brightest sources of gamma-rays (from particle-particle annihilation). References (abridged): 1. Peebles, P. J. E. Large-scale background temperature and mass fluctuations due to scale-invariant primeval perturbations. Astrophys. J. 263, L1-L5 (1982) 2. Hofmann, S., Schwarz, D. J. & Stöcker, H. Damping scales of neutralino cold dark matter. Phys. Rev. D 64, 083507 (2001) 3.
Berezinsky, V., Dokuchaev, V. & Eroshenko, Y. Small-scale clumps in the galactic halo and dark matter annihilation. Phys. Rev. D 68, 103003 (2003) 4. Green, A. M., Hofmann, S. & Schwarz, D. J. The power spectrum of SUSY-CDM on sub-galactic scales. Mon. Not. R. Astron. Soc. 353, L23-L27 (2004) 5. Jungman, G., Kamionkowski, M. & Griest, K. Supersymmetric dark matter. Phys. Rep. 267, 195-373 (1996) Nature http://www.nature.com/nature -------------------------------- Related Material: ASTROPHYSICS: DARK MATTER AND DARK ENERGY The following points are made by Sean Carroll (Nature 2004 429:27): 1) Humans seem to be extremely unimportant in the grand scheme of the Universe. This insight is often associated with Copernicus (1473-1543), who suggested (although not for the first time) that the Earth was not the center of the Solar System. A bigger step towards calibrating our insignificance was taken by Edwin Hubble (1889-1953), who determined that astrophysical nebulae are really separate galaxies in their own right. We now think there are about one hundred billion such galaxies in the observable Universe, with perhaps one hundred billion stars per galaxy. 2) But a metaphysically distinct blow to our importance came with the introduction of the idea of dark matter -- we are not even made of the same stuff that comprises most of the Universe. The need for dark matter, in the sense of "matter we cannot see", was noticed in 1933 by Fritz Zwicky (1898-1974), when studying the dynamics of the Coma cluster of galaxies. When galaxies are orbiting each other, their typical velocities will depend on the total mass involved, but when we observe clusters of galaxies, the velocities are consistently much higher than we would expect from the mass we actually see in stars and gas. Vera Rubin and others have driven the point home by examining individual galaxies. As we move away from the central galactic region, the velocity of orbiting gas becomes systematically higher than it should be. 
These observations imply the existence of an extended, massive halo of dark matter. Indeed, the picturesque galaxies we see in astronomical images are really just splashes of visible matter collected at the bottom of these more substantial, yet invisible, halos. 3) Of course, the air we breathe is invisible and transparent, just like dark matter. A sensible first guess might be that the extra mass we infer is ordinary matter, just in some form we cannot see. But we have independent ways to measure the amount of ordinary matter, through its influence on the early-Universe processes of primordial nucleosynthesis and the evolution of density perturbations. These constraints imply that ordinary matter falls far short of what is needed to explain galaxies and clusters (perhaps one-fifth of the total). Not only is dark matter "dark", it is a completely new kind of particle -- something outside the standard model of particle physics, something not yet detected in any laboratory here on Earth. 4) And we have not even mentioned dark energy -- the mysterious form of energy that is smoothly distributed throughout space and (at least approximately) constant through time. Independent observations of high-redshift supernovae, the microwave background radiation, and the distribution of large-scale structure all require the existence of dark energy. The featureless, persistent nature of dark energy convinces us that it is not even a particle at all. About 70% of our current Universe is dark energy and 25% is dark matter. This leaves all the stuff we have directly observed at a paltry 5% of the whole Universe. References: 1. Krauss, L. Quintessence: The Mystery of the Missing Mass (Basic Books, New York, 2001) 2. Peebles, P. J. E. From Precision Cosmology to Accurate Cosmology online at http://arxiv.org/abs/astro-ph/0208037 3. Rees, M. Our Cosmic Habitat (Princeton Univ. 
Press, Princeton, 2003) Nature http://www.nature.com/nature -------------------------------- Related Material: ASTROPHYSICS: ON THE NATURE OF DARK MATTER The following points are made by K. Zioutas et al (Science 2004 306:1485): 1) Astrophysical observations reveal that galaxies and clusters of galaxies are gravitationally held together by vast halos of dark (i.e., nonluminous) matter. Theoretical reasoning points to two leading candidates for the particles that may make up this mysterious form of matter: weakly interacting massive particles (WIMPs) and theoretical particles called "axions". Particle accelerators have not yet detected either of the two particles, but recent astrophysical observations provide hints that both particles may exist in the Universe, although definitive data are still lacking. Dark matter need not consist of only one of these two types of particles. 2) Precise measurements of the cosmic microwave background have shown that dark matter makes up about 25% of the energy budget of the Universe; visible matter in the form of stars, gas, and dust only contributes about 4%. However, the nature of dark matter remains a mystery. To explain it, we must go beyond the standard model of elementary particles and look toward more exotic types of particles. 3) One such particle is the neutralino, a WIMP that probably weighs as much as 1000 hydrogen atoms (henceforth, we refer to the neutralino as a generic WIMP). Neutralinos are postulated by supersymmetric models, which extend the standard model to higher energies. To date, no neutralinos have been created in particle accelerators, but in the future they may be produced in the world's most powerful particle accelerator, the Large Hadron Collider currently being built at CERN. A recent precise measurement of the magnetic dipole moment of the muon favors the existence of new particles such as neutralinos. 
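The quoted mass scale ("as much as 1000 hydrogen atoms") can be put into the collider units used in particle physics with a one-line conversion. A minimal sketch, assuming only the factor of 1000 from the text and the standard hydrogen (essentially proton) rest energy:

```python
# Back-of-envelope conversion of the quoted neutralino mass into
# collider units. The factor of 1000 is from the text above; the
# hydrogen rest energy is a standard physical constant.

M_H_GEV = 0.938  # rest energy of a hydrogen atom (~one proton), in GeV

neutralino_mass_gev = 1000 * M_H_GEV
print(f"~{neutralino_mass_gev:.0f} GeV (~{neutralino_mass_gev / 1000:.2f} TeV)")
# -> ~938 GeV (~0.94 TeV)
```

A particle near 1 TeV sits within the energy reach of a multi-TeV machine, which is why the Large Hadron Collider is expected to test this candidate.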
4) Another possibility for the direct detection of neutralinos is to seek evidence for the tiny nuclear recoils produced by interactions between neutralinos (created when the Universe was very young and very hot) and atomic nuclei. Because such interactions are rare and the effects small, they can only be detected in experiments that are conducted underground, where the high-energy cosmic radiation is suppressed by several orders of magnitude. 5) Astrophysical observations could provide indirect evidence for neutralinos. On astrophysical scales, collisions of neutralinos with ordinary matter are believed to slow them down. The scattered neutralinos, whose velocity is degraded after each collision, may then be gravitationally trapped by objects such as the Sun, Earth, and the black hole at the center of the Milky Way galaxy, where they can accumulate over cosmic time scales. Such dense agglomerates could therefore yield an enhanced signal for the postulated neutralinos of cosmic origin.(1-5) References (abridged): 1. P. Jean et al., Astron. Astrophys. 407, L55 (2003) 2. F. Aharonian et al., Astron. Astrophys. 425, L13 (2004) 3. R. Irion, Science 305, 763 (2004) 4. R. D. Peccei, H. R. Quinn, Phys. Rev. Lett. 38, 1440 (1977) 5. R. D. Peccei, H. R. Quinn, Phys. Rev. D 16, 1791 (1977) From checker at panix.com Sun May 22 22:09:25 2005 From: checker at panix.com (Premise Checker) Date: Sun, 22 May 2005 18:09:25 -0400 (EDT) Subject: [Paleopsych] SW: Pigeon Homing and Highways Message-ID: Animal Cognition: Pigeon Homing and Highways http://scienceweek.com/2005/sc050211-1.htm The following points are made by H-P. Lipp et al (Current Biology 2004 14:1239): 1) The most widely accepted explanation for pigeon homing over distances of 20 km and more is that they rely on a "map-and-compass" strategy. 
It has remained undisputed that pigeons have an internal clock and an internal compass sense of the homeward direction, and that this latter sense depends on the position of the sun, if visible. Yet directional knowledge alone is not sufficient for successful homing, and so pigeons must also have a large-scale mental map containing information about their current position with regard to their loft [1]. 2) Mechanisms of position determination and the nature of the mental map used by homing pigeons have remained controversial for decades. Supporters of the magnetic theory of pigeon homing claim a predominant role of the earth's magnetic field for both compass and map mechanisms [2]. Others propose a major role of the olfactory system and atmospheric gradients [3,4]. Although vision is helpful yet not mandatory for successful long-distance homing [5], there is general agreement that pigeons rely at least partially on visual cues for flights within their familiar home range, 2-4 km around the loft. Whether the local visual information is used by the birds for homing from distant release sites -- a strategy termed "pilotage" -- has been equally controversial [2]. 3) Likewise, the nature of the objects used by pigeons for pilotage has been debated. Breeders of racing pigeons have often observed that large flocks of homing pigeons fly along major highways, and it is a familiar observation for most pigeon breeders that the birds often do not approach the home loft according to a straight compass direction from the release site. Early attempts to identify topographic guide-rails used by homing pigeons (e.g., roads, railways, powerlines) by means of airplane tracking have yielded equivocal results. Some studies reported positive evidence; helicopter tracking studies even found that pigeons were circling over road crossings. However, even in these positive cases, observations were rare and anecdotal. 
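With GPS logs, road-following need not rest on anecdote: it can be quantified, for example, as the fraction of a track's fixes lying within a threshold distance of a road polyline. A minimal sketch of that idea, in which the 200 m threshold and all coordinates are hypothetical illustrations, not values from any study:

```python
import math

def point_to_segment_distance(p, a, b):
    """Shortest distance from point p to segment a-b (planar coordinates, metres)."""
    (px, py), (ax, ay), (bx, by) = p, a, b
    dx, dy = bx - ax, by - ay
    if dx == dy == 0:
        return math.hypot(px - ax, py - ay)
    t = max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / (dx * dx + dy * dy)))
    return math.hypot(px - (ax + t * dx), py - (ay + t * dy))

def road_following_fraction(track, road, threshold_m=200.0):
    """Fraction of GPS fixes lying within threshold_m of any road segment."""
    near = 0
    for fix in track:
        d = min(point_to_segment_distance(fix, road[i], road[i + 1])
                for i in range(len(road) - 1))
        if d <= threshold_m:
            near += 1
    return near / len(track)

# Toy data: a road running east, and a track that hugs it for 3 of 4 fixes.
road = [(0.0, 0.0), (10000.0, 0.0)]
track = [(1000.0, 50.0), (3000.0, 120.0), (5000.0, 90.0), (7000.0, 2500.0)]
print(road_following_fraction(track, road))  # -> 0.75
```

In practice latitude/longitude fixes would first be projected to planar coordinates, but the scoring logic is the same.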
4) The authors present an analysis of 216 GPS-recorded pigeon tracks over distances up to 50 km. Experienced pigeons released from familiar sites during 3 years around Rome, Italy, were significantly attracted to highways and a railway track running toward home, in many cases without anything forcing them to follow such guide-rails. Birds often broke off from the highways when these veered away from home, but many continued their flight along the highway until a major junction, even when the detour added substantially to their journey. The degree of road following increased with repeated releases but not with flight length. Significant road following (in 40%-50% of the tracks) was mainly observed from release sites along a northwest-southeast axis. 5) The authors suggest their data demonstrate the existence of a learned road-following homing strategy of pigeons and the use of particular topographical points for final navigation to the loft. Apparently, the better-directed early stages of the flight compensated for the added final detour. During early and middle stages of the flight, following large and distinct roads is likely to reflect stabilization of a compass course rather than the presence of a mental roadmap. A cognitive (roadmap) component manifested by repeated crossing of preferred topographical points, including highway exits, is more likely when pigeons approach the loft area. However, it might only be expected in pigeons raised in an area characterized by navigationally relevant highway systems. References (abridged): 1. Gould, J.L. (2004). Animal navigation. Curr. Biol. 14, R221-R224 2. Wiltschko, R. and Wiltschko, W. (2003). Avian navigation: from historical to modern concepts. Anim. Behav. 65, 257-272 3. Papi, F. (1990). Olfactory navigation in birds. Experientia 46, 352-363 4. Wallraff, H.G. (2004). Avian olfactory navigation: its empirical foundation and conceptual state. Anim. Behav. 67, 189-204 5. Schmidt-Koenig, K. and Schlichte, H.-J. (1972). 
Homing in pigeons with impaired vision. Proc. Natl. Acad. Sci. USA 69, 2446-2447 Current Biology http://www.current-biology.com -------------------------------- Related Material: ANIMAL BEHAVIOR: ON MAGNETORECEPTION IN THE HOMING PIGEON The following points are made by C.V. Mora et al (Nature 2004 432:508): 1) Two conflicting hypotheses compete to explain how a homing pigeon can return to its loft over great distances. One proposes the use of atmospheric odors[1] and the other the Earth's magnetic field[2-4] in the "map" step of the "map and compass" hypothesis of pigeon homing[5]. Although magnetic effects on pigeon orientation provide indirect evidence for a magnetic "map", numerous conditioning experiments have failed to demonstrate reproducible responses to magnetic fields by pigeons. This has led to suggestions that homing pigeons and other birds have no useful sensitivity to the Earth's magnetic field. 2) The authors made a series of modifications to an existing operant conditioning procedure to fulfill two conditions that seem to be vital for magnetic discrimination learning in non-avian species. These are that (1) the magnetic stimulus discriminated is a localized, non-uniform magnetic anomaly superimposed on the uniform background field of the Earth, and (2) movement by the experimental subjects is necessary to produce the behavioral response measured in the experiments. Although this combination of experimental parameters militates against rapid achievement of powerful discrimination by separating the stimulus, response and reinforcement in both space and time -- compared with standard key-pecking experiments -- failure to fulfill either or both of the above conditions has characterized all the unsuccessful or irreproducible attempts to condition pigeons and many other species to magnetic fields. 
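Performance in operant discrimination experiments of this kind is conventionally summarized by the signal-detection sensitivity index d' (the z-transformed hit rate minus the z-transformed false-alarm rate). A minimal sketch, with entirely hypothetical trial counts rather than data from the study:

```python
from statistics import NormalDist

def d_prime(hits, misses, false_alarms, correct_rejections):
    """Sensitivity index d' from a Yes-No discrimination experiment.

    A log-linear correction (add 0.5 to each cell) keeps the
    z-transform finite when a rate would otherwise be exactly 0 or 1.
    """
    hit_rate = (hits + 0.5) / (hits + misses + 1.0)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(fa_rate)

# Hypothetical counts for one bird over 100 anomaly-present and
# 100 anomaly-absent trials (invented for illustration).
print(d_prime(hits=80, misses=20, false_alarms=30, correct_rejections=70))
```

A d' near zero means the bird cannot tell the anomaly from the background field; values above roughly 1 indicate reliable discrimination.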
3) Using a Yes-No signal-detection procedure, four individually trained pigeons were required to discriminate between the presence and absence of an induced magnetic field anomaly while freely walking in a wooden tunnel. The intensity profile of the anomaly was "wave-shaped" and peaked in the center of the tunnel at 189 microtesla (microT) (background level of 44 microT) with an inclination of -80 deg (background level of -64 deg). The birds were conditioned to jump onto a platform at one end of the tunnel when the anomaly was present and onto an identical platform at the other end of the tunnel when the anomaly was absent. Choice of the correct platform was rewarded with food, whereas incorrect choices were punished with a time penalty. 4) In summary: The authors demonstrate that homing pigeons (Columba livia) can discriminate between the presence and absence of a magnetic anomaly in a conditioned choice experiment. This discrimination is impaired by attachment of a magnet to the cere (part of the beak), local anaesthesia of the upper beak area, and bilateral section of the ophthalmic branch of the trigeminal nerve, but not of the olfactory nerve. These results suggest that magnetoreception (probably magnetite-based) occurs in the upper beak area of the pigeon. Traditional methods of rendering pigeons anosmic might therefore cause simultaneous impairment of magnetoreception so that future orientation experiments will require independent evaluation of the pigeon's magnetic and olfactory systems. References (abridged): 1. Papi, F., Fiore, L., Fiaschi, V. & Benvenuti, S. Olfaction and homing in pigeons. Monit. Zool. Ital. (N.S.) 6, 85-95 (1972) 2. Gould, J. L. The case for magnetic sensitivity in birds and bees (such as it is). Am. Sci. 68, 256-267 (1980) 3. Moore, B. R. Is the homing pigeon's map geomagnetic? Nature 285, 69-70 (1980) 4. Walcott, C. Magnetic orientation in homing pigeons. IEEE Trans. Magnet. 16, 1008-1013 (1980) 5. Kramer, G. 
Wird die Sonnenhöhe bei der Heimfindeorientierung verwertet? J. Ornithol. 94, 201-219 (1953) Nature http://www.nature.com/nature -------------------------------- Related Material: MAGNETITE IN A VERTEBRATE MAGNETORECEPTOR Notes by ScienceWeek: An enormous variety of what are essentially experiments in viability has occurred during the more than 3.5 billion years of biological evolution on Earth, and among these experiments is a striking diversity of biological devices that function to sense changes in the environment of the organism. Consider, for example, magnetic field detection: The following points are made by C.E. Diebel et al (Nature 2000 406:299): 1) The key behavioral, physiological, and anatomical components of a magnetite-based magnetic sense have been previously demonstrated in rainbow trout (Oncorhynchus mykiss), with candidate receptor cells located within a discrete sub-layer of the olfactory tissues (olfactory lamellae) in the nose of the trout. These receptor cells were shown to contain iron-rich crystals similar in size and shape to magnetite crystals extracted from salmon. 2) The authors now demonstrate that these crystals, mapped to individual receptors by *confocal and atomic force microscopy, are magnetic: the crystals are uniquely associated with dipoles detected by *magnetic force microscopy. Analysis of their magnetic properties identifies the crystals as *single-domain magnetite particles. In addition, 3-dimensional reconstruction of the candidate receptors using confocal and atomic microscopy imaging confirms that several magnetite crystals are arranged in a chain of approximately 1 micron length within the receptor, and that the receptor is a multi-lobed single cell. 3) The authors suggest these results are consistent with a magnetite-based detection mechanism, since 1-micron chains of single-domain magnetite crystals are highly suitable for the behavioral and physiological responses to magnetic intensity previously reported for the trout. 
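Why a micron-scale chain of single-domain magnetite is "highly suitable" as a field sensor can be checked with the standard energy argument: the chain's magnetic moment times the geomagnetic field should comfortably exceed the thermal energy kT. A rough sketch, in which the crystal size and chain length are assumed round numbers for illustration, not measurements from the paper:

```python
# Magnetic energy of an assumed magnetite chain vs thermal noise.
# Constants are standard; the chain geometry is an assumption.

K_B = 1.380649e-23      # Boltzmann constant, J/K
M_S = 4.8e5             # saturation magnetization of magnetite, A/m
B_EARTH = 50e-6         # geomagnetic field strength, T (~50 microtesla)
T = 300.0               # temperature, K

edge = 50e-9            # assumed crystal edge length, m
n_crystals = 10         # assumed chain length (~0.5-1 micron overall)

volume = n_crystals * edge**3          # total magnetite volume, m^3
moment = M_S * volume                  # magnetic moment, A*m^2
ratio = moment * B_EARTH / (K_B * T)   # magnetic vs thermal energy

print(f"mu*B / kT = {ratio:.1f}")  # -> mu*B / kT = 7.2
```

A ratio well above 1 means the chain's orientation in the Earth's field is not randomized by thermal agitation, so it can in principle transduce field direction and intensity.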
4) The authors conclude that "understanding should now be sought of how the chains of crystals could transduce a magnetic field into an electrical signal in the nervous system." Nature http://www.nature.com/nature -------------------------------- Notes by ScienceWeek: confocal and atomic force microscopy: In general, a "confocal microscope" is a microscope in which an aperture in the illuminating system confines the illumination to a small spot on the specimen, and a corresponding aperture in the imaging system (which may be the same aperture in reflecting and fluorescence devices) allows only light transmitted, reflected, or emitted by the same spot to contribute to the image. By suitable mechanical or optical means, the spots are made to scan the specimen as in a television raster. Compared to conventional microscopy, confocal techniques offer improved resolution and improved rejection of out-of-focus noise. In atomic force microscopy, a tip is fixed to a cantilever whose position is monitored while the tip scans the surface. The force between the tip and the surface determines the position of the cantilever. When recorded in atomic resolution, the image represents a map of atomic forces at the surface. The advantage of atomic force microscopy is that the probed surface does not need to be electrically conducting. magnetic force microscopy: This technique is capable of determining magnetic domain structure in a variety of magnetic materials, including small particles with a spatial resolution of less than 100 nanometers. Because it is sensitive to magnetic forces, the technique can also image magnetic structures that are covered by a layer of non-magnetic material. single-domain magnetite particles: An oxide of iron, magnetite (magnetic iron ore) is attracted by a magnet but does not attract particles of iron to itself. In this context, the term "domain" refers to a region in which magnetic moments are uniformly arrayed. 
From checker at panix.com Sun May 22 22:09:46 2005 From: checker at panix.com (Premise Checker) Date: Sun, 22 May 2005 18:09:46 -0400 (EDT) Subject: [Paleopsych] SW: On Social Selection for Eccentricity Message-ID: Evolution: On Social Selection for Eccentricity http://scienceweek.com/2005/sa050211-1.htm The following points are made by Michel Chapuisat (Current Biology 2004 14:1003): 1) In nature, colorful patterns usually constitute a signal; they may deter competitors, frighten predators, or attract mates. The standard view on animal signaling is that variation in ornamentation carries information about the condition and quality of the signaler [1,2]. For example, the black-and-yellow stripes of wasps are a signal of danger to other species. But there is more to it than that. Recently, Tibbetts [3] reported an experimental study showing that paper wasps use intraspecific variation in facial and abdominal markings to recognize individuals. A new comparative analysis by the same author [4] has revealed that species with a flexible nest-founding strategy have more variable markings than those with obligate single or multiple foundresses. This new work suggests that complex social interactions may select for individual distinctiveness and raises interesting questions about the costs and benefits of revealing individuality in social groups. 2) Polistes paper wasps form a widespread, species-rich group of social insects [5]. They build small, open paper nests in protected places. All paper wasps are eusocial: one or a few individuals monopolize reproduction, while other individuals defend the colony, forage, and care for the brood. After overwintering, mated females -- the queens -- found new nests. 
The species differ in their nest-founding habits, following one of three possible strategies: they may have an obligate single foundress, where only one queen starts a nest; they may have obligate multiple foundresses, where two or more queens start a nest together; or they may show flexible nest-founding, where either a single queen or multiple queens start a nest. 3) Paper wasp colonies are well known for having a dominance hierarchy [5]. In species with an obligate single foundress or obligate multiple foundresses, dominant queens usually monopolize all reproduction, and other females behave as workers. In species with a flexible nest-founding strategy, the social interactions tend to be more complex. There are even some theoretical and empirical indications that queens engage in reproductive transactions whereby they yield part of the reproductive potential to other females in order to make them stay and cooperate peacefully. Complex alliances of this kind require that wasps are able to accurately recognize individuals. 4) Polistes fuscatus individuals have highly variable markings on their face and abdomen, such as the presence or absence of conspicuous yellow eyebrows [3]. Together, these markings yield dozens of unique patterns, suggesting they may serve for visual recognition of individuals. Indeed, wasps that had experimentally altered markings were found to receive more aggression than control wasps that had been painted without altering their markings [3]. Importantly, the aggression was transient and declined with time as wasps became familiar with the new markings. This elegant study showed that wasps use visual cues to distinguish individuals. Further, it suggested that variable markings might undergo selection for improved individual recognition in species with complex social interactions. References (abridged): 1. Maynard Smith, J. and Harper, D. (2003). Animal signals. (Oxford: Oxford University Press) 2. 
In Animal signals: signalling and signal design in animal communication. (2000). Espmark, Y., Amundsen, T. and Rosenqvist, G. eds. (Trondheim, Norway: Tapir, Academic Press) 3. Tibbetts, E.A. (2002). Visual signals of individual identity in the wasp Polistes fuscatus. Proc. R. Soc. Lond. B 269, 1423-1428 4. Tibbetts, E.A. (2004). Complex social behavior can select for variability in visual features: a case study in Polistes wasps. Proc. R. Soc. Lond. B 271, 1955-1960 5. In Natural history and evolution of paper-wasps. (1996). Turillazzi, S. and West-Eberhard, M.J. eds. (Oxford: Oxford University Press) Current Biology http://www.current-biology.com -------------------------------- Related Material: ON HONEYBEE SOCIAL BEHAVIOR, GENES, AND THE ENVIRONMENT Notes by ScienceWeek: The so-called social insects live in societies that rival human societies in complexity and internal cohesion. Honey bees, for example, apparently always follow 3 rules: a) they live in colonies with overlapping generations; b) they care cooperatively for offspring other than their own; and, c) they maintain a reproductive division of labor. The following points are made by Gene E. Robinson (American Scientist 1998 86:456): 1) Genes do not play an exclusive role in regulating behavior: biologists have long realized that behavior is influenced by genes, the environment, and interactions between the two. 2) Genes never act alone. They must operate in an environment where they code for proteins that participate in many systems in an organism, with these systems in turn influencing the expression of genes. Consequently, biologists must take a broad approach in assessing the impact of any gene. 3) The research group of the author uses the Western honey bee, Apis mellifera. Honey bees pass through different life stages as they age, and their behavioral responses to environmental and social stimuli change in predictable ways. 
Although worker bees go through a consistent path of behavioral development, this path is not rigidly determined. Bees can accelerate, retard, or even reverse their behavioral development in response to changing environmental and colony conditions. 4) Experimental evidence indicates that juvenile hormone, one of the most important hormones influencing insect development, helps time the pace of behavioral maturation in honey bees. The rate of endocrine-mediated behavioral development is influenced by inhibitory social interactions. Older bees inhibit the behavioral development of younger bees: the rate of behavioral development is negatively correlated with the proportion of older bees in a colony. Inhibitory social interactions that influence the rate of behavioral development involve chemical communication between colony members. 5) Evidence from the author's laboratory in 1993 indicated that the so-called mushroom bodies in the bee brain are involved in the behavioral changes occurring during maturation: the volume of these bodies increases, and this increase is associated with additional synapses onto neurons from brain regions devoted to sensory input. The author suggests this was the first report of brain plasticity in an invertebrate. 6) The author suggests that, in general, two-way interactions between the nervous system and the genome contribute fundamentally to the control of social behavior. Information about social conditions that is acquired by the nervous system is likely to induce changes in genomic function that in turn produce adaptive modifications of the structure and function of the nervous system. 
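The negative feedback described above, in which older bees slow the maturation of younger ones, can be caricatured in a toy simulation: a young bee's daily probability of becoming a forager falls as the fraction of foragers in the colony rises. All rates here are invented for illustration; this is a cartoon of the feedback idea, not the author's model:

```python
import random

def simulate(n_bees=1000, days=200, base_p=0.2, inhibition=0.9,
             forager_mortality=0.1, seed=1):
    """Return the steady-state forager fraction of a toy colony."""
    random.seed(seed)
    foragers = 0
    for _ in range(days):
        frac = foragers / n_bees
        # Older bees (foragers) inhibit the maturation of younger bees.
        p = base_p * (1.0 - inhibition * frac)
        recruits = sum(random.random() < p for _ in range(n_bees - foragers))
        deaths = sum(random.random() < forager_mortality for _ in range(foragers))
        foragers += recruits - deaths  # dead foragers replaced by new brood
    return foragers / n_bees

print(f"forager fraction with inhibition:    {simulate():.2f}")
print(f"forager fraction without inhibition: {simulate(inhibition=0.0):.2f}")
```

With the inhibition term switched off, the steady-state forager fraction rises; conversely, depleting foragers lets recruitment recover, mirroring the accelerated development observed when colony composition changes.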
7) The author proposes a new research initiative called "sociogenomics", defined as a "wide-ranging approach to identify genes that influence social behavior, determining the influence of these genes on underlying neural and endocrine mechanisms, and exploring the effects of the environment -- particularly the social environment -- on gene action." American Scientist http://www.americanscientist.org -------------------------------- Related Material: ON GENES AND COMPLEX SOCIAL BEHAVIOR The following points are made by M.J. Krieger and K.G. Ross (Science 2002 295:328): 1) The evolution of complex social behavior is among the most important events in the history of life. Interest in the genes underlying the expression of key social traits is strong because knowledge of the genetic architecture will lead to increasingly realistic models of social evolution, while identification of the products of major genes can elucidate the molecular bases of social behavior. Few studies have succeeded in showing that complex social behaviors have a heritable basis, and fewer still have suggested that variation in these behaviors is attributable to the action of one or few genes of major effect. No candidate genes with major effects on key social polymorphisms have been identified previously. 2) The fire ant Solenopsis invicta displays a fundamental social polymorphism that appears to be under simple genetic control. A basic feature of fire ant colony social organization, the number of egg-laying queens, is associated with variation at the gene Gp-9. In the US, where this species has been introduced, colonies composed of workers bearing only the (B) allele at Gp-9 invariably have a single queen (monogyne social form), whereas colonies with workers bearing the alternate (b) allele have multiple queens (polygyne social form). The two social forms differ in many key reproductive and life history characteristics. 
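The reported genotype-to-social-form association in the introduced US populations is simple enough to state as a rule: colonies whose workers carry only the B allele at Gp-9 are monogyne, while colonies with workers bearing the alternate b allele are polygyne. A minimal sketch of that rule, with illustrative genotype strings:

```python
# Direct encoding of the Gp-9 rule stated above for introduced US
# populations of Solenopsis invicta. Genotype strings are illustrative.

def social_form(worker_genotypes):
    """Classify a fire-ant colony from its workers' Gp-9 genotypes."""
    if any('b' in g for g in worker_genotypes):
        return 'polygyne'   # multiple egg-laying queens
    return 'monogyne'       # a single queen

print(social_form(['BB', 'BB', 'BB']))  # -> monogyne
print(social_form(['BB', 'Bb', 'BB']))  # -> polygyne
```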
3) The authors report they sequenced the gene Gp-9 and found that it encodes a pheromone-binding protein, a crucial molecular component in same-species (conspecific) chemical recognition. The authors suggest this indicates that differences in worker Gp-9 genotypes between social forms may cause differences in the abilities of workers to recognize queens and regulate their numbers. The authors conclude: "This study demonstrates that single genes of major effect can underlie the expression of complex behaviors important in social evolution." From checker at panix.com Sun May 22 22:10:14 2005 From: checker at panix.com (Premise Checker) Date: Sun, 22 May 2005 18:10:14 -0400 (EDT) Subject: [Paleopsych] SW: On Social Parasitism Message-ID: Evolutionary Biology: On Social Parasitism http://scienceweek.com/2005/sa050114-1.htm The following points are made by J.A. Thomas and J. Settele (Nature 2004 432:283): 1) Ants are such formidable predators that perhaps 100,000 other species of insect have evolved mechanisms to coexist with them[1]. Adaptations include armor to resist attack, mimicry to avoid detection, and secretions such as honeydew to feed or appease them[2]. In general, both partners benefit: in return for honeydew, the ants protect aphids from enemies. But natural selection can also favor cheats. It is a short evolutionary step from possessing the attributes to live safely among ants to deploying them against a colony. 2) Thus, among insects as diverse as butterflies, crickets, beetles and flies are specialist "social parasites", perhaps 10,000 species in all, equipped to penetrate the highly protected chambers inside ant nests and feed, isolated from enemies, on the rich resources concentrated there. New work[3] has provided the first molecular-genetic reconstruction of one such evolutionary pathway, that of large blue butterflies (genus Maculinea), including the pathway's divergence into two remarkable strategies for exploiting ants. 
3) The large blues form a small genus that has become an icon for conservation across Europe and Asia. The adults fly in summer, laying eggs on specific plants. After two to three weeks of eating flowers, the caterpillar settles beneath its food plant to await discovery by red ants (Myrmica). By secreting hydrocarbons that mimic those made by Myrmica[4], the caterpillar tricks a foraging worker into taking it into the nest, where it is placed among the ant grubs. In most species -- the "predatory" large blues -- the caterpillar then moves to safer chambers, returning periodically to binge-feed on ant grubs. But in two "cuckoo" species, the caterpillars remain among the brood and become increasingly integrated with their society. Nurse ants feed them directly, neglecting their own brood, which may be cut up and recycled to feed the parasites[5]. 4) Cuckoo-feeding is an efficient way to exploit Myrmica, resulting in six times more butterflies per nest than is achieved by the predatory species. The downside is that social acceptance is won only through secreting chemicals that so closely match the recognition codes of one host species that survival with any other ant is unlikely. Thus, a typical population of a cuckoo Maculinea species depends exclusively on a single Myrmica species -- which, however, differs in different regions of Europe. Predatory Maculinea are more generalist; nevertheless, each species survives three to five times better with a single (and different) species of Myrmica. References (abridged): 1. Elmes, G. W. in Biodiversity Research and its Perspectives in East Asia (eds Lee, B. H., Kim, T. H. & Sun, B. Y.) 33-48 (Chonbuk Natl Univ., Korea, 1996) 2. Hoelldobler, B. & Wilson, E. O. The Ants (Springer, Berlin, 1990) 3. Als, T. D. et al. Nature 432, 386-390 (2004) 4. Akino, T., Knapp, J. J., Thomas, J. A. & Elmes, G. W. Proc. R. Soc. Lond. B 266, 1419-1426 (1999) 5. Elmes, G. W., Wardlaw, J. C., Schoenrogge, K. & Thomas, J. A. Ent. Exp. Appl. 
110, 53-63 (2004) Nature http://www.nature.com/nature -------------------------------- Related Material: AN INTERESTING CASE OF ANT-PLANT MUTUALISM Notes by ScienceWeek: In biology, symbiosis is an intimate and protracted association of individuals of different species, and mutualism is a type of symbiosis in which both participants receive benefits from the association. An intriguing mutualism is that between ants and Acacia trees. In East Africa, one finds Acacia trees that are "ant-guarded": the ants live on the trees inside modified thorns (pseudogalls), patrol the branches, and attack any insect or vertebrate herbivore, thus protecting the plant, but also preserving the plant for the use of the ant. But this plant requires cross-pollination by visiting insects in order to reproduce, and what one observes is that during the pollination periods the ant-guards essentially remain in the guard-house and cross-pollination by visiting insects proceeds without difficulty. Which of course provokes the question: what signals are involved in this delicate bit of cooperative maneuvering? P.G. Willmer and G.N. Stone (Nature 1997 388:165) report that during the pollination period, the young Acacia flowers apparently release a volatile chemical that deters the ant-guards. The ants thus patrol before and after pollination, but not during the pollination period itself. Nature http://www.nature.com/nature -------------------------------- Related Material: ANTAGONISTIC SYMBIOSIS: ABDUCTION OF ONE SPECIES BY ANOTHER The following points are made by McClintock and Baker (American Scientist May/Jun 1998): 1) Many interactions between species occur on a covert level and are difficult to perceive. These interactions are chemical, not physical, and rely on substances such as pheromones that attract mates, as well as toxins that repel or kill predators, competitors, and other enemies.
Chemical interactions can profoundly alter the conventional scenarios posited by ecologists studying predators and their prey. 2) The research of the authors has focused on secondary metabolites, chemicals that do not seem to be required for any of the primary metabolic processes such as energy production, respiration, or photosynthesis. They have found that sessile and sluggish organisms on the Antarctic ocean floor are much threatened by invertebrate predators and competitors, and the threatened organisms have developed chemical defenses to ward off their enemies. 3) In one unusual adaptation, the amphipod crustacean Hyperiella dilatata captures the sea butterfly Clione antarctica (a snail without a shell). The sea butterfly is held alive in a sustained forced attachment on the back of the crustacean, and experiments and analysis reveal that the sea butterfly secretes a previously undescribed beta-hydroxyketone that turns away fish that are normally predators of the crustacean. The authors suggest this unique association -- the abduction of one species by another -- is unprecedented in the annals of behavioral and chemical ecology. American Scientist http://www.americanscientist.org -------------------------------- Related Material: INITIAL CHEMICAL SIGNAL IN INSECT-PLANT-INSECT TROPHIC TRIANGLES Notes by ScienceWeek: Both corn and cotton plants, when attacked by plant-eating insects, release a volatile substance that specifically attracts other insects that are the natural predators of the plant-eating insects. The following points are made by J. H. Tumlinson et al (Science 1997 276:945): 1) The authors studied the trophic triangle of the beet armyworm caterpillar (Spodoptera exigua Hubner), corn seedlings (Zea mays L.), and the parasitic wasp (Cotesia marginiventris). 2) The authors have isolated and synthesized the chemical substance responsible for the initial signal. They have named the substance volicitin.
It is present in the oral secretions of the caterpillar, and it induces the damaged corn seedlings to release a volatile blend of terpenoids and indole, which calls in the parasitic female wasps that are the natural enemies of the caterpillars. The wasps lay eggs in the caterpillars, and the hatched larvae destroy the caterpillars by eating them. 3) Mechanically damaged plants exposed to synthetic volicitin, in the absence of caterpillar attack, release the usual volatiles that attract the wasps. Plants mechanically damaged but not exposed to volicitin do not release the volatiles. Science http://www.sciencemag.org From checker at panix.com Sun May 22 22:10:34 2005 From: checker at panix.com (Premise Checker) Date: Sun, 22 May 2005 18:10:34 -0400 (EDT) Subject: [Paleopsych] SW: On Ordinary People as Torturers Message-ID: Social Psychology: On Ordinary People as Torturers http://scienceweek.com/2005/sb050107-5.htm The following points are made by S.T. Fiske et al (Science 2004 306:1482): 1) Initial reactions to the events at Abu Ghraib prison in Iraq were shock and disgust. How could Americans be doing this to anyone, even to Iraqi prisoners of war? Some observers immediately blamed "the few bad apples" presumably responsible for the abuse. However, many social psychologists knew that it was not that simple. Society holds individuals responsible for their actions, as the military court-martial recognizes, but social psychology suggests we should also hold responsible peers and superiors who control the social context. 2) Social psychological evidence emphasizes the power of social context; in other words, the power of the interpersonal situation. Social psychology has accumulated a century of knowledge about how people influence each other for good or ill [1]. Meta-analysis, the quantitative summary of findings across a variety of studies, reveals the size and consistency of such empirical results. 
Recent meta-analyses document reliable experimental evidence of social context effects across 25,000 studies of 8 million participants [2]. Abu Ghraib resulted in part from ordinary social processes, not just extraordinary individual evil. Meta-analyses suggest that the right (or wrong) social context can make almost anyone aggress, oppress, conform, and obey. 3) Virtually anyone can be aggressive if sufficiently provoked, stressed, disgruntled, or hot [3-5]. The situation of the 800th Military Police Brigade guarding Abu Ghraib prisoners fit all the social conditions known to cause aggression. The soldiers were certainly provoked and stressed: at war, in constant danger, taunted and harassed by some of the very citizens they were sent to save, and their comrades were dying daily and unpredictably. Their morale suffered, they were untrained for the job, their command climate was lax, their return home was a year overdue, their identity as disciplined soldiers was gone, and their own amenities were scant. Heat and discomfort also doubtless contributed. 4) The fact that the prisoners were part of a group encountered as enemies would only exaggerate the tendency to feel spontaneous prejudice against outgroups. In this context, oppression and discrimination are synonymous. One of the most basic principles of social psychology is that people prefer their own group and attribute bad behavior to outgroups. Prejudice especially festers if people see the outgroup as threatening cherished values. This would have certainly applied to the guards viewing their prisoners at Abu Ghraib, but it also applies in more "normal" situations. A recent sample of US citizens on average viewed Muslims and Arabs as not sharing their interests and stereotyped them as not especially sincere, honest, friendly, or warm.
5) Even more potent predictors of discrimination are the emotional prejudices ("hot" affective feelings such as disgust or contempt) that operate in parallel with cognitive processes. Such emotional reactions appear rapidly, even in neuroimaging of brain activations to outgroups. But even they can be affected by social context. Categorization of people as interchangeable members of an outgroup promotes an amygdala response characteristic of vigilance and alarm and an insula response characteristic of disgust or arousal, depending on social context; these effects dissipate when the same people are encountered as unique individuals. References (abridged): 1. S. T. Fiske, Social Beings (Wiley, New York, 2004) 2. F. D. Richard, C. F. Bond, J. J. Stokes-Zoota, Rev. Gen. Psychol. 7, 331 (2003) 3. B. A. Bettencourt, N. Miller, Psychol. Bull. 119, 422 (1996) 4. M. Carlson, N. Miller, Sociol. Soc. Res. 72, 155 (1988) 5. M. Carlson, A. Marcus-Newhall, N. Miller, Pers. Soc. Psychol. Bull. 15, 377 (1989) Science http://www.sciencemag.org -------------------------------- Related Material: MEDICAL BIOLOGY: ON SURVIVING TORTURE The following points are made by Richard F. Mollica (New Engl. J. Med. 2004 351:5): 1) The shocking, unfiltered images from the Abu Ghraib prison in Iraq have focused the world's attention on the plight of torture survivors. Physicians in the US are confronted as never before with the need to identify and treat the physical and psychological sequelae of extreme violence and torture. Yet this is not a new role for medical practitioners. More than 45 countries are currently suffering from the destruction caused by mass violence.(1) The 20th century has been called the "refugee century", with tens of millions of people violently displaced from their homes. 
Millions of these people have resettled in the US, and refugees, asylum seekers, and illegal immigrants now commonly enter our health care institutions.(2) 2) Despite routine exposure to the suffering of victims of human brutality, health care professionals tend to shy away from confronting this reality. The author states that he and his colleagues have cared for more than 10,000 torture survivors, and in their experience, whether in Bosnia and Herzegovina, Cambodia, East Timor, or the US, clinicians avoid addressing torture-related symptoms of illness because they are afraid of opening a Pandora's box: they believe they will not have the tools or the time to help torture survivors once they have elicited their history. 3) Unfortunately, survivors and clinicians may conspire to create a relationship founded on the avoidance of all discussion of trauma. In one instance, a middle-aged Cambodian woman had had an excellent relationship with her American doctor for nine years, but he had no idea that she had been tortured. He had had only partial success in controlling her type 2 diabetes. After attending a training session on treating the effects of terrorism after the events of September 11, 2001, the doctor asked the patient for the first time whether she had undergone extreme violence or torture. She revealed that two of her children had died of starvation in Cambodia, her husband had been taken away violently and disappeared, and she had been sexually violated under the Khmer Rouge. More recently, in the US, her remaining daughter had been nearly fatally stabbed by a gang that burglarized her home. Since September 11, the patient had taken to barricading herself in her house, leaving only to see her doctor. When the doctor became aware of the patient's traumatic history, he used a screening tool to explore the effects of her traumas, diagnosing major depression. 
Over time, he was able to treat the depression with medication and counseling, eventually bringing the diabetes under control as well. 4) The author concludes: Torture and its human and social effects are now in the global public eye. Medical professionals must relinquish their fears and take the lead in healing the wounds inflicted by the most extreme acts of human aggression. Commitment to a process that begins with a simple but courageous act -- asking the right question -- bespeaks the belief that medicine is a potent antidote to the practices of torturers.(3-5) References: 1. Krug EG, Dahlberg LL, Mercy JA, Zwi AB, Lozano R, eds. World report on violence and health. Geneva: World Health Organization, 2002. 2. Bramsen I, van der Ploeg HM. Use of medical and mental health care by World War II survivors in the Netherlands. J Trauma Stress 1999;12:243-261 3. Goldfeld AE, Mollica RF, Pesavento BH, Faraone SV. The physical and psychological sequelae of torture: symptomatology and diagnosis. JAMA 1988;259:2725-2729. [Erratum, JAMA 1988;260:478] 4. Mollica RF. Waging a new kind of war: invisible wounds. Sci Am 2000;282:54-57 5. Cassano P, Fava M. Depression and public health: an overview. J Psychosom Res 2002;53:849-857 New Engl. J. Med. http://www.nejm.org -------------------------------- Related Material: NEUROBIOLOGY: ON THE BRAIN AND VIOLENCE Notes by ScienceWeek: Although human violence has been a major focus of research in psychiatry, psychology, and the social sciences, neurobiological studies of human violence have been relatively uncommon. Neurobiology, however, is a major component in our understanding of human behavior: genetics, environment, brain structure and brain function are all involved in both ordinary behavior and in violent behavior. The following points are made by C.M. Filley et al (The Scientist 2001 2 Apr): 1) The authors point out that in adults, the role of brain damage in violence remains unclear.
A brain lesion by itself is rarely sufficient to cause violent behavior, and most individuals with brain damage do not commit criminal acts. But we cannot assume that the brains of violent individuals are invariably normal. The neurologic status of the brains of violent persons has not been adequately assessed by detailed neurological examination, neuropsychological testing, *magnetic resonance imaging, or *functional neuroimaging. Studies of murderers have suggested a high prevalence of neurologic dysfunction, and some individuals with traumatic brain injury, epilepsy, dementia, and sleep disorders have been observed to exhibit excessive violence. Violence is more likely among those with severe mental illness, particularly psychosis, and violence is exacerbated by the use of alcohol and other psychoactive substances. 2) The authors point out that detailed analysis of the neurobehavioral aspects of violence is complex: a) The cause of violence is multifactorial, and a direct correlation between brain dysfunction and a violent act is rarely possible. b) Identification of brain lesions is imperfect given the limitations of diagnostic classifications, the limitations of the neurologic examination, the limitations of neuroimaging technologies, the limitations of neuropsychological assessment, and the limitations of neurochemical analysis. c) Some subject samples, such as prisoners or those with severe neurologic or psychiatric disease, are necessarily based on violent persons who are apprehended or hospitalized. Conclusions are therefore based only on those whose records are analyzed, and the potential for violence in the general population remains unknown. 3) There is the possibility of a neurogenetic contribution to violent behavior. Although no single gene for human violence has been discovered, data from molecular genetics indicate that multiple genes may interact to predispose individuals to violent behavior. 
Observations in mouse *knockout models have suggested that targeted disruption of single genes can induce aggressiveness in males and diminish nurturing in females. Aggression in animals and humans is also likely related to genes regulating central nervous system *serotonin metabolism. 4) In general, males are much more likely to commit violent acts than are females, but genetic factors may not explain this discrepancy. Socioeconomic and cultural influences play a major role. Unemployment, lower educational level, alcohol abuse, and access to firearms all contribute to violent crime among males. The *XYY chromosomal disorder serves to highlight difficulties in establishing an influence of gender on violence. 5) Although no "violence center" exists in the brain, the *limbic system and the *frontal lobes are areas most implicated in violence. The limbic system is the neuroanatomic substrate for many aspects of emotion. The limbic system structure most often implicated in violent behavior is the *amygdala: placidity has been described in humans with bilateral amygdala damage, whereas violence has been observed in those with abnormal electrical activity in the amygdala. The frontal lobes are apparently the areas of the most advanced functions of the brain. In particular, the *orbitofrontal cortices are involved in the inhibition of aggression: individuals with orbitofrontal injury have been found to display antisocial traits that justify the diagnosis of "acquired sociopathy", and some of these individuals have an increased risk of violent behavior. A balance apparently exists between the potential for impulsive aggression mediated by limbic structures, and the control of this drive by the influence of the orbitofrontal regions. 6) The authors conclude: "Whereas dysfunction of a discrete brain region, isolated neurochemical system, or single gene will not likely emerge as a direct cause of violence, all may contribute." 
The Scientist http://www.the-scientist.com -------------------------------- Notes by ScienceWeek: magnetic resonance imaging: Magnetic resonance imaging (MRI) is essentially a technique for examining morphology (as opposed to _functional_ magnetic resonance imaging, which is a technique for examining anatomical correlates of function). In general, MRI involves magnetic coils producing a static magnetic field parallel to the long axis of the patient or subject, combined with inner concentric magnetic coils producing a static magnetic field perpendicular to the long axis. A radio-frequency coil specifically designed for the head perturbs the static fields to generate a magnetic resonance image. The interaction physics in this technique is that between the magnetic fields and atomic nuclei in brain tissue. "Sliced" views can be obtained from any angle, and the resolution is quite high and on the order of millimeters for magnetic field strengths of 1.5 tesla. functional neuroimaging: Functional magnetic resonance imaging (fMRI) is based on the fact that oxyhemoglobin, the oxygen-carrying form of hemoglobin, has a different magnetic resonance signal than deoxyhemoglobin, the oxygen-depleted form of hemoglobin. Activated brain areas utilize more oxygen, which transiently decreases the levels of oxyhemoglobin and increases the levels of deoxyhemoglobin, and within seconds the brain microvasculature responds to the local change by increasing the flow of oxygen-rich blood into the active area. This local response thus leads to an increase in the oxyhemoglobin-deoxyhemoglobin ratio, which forms the basis for the fMRI signal in this technique. Because of its high spatial resolution (millimeters) and high temporal resolution (seconds) compared to other imaging techniques, fMRI is now the technology of choice for studies of the functional architecture of the human brain. 
Positron emission tomography (PET) is a technique for producing cross-sectional images of the body after administration and systemic distribution of safely metabolized positron-emitting agents. The images are essentially functional or metabolic, since the administered agents are metabolized in various tissues. Fluorodeoxyglucose and H(sub2)O(sup15) are common agents used for cerebral applications, and in cerebral applications of central importance to the technique is the fact that changes in the cellular activity of the brains of normal, awake humans and unanesthetized laboratory animals are invariably accompanied by changes in local blood flow and also changes in oxygen consumption. knockout models: In general, in this context, "knockout technology" involves the generation of a mutant organism (usually a mouse) with a missing specific gene. serotonin metabolism: Serotonin is a neurotransmitter involved in nearly everything occurring in the brain, including psychological states such as anxiety and depression, and dysfunctions producing migraine and epilepsy. XYY chromosomal disorder: Humans ordinarily have 46 chromosomes. Of this number, 44 are not sex-related and are called "autosomal". Two chromosomes, X and Y, are sex-related. An individual with two X chromosomes is a female; an individual with one X and one Y chromosome is a male. Approximately 1 in 1000 males has an extra Y chromosome (total 47 chromosomes), and this abnormality is denoted as "47,XYY". Such individuals are often characterized by tallness, severe acne, and sometimes skeletal malformations and mental deficiency. It has been suggested that the presence of an extra Y chromosome in an individual may cause him to be more aggressive and prone to criminal behavior, but recent studies of the general population have cast doubt on the validity of this linkage. limbic system: In general, this refers to those cortical and subcortical structures ("cortical" refers to cerebral cortex) concerned with the emotions.
The most prominent anatomical components of the limbic system are the cingulate gyrus, the hippocampus, and the amygdala, all "deep brain" structures and not visible on the exterior surface of the brain. frontal lobes: The frontal lobe is one of the four lobes of the brain. The other lobes are the parietal lobe, the temporal lobe, and the occipital lobe. Each hemisphere has these 4 lobes. amygdala: A cellular complex in the temporal lobe that forms part of the limbic system. The major functional correlates of the amygdala are autonomic nervous system behavior, emotional behavior, and sexual behavior. orbitofrontal cortices: The orbitofrontal cortex lies at the base of the frontal lobes, directly above the eye sockets (orbits), just behind the forehead. From ljohnson at solution-consulting.com Sun May 22 22:15:19 2005 From: ljohnson at solution-consulting.com (Lynn D. Johnson, Ph.D.) Date: Sun, 22 May 2005 16:15:19 -0600 Subject: [Paleopsych] Re: Paleopsych In-Reply-To: <001201c55f16$aa30eee0$873e4346@yourjqn2mvdn7x> References: <193.404e4571.2fc169b7@aol.com> <001201c55f16$aa30eee0$873e4346@yourjqn2mvdn7x> Message-ID: <42910477.4080605@solution-consulting.com> Val, this is a fascinating contribution. Did you cc it to Lipton himself? I read the book a few months back and found it stimulating, but I was out of my depth, so I appreciate your adding such background. The story about Goldschmidt's books being removed was particularly disturbing but all too familiar. Lynn Johnson Val Geist wrote: > Dear Howard, > > > > Steve in a recent posting pointed to Dr. Bruce Lipton's work and book > on the Biology of Belief. And Stephen is right in judging this an > important development. I have not read the book, but opened up > Lipton's website, read an essay and heard an interview, and I think I > understand what Lipton is all about. Three cheers for his emerging - > however, I found absolutely nothing new in principle, only in > particular.
He is in a line of brilliant "epigeneticists" to emerge, > and he has apparently gotten much the same treatment eminent > epigeneticists before him have received. And that is what I want to > deal with first. > > > > I was disappointed looking into his list of readings that none of > these eminent men were cited, be they Richard Goldschmidt, C. H. > Waddington (who coined the word epigenetics over half a century ago) > or more recently Soren Lovtrup. All challenged the conventional wisdom > that heredity is deterministic, all challenged the conventional > Neo-Darwinian wisdom - most shamelessly championed currently by > Richard Dawkins - and all suffered severe rejection by Evolution's > elite, none more than Richard Goldschmidt (1942). When I looked for > that book, having heard all along how terribly mistaken Goldschmidt > was, I could not find it in my university's library. I discovered that > his books had been systematically removed. That this should happen to > one of the century's truly great cytogeneticists was puzzling. At > that time I suddenly ran into a fresh edition of Goldschmidt's book, > reprinted at the behest of Stephen J. Gould. It is highly instructive > to read from his pen just how the great Richard Goldschmidt was > suddenly shunned. It boils down to three young turks going after him > and destroying him: Ernst Mayr, George Gaylord Simpson and Bernhard > Rensch. The means of destruction was ridicule using phrases out of > context. There was little attention paid to the possibility that a > great mind, using his enormously rich experience and background at the > end of a long life and prolific publication record, was struggling > to tell us something that he considered vitally important, but > which clashed with conventional wisdom. To expedite matters: > Goldschmidt's denigration robbed all but a few of us of the idea of > flexible gene expression controlled by environment as fundamental to > understanding speciation and thus evolution.
And without that, there > is no way that flexible environmentally controlled gene expression > could enter into other fields - such as medicine! > > > > Next in line to be denigrated was the great embryologist, scholar of > evolution and, during WWII, father of operations research on behalf of > the British defense establishment, C. H. Waddington. That's the father > of Epigenetics. Note the title of one of his books: 1957 The Strategy > of the Genes. If my memory serves me right, he was haughty and did not > mince words and was openly contemptuous of Neo-Darwinism where > Goldschmidt tried to be polite. When during a conversation with Ernst > Mayr I mentioned Waddington, Mayr became irritated, in fact > noticeably so, and dismissed Waddington as someone who would not fit > in. That is remarkable as Mayr cites Waddington repeatedly in his own > magnum opus, but in a remarkably limited way. The huge irony is that > Mayr then writes in one of his last books on the Great Synthesis that, > tragically, the embryologists have been left out of this great > evolutionary synthesis. Having read Waddington, Ernst Mayr, a mind not > to be trifled with - but also Dawkins and Stephen J. Gould - could not > see what Waddington saw, namely that phenotype plasticity was > profoundly important to explain much of evolution that neo-Darwinism > did not. I am totally puzzled by that conceptual blindness. > > > > Next in line is the great Swedish epigeneticist Soren Lovtrup. He > wrote Epigenetics: A Treatise on Theoretical Biology (1974), The > Phylogeny of Vertebrata (1977), and Darwinism: The Refutation of a > Myth (Søren Løvtrup, 1987, Croom Helm, > ISBN 0-7099-4153-6, 469 pages). He is a vociferous, but highly > substantive, opponent of Neo-Darwinism, and is totally ignored! I had > the pleasure of meeting him a number of times and of spending > nearly a fortnight together in South Africa and Botswana's Okavango Delta.
He > has given up trying to communicate with the older generation, pinning > his hope on the younger - which is also ignoring his excellent > scholarship - see Bruce Lipton! Unfortunately, Lovtrup is not the best > communicator, as well as being a very angry old man. However, taking > the time and effort to read him is rewarding. > > > > Lipton is a re-discoverer of epigenetics, as was I. He is dead on, but > his discoveries are in principle not new, and it's a pity he does not > integrate his discoveries with earlier findings, strengthening his > case and giving due credit where credit is due. As I age I see again > and again the same wheels being reinvented under different names - and > Lipton is a case in point. I was delighted to hear that he, in the area > of health, identified some of the same processes that I did and > reported on in my 1978 Life Strategies.. book. After all, the subtitle > of that book is: Towards a biological theory of health. Independent > re-discovery greatly strengthens the case for epigenetics, as this > concept is played out with different facts. And I am delighted that > Lipton pulled this concept into the area of health. In short: bring > epigenetics into our understanding of Paleopsychology and quality of life. > > ----- Original Message ----- > From: HowlBloom at aol.com > To: paul.werbos at verizon.net ; > paleopsych at paleopsych.org > Sent: Saturday, May 21, 2005 9:51 PM > Subject: [Paleopsych] Paul--is this yours? > > I ran across the following today while doing research for a > national radio show--Coast to Coast--I'm about to do in 70 minutes > (2am to 5am EST the night of 5/21 and morning of 5/22). It's > exciting and sounds like the space-borne solar array you were > talking about in an email several days ago. > > Is it one of the projects you've helped along?
Howard > > Modules of the USC School of Engineering's Information Sciences > Institute (ISI) proposed half-mile-long Space Solar Power System > satellite self-assemble with what the researchers call "hormonal" > software, funded by a consortium including NASA, the NSF, and the > Electric Power Research Institute (EPRI) > > ---------- > Howard Bloom > Author of The Lucifer Principle: A Scientific Expedition Into the > Forces of History and Global Brain: The Evolution of Mass Mind > From The Big Bang to the 21st Century > Visiting Scholar-Graduate Psychology Department, New York > University; Core Faculty Member, The Graduate Institute > www.howardbloom.net > www.bigbangtango.net > Founder: International Paleopsychology Project; founding board > member: Epic of Evolution Society; founding board member, The > Darwin Project; founder: The Big Bang Tango Media Lab; member: New > York Academy of Sciences, American Association for the Advancement > of Science, American Psychological Society, Academy of Political > Science, Human Behavior and Evolution Society, International > Society for Human Ethology; advisory board member: > Youthactivism.org; executive editor -- New Paradigm book series. > For information on The International Paleopsychology Project, see: > www.paleopsych.org > for two chapters from > The Lucifer Principle: A Scientific Expedition Into the Forces of > History, see www.howardbloom.net/lucifer > For information on Global Brain: The Evolution of Mass Mind from > the Big Bang to the 21st Century, see www.howardbloom.net > > ------------------------------------------------------------------------ > _______________________________________________ > paleopsych mailing list > paleopsych at paleopsych.org > http://lists.paleopsych.org/mailman/listinfo/paleopsych
From kendulf at shaw.ca Mon May 23 18:25:10 2005 From: kendulf at shaw.ca (Val Geist) Date: Mon, 23 May 2005 11:25:10 -0700 Subject: [Paleopsych] Re: Paleopsych References: <193.404e4571.2fc169b7@aol.com> <001201c55f16$aa30eee0$873e4346@yourjqn2mvdn7x> <42910477.4080605@solution-consulting.com> Message-ID: <006c01c55fc4$c17116d0$873e4346@yourjqn2mvdn7x> Dear Lynn, Thanks for the message. I did not cc my earlier message to Lipton. I have since listened to a radio interview and I am a bit disappointed. It sounded a touch too simplistic and the politics a bit naive, which did not appear so in his writings. He is trying to sell his book and pulling out all the stops. I wished for a bit more disinterested scholarship and a touch less self-hero worship. However, he does point out crucially important matters. Best regards, Val Geist ----- Original Message ----- From: Lynn D. Johnson, Ph.D. To: The new improved paleopsych list Sent: Sunday, May 22, 2005 3:15 PM Subject: Re: [Paleopsych] Re: Paleopsych Val, this is a fascinating contribution. Did you cc it to Lipton himself? I read the book a few months back and found it stimulating, but I was out of my depth, so I appreciate your adding such background. The story about Goldschmidt's books being removed was particularly disturbing but all too familiar. Lynn Johnson Val Geist wrote: Dear Howard, Steve in a recent posting pointed to Dr. Bruce Lipton's work and book on the Biology of Belief. And Stephen is right in judging this an important development.
I have not read the book, but opened up Lipton's website, read an essay and heard an interview, and I think I understand what Lipton is all about. Three cheers for his emerging - however, I found absolutely nothing new in principle, only in particular. He is in a line of brilliant "epigeneticists" to emerge, and he has apparently gotten much the same treatment eminent epigeneticists before him have received. And that is what I want to deal with first. I was disappointed, looking into his list of readings, that none of these eminent men were cited, be they Richard Goldschmidt, C. H. Waddington (who coined the word epigenetics over half a century ago) or, more recently, Soren Lovtrup. All challenged the conventional wisdom that heredity is deterministic, all challenged the conventional Neo-Darwinian wisdom - most shamelessly championed currently by Richard Dawkins - and all suffered severe rejection by evolution's elite, none more than Richard Goldschmidt (1942). When I looked for that book, having heard all along how terribly mistaken Goldschmidt was, I could not find it in my university's library. I discovered that his books had been systematically removed. That this should happen to one of the century's truly great cytogeneticists was puzzling. At that time I suddenly ran into a fresh edition of Goldschmidt's book, reprinted at the behest of Stephen J. Gould. It is highly instructive to read from his pen just how the great Richard Goldschmidt was suddenly shunned. It boils down to three young Turks going after him and destroying him: Ernst Mayr, George Gaylord Simpson and Bernhard Rensch. The means of destruction was ridicule, using phrases out of context. There was little attention paid to the possibility that a great mind, drawing on his enormously rich experience and background at the end of a long life and prolific publication record, was struggling to tell us something that he considered vitally important, but which clashed with conventional wisdom.
To expedite matters: Goldschmidt's denigration robbed all but a few of us of the idea of flexible gene expression controlled by environment as fundamental to understanding speciation and thus evolution. And without that, there is no way that flexible, environmentally controlled gene expression could enter into other fields - such as medicine! Next in line to be denigrated was the great embryologist, scholar of evolution and, during WWII, father of operations research on behalf of the British defense establishment, C. H. Waddington. That's the father of epigenetics. Note the title of one of his books: The Strategy of the Genes (1957). If my memory serves me right, he was haughty, did not mince words and was openly contemptuous of Neo-Darwinism, where Goldschmidt tried to be polite. When during a conversation with Ernst Mayr I mentioned Waddington, Mayr became irritated, in fact noticeably so, and dismissed Waddington as someone who would not fit in. That is remarkable, as Mayr cites Waddington repeatedly in his own magnum opus, but in a remarkably limited way. The huge irony is that Mayr then writes in one of his last books on the Great Synthesis that, tragically, the embryologists have been left out of this great evolutionary synthesis. Having read Waddington, I cannot understand why Ernst Mayr, a mind not to be trifled with - but also Dawkins and Stephen J. Gould - could not see what Waddington saw, namely that phenotypic plasticity was profoundly important to explaining much of evolution that neo-Darwinism did not. I am totally puzzled by that conceptual blindness. Next in line is the great Swedish epigeneticist Soren Lovtrup. He wrote Epigenetics: A Treatise on Theoretical Biology (1974), The Phylogeny of Vertebrata (1977), and Darwinism: The Refutation of a Myth (Croom Helm, 1987, ISBN 0-7099-4153-6, 469 pages). He is a vociferous, but highly substantive, opponent of Neo-Darwinism, and is totally ignored!
I had the pleasure of meeting him a number of times and spending nearly a fortnight together in South Africa and Botswana's Okavango Delta. He has given up trying to communicate with the older generation, pinning his hope on the younger - which is also ignoring his excellent scholarship - see Bruce Lipton! Unfortunately, Lovtrup is not the best communicator, as well as being a very angry old man. However, taking time and effort to read him is rewarding. Lipton is a re-discoverer of epigenetics, as was I. He is dead on, but his discoveries are in principle not new, and it's a pity he does not integrate his discoveries with earlier findings, strengthening his case and giving due credit where credit is due. As I age I see again and again the same wheels being reinvented under different names - and Lipton is a case in point. I was delighted to hear that he, in the area of health, identified some of the same processes that I did and reported on in my 1978 Life Strategies... book. After all, the subtitle of that book is: Towards a biological theory of health. Independent re-discovery greatly strengthens the case for epigenetics, as this concept is played out with different facts. And I am delighted that Lipton pulled this concept into the area of health. In short: bring epigenetics into our understanding of Paleopsychology and quality of life. ----- Original Message ----- From: HowlBloom at aol.com To: paul.werbos at verizon.net ; paleopsych at paleopsych.org Sent: Saturday, May 21, 2005 9:51 PM Subject: [Paleopsych] Paul--is this yours? I ran across the following today while doing research for a national radio show--Coast to Coast--I'm about to do in 70 minutes (2am to 5am EST the night of 5/21 and morning of 5/22). It's exciting and sounds like the space-borne solar array you were talking about in an email several days ago. Is it one of the projects you've helped along?
Howard Modules of the USC School of Engineering's Information Sciences Institute (ISI) proposed half-mile-long Space Solar Power System satellite self-assemble with what the researchers call "hormonal" software, funded by a consortium including NASA, the NSF, and the Electric Power Research Institute (EPRI) ---------- Howard Bloom Author of The Lucifer Principle: A Scientific Expedition Into the Forces of History and Global Brain: The Evolution of Mass Mind From The Big Bang to the 21st Century Visiting Scholar-Graduate Psychology Department, New York University; Core Faculty Member, The Graduate Institute www.howardbloom.net www.bigbangtango.net Founder: International Paleopsychology Project; founding board member: Epic of Evolution Society; founding board member, The Darwin Project; founder: The Big Bang Tango Media Lab; member: New York Academy of Sciences, American Association for the Advancement of Science, American Psychological Society, Academy of Political Science, Human Behavior and Evolution Society, International Society for Human Ethology; advisory board member: Youthactivism.org; executive editor -- New Paradigm book series. For information on The International Paleopsychology Project, see: www.paleopsych.org for two chapters from The Lucifer Principle: A Scientific Expedition Into the Forces of History, see www.howardbloom.net/lucifer For information on Global Brain: The Evolution of Mass Mind from the Big Bang to the 21st Century, see www.howardbloom.net
From checker at panix.com Mon May 23 19:44:32 2005 From: checker at panix.com (Premise Checker) Date: Mon, 23 May 2005 15:44:32 -0400 (EDT) Subject: [Paleopsych] NYT: Advertisers Want Something Different Message-ID: Advertisers Want Something Different http://www.nytimes.com/2005/05/23/business/media/23adco.html By STUART ELLIOTT BBDO Worldwide in New York, General Electric's longtime advertising agency, was not getting the message. The agency had been offering G.E. its panoply of traditional marketing ideas, leaning heavily on the standard 30-second television spot. But Judy Hu, general manager for global advertising and branding at G.E., demanded something daring. What she eventually got fit the bill: an online campaign with a virtual sprouting seed that computer users can tend and send to people they know by e-mail. "They kept bringing us what they thought we wanted," said Ms. Hu of her exchange with BBDO a couple years ago. "It took a while to make them believe we wanted something different." The world of advertising turns upside down when the advertisers - not the agencies - are the ones pushing the envelope.
But that is what has been happening. The advertising business is undergoing an upheaval, forcing executives to radically change how they do business. Marketers are trying desperately to stay ahead of the technological innovations that are changing how consumers view their messages - and are putting pressure on their agencies to adapt. The ad firms are more eager to please than ever. The major public agencies face shrinking profit margins and sagging stock prices, leading to a shakeout and a frenzied effort to cut costs. It's unclear if the traditional agencies will be nimble enough to halt a slow decline. Already, many famous names are vanishing: N. W. Ayer; Bates; Bozell; D'Arcy Masius Benton & Bowles; Earle Palmer Brown; Lintas; Warwick Baker O'Neill. The big agencies also face a throng of hip new rivals, which have pounced on the opportunity and are looking to steal business. Those boutiques use their oddball names - like 180, Amalgamated, Mother, Nitro, Soul, StrawberryFrog, Taxi and Zig - as branding devices to signal they are not about business as usual. "Clients are looking at the results they're getting and they're not happy," said Miles S. Nadal, chairman and chief executive of MDC Partners in Toronto, the parent of innovative, creatively focused agencies like Crispin Porter & Bogusky and Kirshenbaum Bond & Partners. "Historically, agencies pushed clients," Mr. Nadal said. "Today, clients are pushing the agencies. The same-old, same-old is not being accepted." But some agencies may be moving too slowly. "There's an incredible ability to cling to what's been done because there's a comfort in that," said Ian B. Rowden, executive vice president and chief marketing officer for the Wendy's brand at Wendy's International in Dublin, Ohio. "There's a lot of talk but less action," said Mr. Rowden, who has also held senior marketing posts at Coca-Cola and Callaway Golf. "The old model still drives a lot of things."
The origins of the industry's current problems are many: the dot-com bust, the fallout from 9/11 and the explosive growth of technologies that help consumers avoid ads - like digital video recorders, iPods and satellite radio. Madison Avenue is still trying to regain its footing. Industry employment, which peaked at 496,500 in 2000, fell 14.4 percent, to 424,900, last year, according to the Labor Department. "The onus is on the agencies to make sure they have the right creative talent," said Lauren Rich Fine, an analyst who follows the ad industry for Merrill Lynch, but "I suspect that's more difficult than ever," she added, after the "massive layoffs of the last few years." Ad spending in the United States, which once grew reliably year after year, declined in 2001 for the first time in four decades - and by the largest percentage since the Depression year of 1938. While ad spending has rebounded since then, the growth rate is slower than during its heyday of the 1990's. "I used to think the agencies were capable of double-digit revenue growth" each year, Ms. Fine said, but "now I look at them as mid-to-high single digits." Worse yet for agencies, profit margins have been shrinking significantly as clients, facing relentless competition and consolidation in categories like automobiles, fast food and telecommunications, are anxiously squeezing every nickel of waste from their ad budgets. "In the 80's, we used to fight with clients over creative. In the 90's, it was about strategy. Now, it's only about money," said Jonathan Bond, co-chairman of Kirshenbaum Bond & Partners in New York. So in a trend-conscious industry, economizing is the new black. For instance, when Kirshenbaum Bond recently filmed a commercial for the Liberty Mutual Insurance Company, retelling the tale of the Trojan horse, "instead of building a massive set, we used miniatures," said Rob Feakins, vice chairman and executive creative director. 
That saved about $150,000, or about 10 percent of the budget for the commercial, he estimated. Also, when planning a campaign that calls for several commercials, "we try to 'gang up the shoots.' Do two a day," Mr. Feakins said. "And we always think of shooting them on one location with a minimum of crew moves, because the crew moves kill you." Some cost-conscious marketers are even turning over responsibility for agency relationships to procurement departments. "The corporate world, reacting to recessions and Wall Street pressures, is challenging the agencies," said Alan Krinsky, principal at Alan Krinsky Associates in New York, which advises agencies and advertisers on issues like procurement. "Accountability is still a gray area." The stock prices of the giant holding companies that own almost all the big agencies have been weak. The shares of the world's largest agency company in revenue, the Omnicom Group in New York, parent of BBDO, are down 8.1 percent from their 52-week high and 13.8 percent from their five-year peak. For the WPP Group, which owns agencies like Young & Rubicam and is the No. 2 agency company behind Omnicom, the share price is down 11.7 percent and 29.1 percent, respectively. The third-largest agency company, the Interpublic Group of Companies, which owns agencies like Deutsch and McCann Erickson, has suffered accounting problems that have led to an investigation by the Securities and Exchange Commission as well as the loss of large clients for creative and media-buying assignments. The company's shares are down 15.7 percent from their 52-week high and 73.7 percent from their five-year peak. "It's almost accepted that the model is broken and it's time for a new approach," said Carl Johnson, a longtime executive at traditional advertising agencies like TBWA Worldwide. He and four other high-profile refugees from mainstream agencies are now partners in a creatively focused New York boutique named Anomaly.
"No one comes to us for more of the same," Mr. Johnson said. "Our last resort is an ad, if we can't think of anything else." Anomaly works with the wireless licensing group of ESPN, part of the Walt Disney Company, on not only marketing but also "some product and content development," Mr. Johnson said. The agency shares in the revenue by keeping an equity stake in whatever is produced. "This way, we get paid for the quality of our output," he added, "not the quantity of our input." Anomaly is among a rash of boutiques that have started up to capitalize on the desire among marketers to do things differently - and the inability of many bigger agencies to accomplish that. In some instances, traditional agencies are diversifying, forming units to specialize in nontraditional tasks. The Kaplan Thaler Group in New York, for instance, opened a division called KTG Buzz to focus on, well, marketing that generates buzz. "Creativity used to be, 'Think inside the box.' Then it was, 'Think outside the box.' Now, there's no box," said Linda Kaplan Thaler, chief executive of Kaplan Thaler, part of the Publicis Groupe. BBDO responded to G.E.'s pressure by devoting more attention and resources to units like Atmosphere, specializing in interactive campaigns, playing down its decades-long concentration on producing big-budget television commercials. Under a new chief executive, Andrew Robertson, the agency even dismissed its most senior creative leader, replacing him with an executive from another agency known for a slick, successful series of long commercials on the Web, known as BMW Films, which won much acclaim - and revved up BMW sales. Ms. Hu of G.E. is pleased with BBDO's response. "I feel they're stepping up to the plate," she said, adding that Mr. Robertson "is trying very hard to change the direction of that agency."
For instance, she said, a top BBDO New York executive, Brett Shevack, now plays host to regular brainstorming sessions known as Project Inspire, where people from the agency, G.E. corporate and G.E. business units meet "to brainstorm a particular problem." The meetings "can sometimes lead to wild and wacky ideas that might not have been considered before," Ms. Hu said. "And that's a good thing." Online marketing at G.E. is by far the fastest-growing part of the ad budget, scheduled to increase 97 percent this year from 2004, Ms. Hu said. She declined to provide dollar amounts, citing company policy. And in research last fall to gauge response to a previous viral campaign, she added, 80 percent of respondents said the G.E. ads they saw online made them think of the company as "innovative," and 94 percent agreed the online ads made G.E. seem "more appealing." So besides the virtual seed, which is being tested this week by G.E. employees, there will be more unconventional campaigns to come, Ms. Hu said, including ads made available on cellphones and an online game, played with virtual windmills, to encourage energy conservation. Mr. Robertson said the changes he is making at BBDO for G.E. and other clients like FedEx, PepsiCo and Visa are wrenching but necessary. "It's getting easier and easier for consumers to switch from things that aren't engaging them to those that will," Mr. Robertson said. "And they are." "You can look at that and say, 'Oh, my God! The sky is falling,' " he added, "or you can look at it as a huge opportunity to create content for your clients that does engage."
From checker at panix.com Mon May 23 19:44:50 2005 From: checker at panix.com (Premise Checker) Date: Mon, 23 May 2005 15:44:50 -0400 (EDT) Subject: [Paleopsych] open Democracy: Becky Hogge: The Great Firewall of China Message-ID: Becky Hogge: The Great Firewall of China http://www.opendemocracy.net/media-edemocracy/china_internet_2524.jsp 5.5.20 Google is doing business with a communist China notorious for internet censorship. Not only techno-libertarians should worry, says Becky Hogge. In December 1993, talking to Time magazine, technologist and civil libertarian [62]John Gilmore created one of the first verses in internet lore: "The net interprets censorship as damage and routes around it." But according to a [63]report published by George Soros's Open Net Initiative ([64]ONI), the Chinese government are doing a great job of disproving this theory. On 11 May, [65]Google announced it would set up shop in the People's Republic by the end of 2005. What can this mean for the citizens of China, and the citizens of the internet? The Chinese effort to censor the internet is a feat of technology, legislation and manpower. According to the BBC, which is almost completely blocked within the "great firewall of China" (as it is known among techies), 50,000 different Chinese authorities do nothing but monitor traffic on the internet. No single law exists to permit this mass invasion of privacy and proscription of free speech. Rather, hundreds of articles in dozens of pieces of legislation work to obfuscate the mandate of the government to maintain political order through censorship. According to [66]Internet Filtering in China in 2004-2005: A Country Study, the most rigorous survey of Chinese internet filtering to date, China's censorship regime extends from the fat-pipe backbone to the street cyber-café.
Chinese communications infrastructure allows packets of data to be filtered at choke points designed into the network, while on the street liability for prohibited content is extended onto multiple parties - author, host, reader - to chilling effect. All this takes place under the watchful eye of machine and human censors, the latter often volunteers. The ramifications of this system, as the ONI's [67]John Palfrey stressed when he delivered the report to the US-China Economic and Security Review Commission in April, should be of concern to anyone who believes in participatory democracy. The ONI found that 60% of sites relating to opposition political parties were blocked, as were 90% of sites detailing the [68]Nine Commentaries, a series of columns about the Chinese Communist Party published by the Hong Kong-based [69]Epoch Times and associated by some with the banned spiritual movement [70]Falun Gong. The censorship does not end at the World Wide Web. New internet-based technologies, which looked to lend hope to free speech when ONI filed its last report on China in 2002, are also being targeted. Although email censorship is not as rampant as many (including the Chinese themselves) believe, blogs, discussion forums and bulletin boards have all been targeted through various measures of state control. What, then, of China's 94 million web surfers? One discussion thread at Slashdot, the well-respected and popular discussion forum for techno-libertarians, is telling. When a well-meaning westerner offered a list of links prefaced with "assuming that you can read Slashdot, here are a few web pages that your government would probably prefer you not to read", one poster, [71]Hung Wei Lo, responded: I have travelled to China many times and work with many H1-Bs [temporary workers from outside US] from all parts of China.
All of them are already quite knowledgeable about all the information provided in the links above, and most do not hesitate to engage in discussions about such topics over lunch. The fact that you feel all 1.6 billion Chinese are most certainly blind to these pieces of information is a direct result of years of indoctrination of Western (I'm assuming American) propaganda. Indeed, the recent anti-Japanese protests have been [72]cited by some as an example of how the Chinese people circumvent their state's diligent censorship regime, using networked technologies such as mobile text messages (SMS), instant messaging, emails, bulletin boards and blogs to communicate and organise. The argument here of course is that the authorities were ambivalent towards these protests - one blogger reports that the state sent its own SMS during the disturbances: "We ask the people to express your patriotic passion through the right channel, following the law and maintaining order." China will have to keep up with the slew of emerging technologies making untapped networked communication more sophisticated by the day: RSS feeds, social bookmarking systems like del.icio.us and Furl, and fledgling Voice over Internet Protocol ([73]VoIP, or telephony over the internet) packages such as Skype. Judging by the past record, it cannot be assumed that the state censorship machinery will not be able to meet these future challenges. What does this mean for the internet? As the authors of the ONI report point out, China has the opportunity to export its censorship technology and methodology to states such as Vietnam, North Korea, Uzbekistan and Kyrgyzstan, to whom it already acts as a regional internet access provider. Further, as the second largest market in the world, it is a natural attractor for global web firms.
The announcement that Google has secured a licence to operate in China has prompted many to ask how the US company will practise business there whilst staying true to its informal company motto, [74]"Don't be evil". Already Google has been accused of collaborating with the Chinese government by omitting from its Google News service links blocked by the state. If these two experts in internet traffic - Google in cataloguing it and China in censoring it - start working together, what can we expect? Will Google attempt to persuade the Chinese government to open up the free flow of information? Could the Chinese government force Google to hand over search logs and other identifiable information? It is not only repressive regimes that have an interest in the censorship of the internet. Technologies now used by the Chinese, like choke points for packet filtering, were advocated in the 1990s by rightsholder lobbies in the [75]National Information Infrastructure talks in the United States. And the acceptance of VoIP as a mainstream telephony solution has been slowed by the concerns of US and British security services that conversations cannot be [76]tapped. What the situation in China demonstrates to techno-libertarians is that they can no longer rely on John Gilmore's old maxim: from now on, the internet may need a little human help routing around the damage of censorship.

References
62. http://www.toad.com/gnu/
63. http://www.opennetinitiative.net/studies/china/ONI_China_Country_Study.pdf
64. http://www.opennetinitiative.net/
65. http://www.theregister.co.uk/2005/05/11/google_china/print.html
66. http://www.opennetinitiative.net/studies/china/
67. http://blogs.law.harvard.edu/palfrey/2005/04/14
68. http://english.epochtimes.com/jiuping.asp
69. http://www.rsf.org/article.php3?id_article=13769
70. http://en.wikipedia.org/wiki/Falun_Gong
71. http://slashdot.org/comments.pl?sid=149180&cid=12506552
72. http://news.bbc.co.uk/1/hi/world/asia-pacific/4496163.stm
73. http://www.fcc.gov/voip/
74. http://investor.google.com/conduct.html
75. http://en.wikipedia.org/wiki/NII
76. http://www.computerweekly.com/Article132467.htm

From checker at panix.com Mon May 23 19:45:02 2005 From: checker at panix.com (Premise Checker) Date: Mon, 23 May 2005 15:45:02 -0400 (EDT) Subject: [Paleopsych] LAT: Definitional Drift: Math Goes Postmodern Message-ID: Definitional Drift: Math Goes Postmodern http://www.latimes.com/news/opinion/commentary/la-oe-wertheim16may16,0,2648360,print.story By Margaret Wertheim Margaret Wertheim is the "Quark Soup" science columnist for LA Weekly and is working on a book about the role of imagination in theoretical physics. May 16, 2005 A baker knows when a loaf of bread is done and a builder knows when a house is finished. Yogi Berra told us "it ain't over till it's over," which implies that at some point it is over. But in mathematics things aren't so simple. Increasingly, mathematicians are confronting problems wherein it is not clear whether it will ever be over. People are now claiming proofs for two of the most famous problems in mathematics -- the Riemann Hypothesis and the Poincare Conjecture -- yet it is far from easy to tell whether either claim is valid. In the first case the purported proof is so long and the mathematics so obscure no one wants to spend the time checking through its hundreds of pages for fear they may be wasting their time. In the second case, a small army of experts has spent the last two years poring over the equations and still doesn't know whether they add up. In popular conception, mathematics is the ultimate resolvable discipline, immune to the epistemological murkiness that so bedevils other fields of knowledge in this relativistic age. Yet Philip Davis, emeritus professor of mathematics at Brown University, has pointed out recently that mathematics also is "a multi-semiotic enterprise" prone to ambiguity and definitional drift.
Earlier this year, Davis gave a lecture to the mathematics department at USC titled "How Do We Know When a Problem Is Solved?" Often, he told the audience, we cannot tell, for "the formulation and solution of problems change throughout history, throughout our own lifetimes, and even through our rereadings of texts." Part of the difficulty resides in the notion of what we mean by a solution, or as Davis put it: "What kind of answer will you accept?" Take, for instance, the task of trying to determine whether a very large number is prime -- that is, it cannot be split evenly into the product of any smaller components, except 1. (Six is the product of 2 by 3, so it is not prime; 7 has no smaller factors, so it is.) Determining primeness has huge practical consequences -- prime numbers are widely used in computer security codes, for instance -- yet when the number is large it can take an astronomical amount of computer time to determine its primeness unequivocally. Mathematicians have invented statistical methods that will give a probabilistic answer that will tell you, for instance, a given number is 99.99% certain to be prime. Is it a solution? Davis asked. Other problems can also be addressed by brute computational force, but many mathematicians feel intrinsically uncomfortable with this approach. Said Davis: "It is certainly not seen as an aesthetic solution." A case in point is the four-color map theorem, which famously asserts that any map can be colored with just four colors (no two adjoining sections may be the same color). The problem was first stated in 1853 and over the years a number of proofs have been given, all of which turned out to be wrong. In 1976, two mathematicians programmed a computer to exhaustively examine all the possible cases, determining that each case does indeed hold. Many mathematicians, however, have refused to accept this solution because it cannot be verified by hand. 
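The statistical primality methods Davis describes are exemplified by tests such as Miller-Rabin, which trade certainty for speed. A minimal sketch in Python (the function name and round count here are illustrative assumptions, not taken from the article):

```python
import random

def is_probable_prime(n, rounds=40):
    """Miller-Rabin test: False means n is certainly composite;
    True means n is prime with error probability below 4**-rounds."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13):      # quick trial division by small primes
        if n % p == 0:
            return n == p
    # Write n - 1 = d * 2**s with d odd.
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)   # random witness candidate
        x = pow(a, d, n)                 # fast modular exponentiation
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False                 # a proves n composite
    return True

print(is_probable_prime(2**89 - 1))  # True  (a 27-digit Mersenne prime)
print(is_probable_prime(2**89 + 1))  # False (divisible by 3)
```

Each random base that fails to expose compositeness cuts the chance of error by at least a factor of four, so 40 rounds leave a composite masquerading as prime with probability below 4^-40 - the kind of "99.99% certain" answer the article describes, pushed much further, yet still short of a proof.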
In 1996, another group came up with a different (more checkable) computer-assisted proof, and in December this new proof was verified by yet another program. Still, there are skeptics who hanker after a fully human proof. Both the Poincare Conjecture (which seeks to explain the geometry of three-dimensional spheres) and the Riemann Hypothesis (which deals with prime numbers) are among seven leading problems chosen by the Clay Mathematics Institute for million-dollar prizes. The institute has its own rules for determining whether any one of these problems has been solved and hence whether the prize should be awarded. Critically, the decision is made by a committee, which, Davis said, "comes close to the assertion that mathematics is a socially constructed enterprise." Another of the institute's million-dollar problems is to find solutions to the Navier-Stokes equations that describe the flow of fluids. Because these equations are involved in aerodynamic drag they have immense importance to the aerospace and automotive industries. Yacht designers must also wrestle with these legendarily difficult equations. Over lunch, Davis told a story about yacht racing. He had recently talked to an applied mathematician who helped design a yacht that won the America's Cup. This yachtsman couldn't have cared less if the Navier-Stokes equations were solved; what mattered to him was that, practically speaking, he could model the equations on his computer and predict how water would flow around his hull. "Proofs," said Davis, "are just one of the tools that mathematicians now use." We may never fully solve the Navier-Stokes equations, but according to Davis it will not matter. Like so many other fields, mathematics is becoming less about some Platonic ideal of ultimate answers, and more a functional project of computational simulation and communal negotiation. Dare we say it: Math is becoming postmodern. 
From checker at panix.com Mon May 23 19:45:22 2005 From: checker at panix.com (Premise Checker) Date: Mon, 23 May 2005 15:45:22 -0400 (EDT) Subject: [Paleopsych] This Is London: The great gender gap Message-ID: The great gender gap http://www.thisislondon.co.uk/insiders/guides/articles/18664137?version=1 The current trend shows women prefer reading about love and family while men opt for sex and violence By David Sexton, Evening Standard 16 May 2005 Some people like to think that men and women are becoming increasingly alike in our unprecedentedly equal society. A few even try to maintain that nearly all gender differences are acquired rather than innate. A quick way to realise neither claim is true is to look at what men and women read for pleasure. Let us take the two nationwide bestselling mass market fiction titles, after the dreadful productions of Dan Brown and Patricia Cornwell have been discounted. Love and Devotion by Erica James (Orion, 536 pages, £6.99) sold 18,188 copies last week; The Increment by Chris Ryan (Arrow, 440 pages, £6.99) was close behind with 17,687 copies. Both books are competent, even enjoyable examples of their different genres. Having just come out in paperback, they will be picked up at airports everywhere this summer to serve on the beach, being undemanding yet entertaining enough. They are written for readerships as automatically and emphatically divided by gender as public toilets. Reading them together doesn't just suggest that men and women are different, it makes it seem extraordinary that the sexes should be able to communicate at all, let alone to the extent of mating successfully. Not that that's commonly seen, these days, in the longer term. Chris Ryan is an alumnus of one of our most commercial creative writing schools, the Bravo Two Zero SAS patrol which came to grief in the First Gulf War. Having told his own tale in The One That Got Away, he has produced a series of military thrillers. 
Matt Browning, a super-tough soldier, has been thrown out of the SAS, having failed the test for the ultradeadly eight-man unit of cold-blooded assassins called "The Increment". Now running a bar on the Costa del Sol, Matt is blackmailed by the secret services into coming out of retirement for "one last mission", on behalf of a sinister French pharmaceutical billionaire, resident in London, who crosses the Channel each weekend in his own private armoured train, pictured looming up excitingly on the cover of the paperback. Soon Matt realises that he has been double-crossed and is being hunted down by the very same SAS unit to which he briefly belonged. In a grand finale, Matt assaults the villain-laden armoured train, blazing away with grenades and machine guns, although there is also some hand-to-hand work with Napoleonic sabres, collected by the frog plutocrat. There are women in The Increment, or as one officer dubs them, muffins. There is Matt's old girlfriend Gill who wants to settle down and have kids. She is tortured to death by the Increment, making Matt truly furious. "Now I take my revenge in the reddest blood," he vows. Then there is Orlena: "She had the classical, sculpted beauty of an Eastern European. Her skin was as white as snow ... Her lips were thick and red ..." Orlena orders Matt to make love to her as if it was his "last time on earth". Later he is obliged to shoot her but, the sporting type, she bears him no grudge and reappears later to save his life. Meanwhile, Matt has been solaced by Eleanor, the sister of a murdered comrade: "Sex and danger, he reflected, as his arms pinned her back down on the mattress, pushing her down. One inevitably leads to the other." What we do not find in The Increment are scenes involving children or families. Well, one 12-year-old girl appears briefly, being tortured by Matt to extract information from her father. 
But otherwise this entire element of life, highly rated by at least half the population, has been quietly dropped. Can it be that such joys, or, as it might be, ties and burdens, are not what men read about for relaxation, given the choice? Instead, a profusion of hardware is lovingly detailed, from weaponry - the AN-49 has a much faster rate of fire than the AK-47, as well as greater accuracy, due to its redesigned recoil - to vehicles - there's a harrowing moment when, being on the run, Matt has to trade down from his Porsche Boxster to an 11-year-old Ford Escort - culminating of course in that very special train. And the basic human action depicted in The Increment is not nurture, affection, the feeling of community or love but establishing dominance by killing your enemies. This may not be what most men actually do day to day, but we have here proof that it is a world many men find appealing to inhabit in fantasy. Erica James has written 10 or so novels, just like Chris Ryan, and built up a similarly sized following, being three times shortlisted for the Parker Romantic Novel of the Year Award. There is some violence in Love and Devotion but not much. Two brothers have a brief fistfight outside a sherry party and a grandad smashes up his own pergola and rose garden with a spade, while having a mental breakdown. But there are no automatic weapons whatsoever. Harriet Swift is 32, single, childless, working in IT. When her sister and her husband are killed in a car crash, Harriet gives up her job, her flat in Oxford and her dull boyfriend to go back to the family home in Cheshire to look after her orphaned niece and nephew, Carrie and Joel, nine and four. Will she find a man to share this challenge with her? Perhaps the likeable, dependable but somehow not very sexy bookshop owner, Miles, whom she has known since childhood? Surely not his brother, the fiendishly cruel bi-sexual Cambridge English don Dominic, despite his dashing hat and ebony crucifix? 
Might it just be Will, the 46-year-old antique dealer, slim, handsome, witty, kind, divorced but extraordinarily supportive and tolerant to his two student daughters? A clue, perhaps, may be found in "the way Will could make her weak with longing, and the way he could make her climax for ever and for ever." But Love and Devotion is far more densely fleshed out than this summary of the central plot line suggests. The whole social world Harriet inhabits, from her workplace to the children's schools to the neighbours, is patiently detailed. All of the affairs and divorces and illnesses are carefully drawn out across the generations. The sentences groan under the weight of keeping all these relationships in play. "While Mum was off paying the bill and Gemma was in the loo, Suzie watched her grandmother dig around inside her purse for a tip for their waitress." "Will knew from Gemma that Suzie was bored and missing her friends from university." Every action derives its meaning from its social context. There are moments when the book feels as densely related as a real family gathering, full of in-laws, cousins, nephews, uncles and aunts - and about as appealing to rough male tastes. But Erica James's overwhelmingly female readers evidently relish such ample peopling of the fictional realm within which Harriet seeks for romantic fulfilment, and, contrariwise, don't pine for Boxsters and AN-49s. Matt, generally to be found spraying the room with bullets, would not fit well into the world of Love and Devotion; nor would Harriet cope impressively with the deadly assassins of The Increment, despite her cute beret. These worlds can never meet. Somehow their readers must, though, as they lie side by side on the beach. Or more likely back to back. From checker at panix.com Mon May 23 19:45:36 2005 From: checker at panix.com (Premise Checker) Date: Mon, 23 May 2005 15:45:36 -0400 (EDT) Subject: [Paleopsych] Guardian: (Blackburn) Whose truth? Message-ID: Whose truth? 
http://books.guardian.co.uk/print/0,3858,5197825-110738,00.html John Banville follows Simon Blackburn on the ultimate philosopher's quest in Truth John Banville Saturday May 21, 2005 Truth: A Guide for the Perplexed by Simon Blackburn 210pp, Allen Lane, £12.99 In his keynote campaign speech delivered prior to the Vatican election, the man who would be pope, Cardinal Ratzinger, startled the world, or that section of it that could be bothered to listen to him, with a tirade against what he saw as a modern-day surrender to relativism. We might have expected and certainly would not have been surprised by an abjuration of concupiscence and lust, say - although it is true that as Benedict XVI he did fulminate against the "filth" afflicting even his own church, and when a Catholic cleric uses such words we know he is talking about sex - but on the face of it relativism might have seemed pretty low down in the peccancy order. Not so, and certainly not so if, as the cardinal did, you have spent the past few decades heading up what used to be known as the Inquisition. The point for Papa Ratzi, as the friskier outlets of the Italian media lost no time in dubbing him, is that relativism, however wishy-washy and New Ageist it may seem to the rest of us, strikes at the very foundation stone of the church, which is the conviction that there exist truths which are eternal, unchallengeable and verified by faith - in a word, absolute. Without absolute truths, and the reiterated insistence of their enduring reality, the Catholic church would have no basis, a fact to which Ratzinger and, we assume, the vast majority of cardinals, were acutely alive when they sealed themselves into the Sistine Chapel to choose a successor to their dead capo di capi. But are there absolute truths, outside the cloistered imaginations of the Princes of the Church and their more submissive subjects? 
One is tempted to adapt Kafka's remark about hope, and say that no doubt there is truth, an infinite amount of truth - but not for us. Simon Blackburn is professor of philosophy at Cambridge, and the author of fine popularising books such as The Oxford Dictionary of Philosophy and Being Good: A Short Introduction to Ethics. He is learned, astute, admirably sensible, and possesses an elegant and clear prose style. Truth is based on the texts of the Gifford lectures delivered last year at the University of Glasgow, and on other, occasional lectures and articles written over the past four or five years. One would never use the word ragbag to describe a work by such a graceful synthesiser, but some parts of the book have the air of having been shoehorned in, for instance a short, closing chapter defending David Hume's philosophical cosmopolitanism against attacks by the likes of Donald Davidson, and part of another chapter spiritedly repudiating what might be termed Richard Rorty's radical pragmatism; both these excursuses have the air of being frolics of their own. Blackburn opens his introduction with a rousing call to arms, which might be a preparation for an assault on the likes of Rorty and other "fuzzy" - the adjective is Rorty's own - postmodernist philosophers and pundits, and which would likely be much to the taste of the latest Vicar of Christ: "There are real standards. We must fight soggy nihilism, scepticism and cynicism. We must not believe that anything goes. We must not believe that all opinion is ideology, that reason is only power, that there is no truth to prevail. Without defences against postmodern irony and cynicism, multiculturalism and relativism, we will all go to hell in a handbasket." Immediately, however, a caveat is entered - indeed, a broad bannerful of caveats is unfurled. Many, Blackburn allows, will regard our beliefs and insistences as mere noise, the crackling of thorns under an empty pot. 
"There are people," he writes, "who are not impressed by our conviction, or by our pride and our stately deportment." These will include relativists, postmodernists, subjectivists, pragmatists. However, it is Blackburn's intention, he tells us, not to join battle in the philosophy wars but to tread his way delicately among the warring parties, with us at his heels and under his protection. His book, he writes, is "about a war of ideas and attitudes". He does not state his own position directly - as why should he? - but it may be helpful to the reader to know that he is of the quasi-realist school. If realism holds that there is a world independent of mind and our judgments are based on it, quasi-realism takes the Humean position that our judgments may have no independent object yet that they behave as if they had. This may sound like the have-your-cake-and-eat-it as against the slash-and-burn schools of philosophy, but it is a perfectly sensible position to occupy. He opens with the melancholy observation that for the classical sceptic "a clash of countervailing arguments ... led to peaceful suspension of belief, whereas in our own times it is seen more as a licence for people to believe what they like". He illustrates this decline much further on in the book, in the pages devoted to opposing the all-accommodating Rorty. For Rorty, language is for "coping, not copying", and he sees, as he has written, "the employment of words as the use of tools to deal with the environment, rather than as an attempt to represent the intrinsic nature of that environment ... " Blackburn links Rorty with William James, whom Blackburn had earlier accused of flirting with the danger of "abolishing the distinction between wishful thinking and accuracy", a danger into which Rorty, in Blackburn's reading, frequently strays with unangelic fearlessness. 
Rorty is quoted on feminist accounts of the difference between men and women: "The question of whether these differences were there (huddled together deep down within the entity, waiting to be brought to light by deconstructing excavators), or are only there in the entity after the feminist has finished reshaping the entity into a social construct nearer her heart's desire, seems to me of no interest whatever." This displays surely a breathtakingly cavalier attitude to the possible facts of the matter, and sounds remarkably like a Humpty Dumptyan determination that words shall mean what the speaker commands them to mean. The sceptical argument that undercuts the quest for truth Blackburn christens the "variation of subjectivities", and traces it back to the codification drawn up by Sextus Empiricus in the second or third centuries AD. Sextus posits various modes of scepticism, all of which, Blackburn writes, "try to show that things appear differently to different sensibilities, that there is no neutral or authoritative decision procedure awarding victory to just one of these; hence that we should suspend judgment about things themselves". It is the elusiveness of the "thing itself", the Kantian Ding an sich, which continues to give philosophers a headache. What may we know, and by what means may we know it? At the heart of the book these questions falter almost into silence before the great ice wall that is Nietzsche, the "arch debunker", as Blackburn's chapter heading has it. For Nietzsche, "facts is precisely what there is not, only interpretations", and "Truth is the kind of error without which a certain species of life could not live." In The Will to Power he states the thing baldly: "There exists neither 'spirit', nor reason, nor thinking, nor consciousness, nor soul, nor will, nor truth: all are fictions that are of no use." We immediately ask, No use to whom? and, If not these fictions, what is there that we can use? 
But Nietzsche spurns all our querulous wheedlings, and wonders how in our "constant fluttering around the single flame of vanity ... an honest and pure urge for truth could have arisen among men". "They are deeply immersed in illusions and dream images; their eye glides only over the surface of things and sees 'forms'; their feeling nowhere leads into truth, but contents itself with the reception of stimuli, playing, as it were, a game of blind man's buff on the backs of things." Despite its subtitle, Truth is less a guide for the perplexed than a guided tour through the philosophical perplexities in which, despite three millennia of hard thinking, man is still mired. Time it is that defeats all our attempts to fix reality, facts, truth. Perhaps it would be possible to glimpse the really existing truth of things if we could halt the onward rush of time, but we cannot, hence we are left floundering in a Heraclitean flux, a blurred immanence which we call reality. For all the excitement of the chase after truth, perhaps we would do best to follow the example of the ancient sceptics, for whom, Blackburn writes, "it was an admirable consequence of their scepticism that they lost conviction, lost enthusiasm as it were for holding one opinion rather than another. With epoche or suspension of judgment, came the desired ataraxia or tranquillity." One suspects Professor Blackburn would deplore any such retreat into quietistic bliss, and would instead admonish us with the title of another of his books: Think. John Banville's Prague Pictures is published by Bloomsbury From checker at panix.com Mon May 23 19:45:46 2005 From: checker at panix.com (Premise Checker) Date: Mon, 23 May 2005 15:45:46 -0400 (EDT) Subject: [Paleopsych] John Ralston Saul: Globalism has had its day, eh? Message-ID: John Ralston Saul: Globalism has had its day, eh? 
http://www.thestar.com/NASApp/cs/ContentServer?pagename=thestar/Layout/Article_Type1&c=Article&cid=1116671137159&call_pageid=1011789353817&col=1011789353403&DPL=IvsNDS/7ChAX&tacodalogin=yes May 22, 2005. 01:00 AM Globalism has had its day, eh? John Ralston Saul calls us to reject consumerism; Philip Marchand is none the wiser PHILIP MARCHAND _________________________________________________________________ The Collapse of Globalism: And the Reinvention of the World by John Ralston Saul Viking Canada, 309 pages, $36 _________________________________________________________________ Near the beginning of his new book, The Collapse of Globalism: And the Reinvention of the World, John Ralston Saul quotes the 18th century philosopher Montesquieu: "Where there is commerce, people are better behaved." The remark is reminiscent of a similar comment by Samuel Johnson: "A man is seldom so innocently employed as when he is getting money." I'm not sure Saul entirely believes either man. I have a sneaking suspicion that Saul believes people are truly well behaved, and innocently employed, only when reading books such as The Collapse of Globalism. But Saul is not against commerce or even capitalism, as such -- throughout his book, in fact, he continually refers favourably to capitalists as opposed to "managers." What he is really opposed to is "globalism" -- a philosophy of international economic advancement through free trade, privatization of government enterprises, de-regulation, low taxes for business, high-tech agriculture and so on. Saul's book argues that globalism has had its day, chastened by failure to deliver the goods in Latin America, and by countries such as New Zealand and Malaysia, which have reversed earlier globalist policies. Before examining this argument, however, it is worthwhile to remember why globalism held such appeal in the '80s and '90s. V.S. 
Naipaul's 1964 book, An Area of Darkness, described an India not only mired in terrible poverty but enchanted by a mystique of poverty, crippled by a caste system, sunk under the weight of a government bureaucracy embodied by an "Inspector of Forms and Stationery" Naipaul met on an Indian railway. He also described village tailors and other tradesmen who had not the remotest concept of service or quality of goods. It was globalism that promised to make that tailor behave better through real competition and real commerce. Saul excuses the stagnant, bureaucratic India portrayed by Naipaul: "What Western critics forget is that this relentlessly stable approach permitted Indians to catch their breath after the disorder and violence of independence." You can believe that if you want to. India is now booming, in part because much of the bureaucracy has been dismantled. In his most recent book about India, published in 1990, Naipaul was already noting signs of economic vitality and a new social freedom, complete with improved agriculture and housing. Saul, again, would not argue against economic progress or a basically free market economy, but he insists that India's new economic vitality is not due to pure globalism. India has unleashed entrepreneurial energies in its own distinctive fashion, complete with government directives, and the same is true for China. What Saul argues against is globalism's central thesis that "civilization should be seen through economics, and economics alone." Saul also opposes the corollary of that thesis, which is a belief in the unstoppable progress of technology. In so doing, Saul takes his place in a long line of fundamentally conservative critics of political economy, like the Victorian critic John Ruskin, who insisted that there was no wealth but life. "Is it possible that a sizable portion of our growth in trade relates not to a revival of capitalism, but to a decline into consumerism?" Saul asks rhetorically. 
"Note how many of the leading modern economic historians equate consumerism not with wealth creation and societal growth, but with inflation and the decline of citizenship. Why? Because there is a constant surplus of goods that relate neither to structural investment nor to a sense of economic value, let alone to societal value." This is classic Ruskin. "Wise consumption is a far more difficult art than wise production," the latter proclaimed. "Twenty people can gain money for one who can use it." So I think I know what Saul means in that paragraph. Yet there's a nagging lack of clarity. I'm never sure what the word "consumerism" really signifies. The only definition of consumerism I can think of is people buying stuff intellectuals have no use for. Socks and underwear, fine, but video games -- everybody knows they have no economic or social value. I guess that's what consumerism means. But then what causes a "decline into consumerism?" Many of Saul's rhetorical techniques leave me mildly puzzled. There is his habit of setting up good/bad dichotomies that are never completely explained -- "management" versus "leadership," for example, or "efficient" people versus "effective" people. He even differentiates between ethics and morality. "Those who preach Globalization can't seem to tell the difference between ethics and morality," he says impatiently. "Through ethics the health of the public good can be measured." On the other hand, "morality is a weapon of religious and social righteousness." This latter definition of morality seems to owe something to Nietzsche. Anyway, it's wrong. This is part of a larger problem of Saul's prose style, which I might almost say is moralistic, but which certainly takes an abstract and magisterial tone. What Saul wants to get across is very complex, and requires a very patient attempt to explain terms and give concrete examples and underline the logic of argument, and he just won't do that. 
The result is writing that is very hard to digest: "The technocratic approach to global warming simply ignores reality and replaces it with competing interests and an idea of proof that, because it is projected toward the past not the future, denies the central human capacity of prudence." Say again? To be fair, Saul did, in the previous sentence, define prudence as "the capacity to live in reality," and that helps a bit. The pity is that we all want some understanding of the world economic situation, and a lot of us who are a bit hazy on what hedge funds actually do are nonetheless prepared to grapple with the intricacies of international finance if clearly presented. For what it's worth, I agree with Saul on a number of his key points. I agree that the sheer size of large corporations has a stultifying effect on employees and managers. I am heartened and intrigued by his portrait of a country such as Italy: "About 23 per cent of Italians work in companies with fewer than 10 employees," Saul writes. "The equivalent figure in Britain is 7 per cent, in the United States 3 per cent." This is part of the reason why I'd rather be living in the town of Assisi right now than in Toronto, Liverpool or Houston. I also agree that there's something radically wrong with our public finances when government depends so hugely on gambling revenues, a form of taxation falling largely on the poorest and most vulnerable citizens. And I agree, of course, that there's more to civilization than gross domestic product (GDP). Unfortunately, I don't feel I have any deeper understanding of the post-globalism world than I did before reading this book. _______________________________________________________________ Toronto Star literary critic Philip Marchand appears weekly. 
From checker at panix.com Mon May 23 19:46:29 2005 From: checker at panix.com (Premise Checker) Date: Mon, 23 May 2005 15:46:29 -0400 (EDT) Subject: [Paleopsych] Science Week: Einstein, Lorentz, and the Ether Message-ID: History of Science: Einstein, Lorentz, and the Ether http://scienceweek.com/2005/sa050304-1.htm The following points are made by John Stachel (Nature 2005 433:215): 1) During the 19th century, the mechanistic world-view -- based on Isaac Newton's formulation in the Principia (1687) of the kinematics and dynamics of corpuscles of matter, and crowned by his stunningly successful theory of gravitation -- was challenged first by the optics, then by the electrodynamics of moving bodies. By the mid 1800s Newton's corpuscular theory of light was no longer tenable. To explain Snell's law of refraction, this theory assumed that light corpuscles speed up on encountering a medium of higher refractive index. But in 1849, Leon Foucault (1819-1868) and Hippolyte Fizeau (1819-1896) showed that, in fact, light slowed down, as predicted by the rival wave theory espoused by Newton's contemporary Christiaan Huygens (1629-1695). The problem now was to fit the wave theory of light into the newtonian picture of the world. 2) Indeed, the ether -- the medium through which light waves were assumed to propagate in the absence of ordinary, ponderable matter -- seemed to provide a physical embodiment of Newton's absolute space. But elucidating the relation between ether and ponderable matter presented grave problems: did moving matter drag the ether with it -- either totally or partially -- or did the ether remain immobile? It proved impossible to reconcile the consequences of any of these hypotheses with all the experimental results on the optics of moving bodies. By the last third of the nineteenth century, many physicists were acutely aware of this problem. 
3) By 1865, James Clerk Maxwell (1831-1879) had demonstrated that light could be interpreted as wave-like oscillations of the electric and magnetic fields, obeying what we now call the Maxwell equations for these fields. It was realized that the optical problems were only a special case of similar problems in reconciling the electrodynamics of moving bodies with newtonian kinematics and dynamics. Towards the end of the century, however, Hendrik Antoon Lorentz (1853-1928) seemed to overcome all these problems through his interpretation of Maxwell's equations. Lorentz assumed that the electromagnetic ether is entirely immobile, in which case there would be no dragging of the ether. 4) Although in newtonian mechanics it is impossible to distinguish any preferred inertial frame (this result is often referred to as the galileian principle of relativity), at first the situation seemed different for electrodynamics and optics. The rest frame of the ether provided a preferred inertial frame, and motion through it should have been detectable. Yet all attempts to detect the translational motion of the Earth through the ether by means of optical, electrical or magnetic effects consistently failed. Lorentz succeeded in explaining why: according to his theory, no such effect should be detectable by any experiment sensitive to first order in (v/c), where v is the speed of the moving object through the ether and c is the speed of light in that medium. Until the 1880s, no experiment with greater sensitivity had been performed, and Lorentz's explanation of the failure of all previous experiments was a crowning achievement of his theory. 5) Newton's mechanics now seemed to have successfully met the challenge of optics and electrodynamics. But the seeds of its downfall had already been planted. Lorentz's explanation led him to introduce a transformation from newtonian absolute time to a new time variable in each inertial frame moving through the ether. 
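In modern notation (my own gloss; the formula itself is not quoted in the article), this first-order "local time" can be written

$$ t' = t - \frac{v\,x}{c^{2}} $$

where $t$ is the absolute (ether-frame) time, $v$ the frame's speed through the ether, $x$ the position along the direction of motion, and $c$ the speed of light. Because $t'$ depends on $x$, it differs from absolute time by an amount that varies from point to point within the moving frame.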
As the relation between absolute time and this time varied from place to place in each inertial frame, Lorentz called this new variable the "local time" of that frame, regarding the local time as a purely formal expression. But Henri Poincare (1854-1912), the great mathematician who concerned himself extensively with problems of physics, was able to give a physical interpretation of this time variable within the context of newtonian kinematics: it is the time that clocks at rest in a frame moving through the ether would read if they were synchronized using light signals, without taking into account the motion of that frame. This was an important hint that the problems of the electrodynamics and optics of moving bodies were connected with the concept of time. But it was Einstein who made the final break with the concept of absolute time by asserting that the local time of any inertial frame is as physically meaningful as that of any other, because there is no absolute time with which they can be compared.[1-5] References (abridged): 1. Einstein, A. Ann. Phys. (Leipz.) 17, 132-148 (1905) 2. Einstein, A. Ann. Phys. (Leipz.) 17, 549-560 (1905) 3. Einstein, A. Ann. Phys. (Leipz.) 17, 891-921 (1905) 4. Einstein, A. Ann. Phys. (Leipz.) 18, 639-641 (1905) 5. Einstein, A. Ann. Phys. (Leipz.) 19, 289-306 (1906) Nature http://www.nature.com/nature -------------------------------- Related Material: EINSTEIN ON PHYSICS AND REALITY The following points are made by A. Einstein and L. Infeld (citation below): 1) What are the general conclusions which can be drawn from the development of physics? Science is not just a collection of laws, a catalogue of unrelated facts. It is a creation of the human mind, with its freely invented ideas and concepts. Physical theories try to form a picture of reality and to establish its connection with the wide world of sense impressions. Thus the only justification for our mental structures is whether and in what way our theories form such a link. 
2) We have seen new realities created by the advance of physics. But this chain of creation can be traced back far beyond the starting point of physics. One of the most primitive concepts is that of an object. The concepts of a tree, a horse, any material body, are creations gained on the basis of experience, though the impressions from which they arise are primitive in comparison with the world of physical phenomena. A cat teasing a mouse also creates, by thought, its own primitive reality. The fact that the cat reacts in a similar way toward any mouse it meets shows that it forms concepts and theories which are its guide through its own world of sense impressions. 3) "Three trees" is something different from "two trees." Again "two trees" is different from "two stones." The concepts of the pure numbers 2, 3, 4..., freed from the objects from which they arose, are creations of the thinking mind which describe the reality of our world. 4) The psychological subjective feeling of time enables us to order our impressions, to state that one event precedes another. But to connect every instant of time with a number, by the use of a clock, to regard time as a one-dimensional continuum, is already an invention. So also are the concepts of Euclidean and non-Euclidean geometry, and our space understood as a three-dimensional continuum. 5) Physics really began with the invention of mass, force, and an inertial system. These concepts are all free inventions. They led to the formulation of the mechanical point of view. For the physicist of the early 19th century, the reality of our outer world consisted of particles with simple forces acting between them and depending only on the distance. He tried to retain as long as possible his belief that he would succeed in explaining all events in nature by these fundamental concepts of reality. 
The difficulties connected with the deflection of the magnetic needle, the difficulties connected with the structure of the ether, induced us to create a more subtle reality. The important invention of the electromagnetic field appears. A courageous scientific imagination was needed to realize fully that not the behavior of bodies, but the behavior of something between them -- that is, the field -- may be essential for ordering and understanding events. 6) Later developments both destroyed old concepts and created new ones. Absolute time and the inertial coordinate system were abandoned by the relativity theory. The background for all events was no longer the one-dimensional time and the three-dimensional space continuum, but the four-dimensional time-space continuum, another free invention, with new transformation properties. The inertial coordinate system was no longer needed. Every coordinate system is equally suited for the description of events in nature. 7) The quantum theory again created new and essential features of our reality. Discontinuity replaced continuity. Instead of laws governing individuals, probability laws appeared. 8) The reality created by modern physics is, indeed, far removed from the reality of the early days. But the aim of every physical theory still remains the same. With the help of physical theories we try to find our way through the maze of observed facts, to order and understand the world of our sense impressions. We want the observed facts to follow logically from our concept of reality. Without the belief that it is possible to grasp the reality with our theoretical constructions, without the belief in the inner harmony of our world, there could be no science. This belief is and always will remain the fundamental motive for all scientific creation. 
Throughout all our efforts, in every dramatic struggle between old and new views, we recognize the eternal longing for understanding, the ever-firm belief in the harmony of our world, continually strengthened by the increasing obstacles to comprehension. Adapted from: The Evolution of Physics: From Early Concepts to Relativity and Quanta. A. Einstein and L. Infeld. Simon and Schuster 1938, p.254. -------------------------------- Related Material: THE YEAR 1905: EINSTEIN'S ANNUS MIRABILIS The following points are made by Arthur I. Miller (citation below): 1) By the spring of 1905, the 26-year-old Einstein had decided that physicists were "out of their depth". From calculations based on Planck's radiation law, Einstein drew the astounding "general conclusion" that light can be a particle and a wave, and in fact both at once, a wave/particle duality. Therefore the electromagnetic world-picture could not succeed, because Lorentz's theory could represent radiation, or light, only as a wave, and so could never provide a way to explain how the electron's mass is generated by its own radiation. 2) Whereas Planck had discovered certain peculiarities about the energy of radiation, Einstein set out to explore the structure of radiation itself. Einstein's particles of light differed fundamentally from Newton's in ways that even he did not yet fully realize. Around the third week of May 1905, Einstein sent his friend Habicht what are surely some of the greatest understatements in the history of science. He wrote that he had only some "inconsequential babble" for his friend, whom he lambasted for neither writing nor visiting him during Easter: "So what are you up to, you frozen whale, you smoked, dried, canned piece of soul... I promise you four papers." 3) The first paper is the light quantum paper that Einstein referred to as "very revolutionary". The second suggested a means to measure the size of atoms using diffusion and viscosity of liquids. 
The third paper explored Brownian motion using methods of the molecular theory of heat. Einstein wrote: "The fourth paper is only a draft at this point, and is an electrodynamics of moving bodies which employs a modification of the theory of space and time; the purely kinematic part of this paper will surely interest you." 4) What is so incredible about this outburst of creativity is that by late May two papers were completed and the third was in draft form. [Editor's note: The fourth paper, the so-called relativity paper, was completed a few weeks later in June 1905.] Adapted from: Arthur I. Miller: Einstein, Picasso: Space, Time, and the Beauty That Causes Havoc. Basic Books, New York 2001, p.189. From checker at panix.com Mon May 23 19:46:59 2005 From: checker at panix.com (Premise Checker) Date: Mon, 23 May 2005 15:46:59 -0400 (EDT) Subject: [Paleopsych] SW: On Conservation of Mass Message-ID: Theoretical Physics: On Conservation of Mass http://scienceweek.com/2005/sa050107-1.htm The following points are made by Frank Wilczek (Physics Today 2004 December): 1) Is the conservation of mass as used in classical mechanics a consequence of the conservation of energy in special relativity? Superficially, the case might appear straightforward. In special relativity we learn that the mass of a body is its energy at rest divided by the speed of light squared [m = E/c^(2)]; and for slowly moving bodies, it is approximately that. Since energy is a conserved quantity, this equation appears to supply an adequate candidate, E/c^(2), to fill the role of mass in the "culture of force". 2) However, that reasoning will not withstand scrutiny. The gap in its logic becomes evident when we consider how we routinely treat reactions or decays involving elementary particles. To determine the possible motions, we must explicitly specify the mass of each particle coming in and of each particle going out. 
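Wilczek's point can be made concrete with a standard textbook case not discussed in the article: the two-body decay of a charged pion at rest, pi+ -> mu+ + nu. The short Python sketch below (an illustration, not the author's; masses are the usual PDG values in MeV, units with c = 1) shows that relativistic energy and momentum are conserved while the sum of the rest masses going out is smaller than the mass coming in.

```python
# Charged-pion decay, pi+ -> mu+ + nu, evaluated in the pion rest frame.
# Illustrative sketch; rest masses in MeV (PDG values), units with c = 1.
m_pi = 139.570  # pion rest mass
m_mu = 105.658  # muon rest mass
m_nu = 0.0      # neutrino mass, negligible at this precision

# Two-body decay kinematics, derived from conservation of relativistic
# energy and momentum: both outgoing particles carry momentum p.
p = (m_pi**2 - m_mu**2) / (2 * m_pi)     # outgoing momentum, ~29.79 MeV
E_mu = (m_pi**2 + m_mu**2) / (2 * m_pi)  # muon total energy, ~109.78 MeV
E_nu = p                                 # massless neutrino: E = p

print(f"mass in:             {m_pi:8.3f} MeV")
print(f"sum of masses out:   {m_mu + m_nu:8.3f} MeV")  # mass is NOT conserved
print(f"total energy out:    {E_mu + E_nu:8.3f} MeV")  # energy IS conserved
print(f"muon kinetic energy: {E_mu - m_mu:8.3f} MeV")  # the 'bulge' E - mc^2
```

The last line is the kinetic part E - mc^2 that the article says mechanics wants to focus on; here it is a few MeV against a mass-energy of over a hundred, which is why treating mass as separately conserved works so well for slowly moving bodies.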
Mass is a property of isolated particles, whose masses are intrinsic properties -- that is, all protons have one mass, all electrons have another, and so on. (For experts: "Mass" labels irreducible representations of the Poincare group.) There is no separate principle of mass conservation. Rather, the energies and momenta of such particles are given in terms of their masses and velocities, by well-known formulas, and we constrain the motion by imposing conservation of energy and momentum. In general, it is simply not true that the sum of the masses of what goes in is the same as the sum of the masses of what goes out. 3) Of course when everything is slowly moving, then mass does reduce to approximately E/c^(2). It might therefore appear as if the problem, that mass as such is not conserved, can be swept under the rug, for only inconspicuous (small and slowly moving) bulges betray it. The trouble is that as we develop mechanics, we want to focus on those bulges. That is, we want to use conservation of energy again, subtracting off the mass-energy exactly (or rather, in practice, ignoring it) and keeping only the kinetic part E - mc^(2) ~= mv^(2)/2. But you can't squeeze two conservation laws (for mass and nonrelativistic energy) out of one (for relativistic energy) honestly. Ascribing conservation of mass to its approximate equality with E/c^(2) begs an essential question: Why, in a wide variety of circumstances, is mass-energy accurately walled off, and not convertible into other forms of energy? 4) To explain why most of the energy of ordinary matter is accurately locked up as mass, we must first appeal to some basic properties of nuclei, where almost all the mass resides. The crucial properties of nuclei are persistence and dynamical isolation. The persistence of individual nuclei is a consequence of baryon number and electric charge conservation, and the properties of nuclear forces, which result in a spectrum of quasi-stable isotopes. 
The physical separation of nuclei and their mutual electrostatic repulsion -- Coulomb barriers -- guarantee their approximate dynamical isolation. That approximate dynamical isolation is rendered completely effective by the substantial energy gaps between the ground state of a nucleus and its excited states. Since the internal energy of a nucleus cannot change by a little bit, in response to small perturbations it does not change at all. 5) Because the overwhelming bulk of the mass-energy of ordinary matter is concentrated in nuclei, the isolation and integrity of nuclei -- their persistence and lack of effective internal structure -- go most of the way toward justifying the zeroth law (the law of the conservation of mass). But note that to get this far, we needed to appeal to quantum theory and special aspects of nuclear phenomenology. For it is quantum theory that makes the concept of energy gaps available, and it is only particular aspects of nuclear forces that insure substantial gaps above the ground state. If it were possible for nuclei to be very much larger and less structured -- like blobs of liquid or gas -- the gaps would be small, and the mass-energy would not be locked up so completely. Physics Today http://www.physicstoday.org -------------------------------- Related Material: THEORETICAL PHYSICS: ON THE CONCEPT OF FORCE The following points are made by Frank Wilczek (Physics Today 2004 October): 1) Newton's second law of motion, F = ma, is the soul of classical mechanics. Like other souls, it is insubstantial. The right-hand side is the product of two terms with profound meanings. Acceleration is a purely kinematical concept, defined in terms of space and time. Mass quite directly reflects basic measurable properties of bodies (weights, recoil velocities). The left-hand side, on the other hand, has no independent meaning. Yet clearly Newton's second law is full of meaning, by the highest standard: It proves itself useful in demanding situations. 
Splendid, unlikely looking bridges, like the Erasmus Bridge (known as the Swan of Rotterdam), do bear their loads; spacecraft do reach Saturn. 2) The paradox deepens when we consider force from the perspective of modern physics. In fact, the concept of force is conspicuously absent from our most advanced formulations of the basic laws. It doesn't appear in Schroedinger's equation, or in any reasonable formulation of quantum field theory, or in the foundations of general relativity. Astute observers commented on this trend to eliminate force even before the emergence of relativity and quantum mechanics. 3) In his 1895 Dynamics, the prominent physicist Peter G. Tait, who was a close friend and collaborator of Lord Kelvin (1824-1907) and James Clerk Maxwell (1831-1879), wrote "In all methods and systems which involve the idea of force there is a leaven of artificiality.... there is no necessity for the introduction of the word "force" nor of the sense-suggested ideas on which it was originally based."(1) 4) Particularly striking, since it is so characteristic and so over-the-top, is what Bertrand Russell (1872-1970) had to say in his 1925 popularization of relativity for serious intellectuals, /The ABC of Relativity/: "If people were to learn to conceive the world in the new way, without the old notion of "force," it would alter not only their physical imagination, but probably also their morals and politics.... In the Newtonian theory of the solar system, the sun seems like a monarch whose behests the planets have to obey. In the Einsteinian world there is more individualism and less government than in the Newtonian."(2) The 14th chapter of Russell's book is entitled "The Abolition of Force." (3,4) References (abridged): 1. P. G. Tait, Dynamics, Adam & Charles Black, London (1895) 2. B. Russell, The ABC of Relativity, 5th rev. ed., Routledge, London (1997) 3. I. Newton, The Principia, I. B. Cohen, A. Whitman, trans., U. of Calif. Press, Berkeley (1999) 4. S. 
Vogel, Prime Mover: A Natural History of Muscle, Norton, New York (2001), p. 79 Physics Today http://www.physicstoday.org From checker at panix.com Mon May 23 19:46:49 2005 From: checker at panix.com (Premise Checker) Date: Mon, 23 May 2005 15:46:49 -0400 (EDT) Subject: [Paleopsych] SW: Alcohol and Cognitive Function in Women Message-ID: Medical Biology: Alcohol and Cognitive Function in Women http://scienceweek.com/2005/sb050225-2.htm The following points are made by M.J. Stampfer et al (New Engl. J. Med. 2005 352:245): 1) Habitual excess alcohol intake impairs the brain,[1] but the effect of moderate consumption is unclear. A cognitive benefit from moderate alcohol intake is plausible, given the strong link between moderate alcohol intake and the decreased risk of cardiovascular disease[2,3]. Cognitive impairment and cardiovascular disease share common risk factors.[4] In addition, Ruitenberg et al[5] reported that moderate alcohol consumption was related to a decreased risk of both vascular and nonvascular dementia and proposed that moderate alcohol consumption may increase the release of brain acetylcholine. Most studies, but not all, have tended to show that moderate drinkers do better on cognitive tests than nondrinkers. However, few studies have had samples that were large enough to yield statistically significant results or to assess long-term stable patterns of alcohol intake and very early signs of cognitive decline. Also, many studies have been limited by inadequate control for confounding, and none have examined specific alcoholic beverages. 2) The Nurses' Health Study began in 1976, when 121,700 female registered nurses, 30 to 55 years of age, completed a mailed questionnaire about their lifestyle and health. Every two years, the authors mailed follow-up questionnaires, and in 1980 they added a food-frequency questionnaire. 
Starting in 1995, the authors identified participants in the Nurses' Health Study who were 70 years of age or older for a study of cognitive function. Eligible women were community-dwelling participants without a diagnosis of stroke. Of the 21,202 women contacted, 93 percent completed the telephone cognitive interview, with response rates varying by no more than 2 percent across categories of alcohol intake. With the exclusion of the 3 percent of women who died after the baseline cognitive assessment, the authors repeated the telephone assessments of cognitive function after an average of 1.8 years (range, 1.3 to 5.5) in 93 percent of the women; 7 percent declined or were lost to follow-up. All aspects of the study were approved by the human research committee at Brigham and Women's Hospital. For the questionnaire information, the return of the completed questionnaire was considered to imply informed consent. For the telephone interview, the authors obtained oral consent. For the genetic substudy, the authors obtained written informed consent. 3) The authors report that older women who consumed up to one drink per day had consistently better cognitive performance than nondrinkers. Overall, as compared with nondrinkers, women who drank 1.0 to 14.9 g of alcohol per day had a decrease in the risk of cognitive impairment of approximately 20 percent. Moreover, moderate drinkers were less likely to have a substantial decline in cognitive function over a two-year period. The authors found similar inverse associations for all types of alcoholic beverages. 4) The authors point out their study had several limitations. They could not assess the effect of high levels of alcohol intake, since there were few heavy drinkers in their cohort. Also, cognitive decline was assessed only over a two-year interval; thus, the association between alcohol consumption and longer-term cognitive decline could not be evaluated. 
Information on alcohol consumption was self-reported, perhaps leading to some misclassification. However, the assessment of alcohol intake was validated on the basis of dietary records and levels of biochemical markers and has been used to predict several disease outcomes in this cohort. 5) The authors conclude their data suggest that in women up to one drink per day does not impair cognitive function and may actually decrease the risk of cognitive decline. References (abridged): 1. Chick JD, Smith MA, Engleman HM, et al. Magnetic resonance imaging of the brain in alcoholics: cerebral atrophy, lifetime alcohol consumption, and cognitive deficits. Alcohol Clin Exp Res 1989;13:512-518 2. Rimm EB, Stampfer MJ. Alcohol abstinence: a risk factor for coronary heart disease. Heart Disease Updates 2000;2:1-9 3. Rimm EB, Williams P, Fosher K, Criqui M, Stampfer MJ. Moderate alcohol intake and lower risk of coronary heart disease: meta-analysis of effects on lipids and haemostatic factors. BMJ 1999;319:1523-1528 4. Breteler MM, van Swieten JC, Bots ML, et al. Cerebral white matter lesions, vascular risk factors, and cognitive function in a population-based study: the Rotterdam Study. Neurology 1994;44:1246-1252 5. Ruitenberg A, van Swieten JC, Witteman JC, et al. Alcohol consumption and risk of dementia: the Rotterdam Study. Lancet 2002;359:281-286 New Engl. J. Med. http://www.nejm.org -------------------------------- Related Material: ALCOHOL AND BREAST CANCER The following points are made by K.W. Singletary and S.M. Gapstur (J. Am. Med. Assoc. 2001 286:2143): 1) The association of alcohol consumption with increased risk for breast cancer has been a consistent finding in a majority of epidemiologic studies during the past two decades. The authors summarize information on this association from human and animal investigations, with particular reference to epidemiologic data published since 1995. 
2) Increased estrogen and androgen levels in women consuming alcohol appear to be important mechanisms underlying the association. Other plausible mechanisms include enhanced mammary gland susceptibility to carcinogenesis, increased mammary carcinogen DNA damage, and greater metastatic potential of breast cancer cells, processes for which the magnitude likely depends on the amount of alcohol consumed. 3) Susceptibility to the breast cancer-enhancing effect of alcohol may also be affected by other dietary factors (such as low folate intake), life-style habits (such as use of hormone replacement therapy), or biological characteristics (such as tumor hormone receptor status). 4) Additional progress in understanding the enhancing effect of alcohol on breast cancer will depend on a better understanding of the interactions between alcohol and other risk factors and on additional insights into the multiple biological mechanisms involved. There is apparently firm evidence that in general women who consume alcohol, even at the level of 1 drink per day, have higher blood levels of estrogen (estradiol) than women who do not drink. The authors recommend that in general women who do not drink should not start, and those who do drink should do so in moderation, which is generally recognized to be approximately 1 drink per day. J. Am. Med. Assoc. http://www.jama.com -------------------------------- Related Material: ON ALCOHOL IN THE WESTERN WORLD Notes by ScienceWeek: The consumption of alcohol (ethanol) is recognized as a major and potentially preventable health problem. In general, a linear correlation exists between the intensity of alcohol abuse in terms of duration and dose and the development of a wide spectrum of pathologies, especially liver disease. As little as 200 ml wine or 60 ml whiskey in women, 1200 ml of 5% beer in men, when consumed on a daily basis over years, can produce liver injury. 
The mechanisms by which alcohol damages the liver are still unclear, but the damage is undisputed. Given all of the above, however, it is a fact that alcohol consumption has been an important aspect of Western civilization for thousands of years, and has probably existed for at least 10,000 years in various communities. The following points are made by Bert L. Vallee (Scientific American 1998 June): 1) Ethanol (ethyl alcohol) is a multifaceted entity: it may be social lubricant, sophisticated dining companion, cardiovascular health benefactor, or agent of destruction. 2) For most of the past 10,000 years in the Western world, alcoholic beverages may have been the most popular and common daily drinks, indispensable sources of fluids and calories in a world of contaminated and dangerous water supplies. 3) The experience of the East differed greatly. For at least the past 2000 years, the practice of boiling water, usually for tea, has created a potable supply of non-alcoholic beverages. In addition, genetics plays an important role in Asia's avoidance of alcohol: approximately half of all Asian people metabolize alcohol differently than non-Asians, making the experience of drinking alcohol quite unpleasant [see background material below]. 4) Beer and wine became staples in Western societies and remained there until the end of the last century. Indeed, throughout Western history, the normal state of mind may have been one of inebriation. 5) Although yeasts produce alcohol, they can tolerate concentrations of only about 16 percent, so that fermented beverages had a natural maximum proof. Distillation, introduced by the Arabs about 700 A.D., circumvented the fermentation limit by taking advantage of alcohol's boiling point being lower than water's (78 vs. 100 degrees centigrade) to boil off and then condense the alcohol from fermented mixtures. 6) Presently, alcohol contributes to 100,000 deaths in the US each year, making it the 3rd leading cause of preventable mortality. 
7) Each year, approximately 12,000 children of drinking mothers are born with the physical signs and intellectual deficits associated with full-blown fetal alcohol syndrome, and thousands more suffer lesser effects. 8) Alcoholism, in historical terms, has only just been understood and accepted as a disease, and we are still coping with the historically recent arrival of concentrated alcohol. The author concludes: "Alcohol today is a substance primarily of relaxation, celebration and, tragically, mass destruction. To consider it as having been a primary agent for the development of an entire culture [Western civilization] may be jolting, even offensive to some. Any good physician, however, takes a history before attempting a cure." Scientific American http://www.sciam.com -------------------------------- Notes by ScienceWeek: Ethanol is readily absorbed from the gastrointestinal tract, and more than 90 percent is metabolized by the liver through an oxidative mechanism involving mainly the enzyme alcohol dehydrogenase and certain other enzymes. Alcohol cannot be stored and all of it is metabolized. Alcohol dehydrogenase oxidizes alcohol to acetaldehyde. Apparently, approximately 85 percent of the Japanese population has an atypical alcohol dehydrogenase that operates about 5 times faster than the same enzyme does in non-Japanese. Other Asian groups may exhibit the same phenomenon. Consumption of alcohol by such persons leads to the accumulation of acetaldehyde, resulting in extensive vasodilation, facial flushing, and compensatory tachycardia (rapid heartbeat greater than 100 per minute). From checker at panix.com Mon May 23 19:47:11 2005 From: checker at panix.com (Premise Checker) Date: Mon, 23 May 2005 15:47:11 -0400 (EDT) Subject: [Paleopsych] SW: On the Casualties of War Message-ID: Public Health: On the Casualties of War http://scienceweek.com/2005/sa050107-6.htm The following points are made by Atul Gawande (New Engl. J. Med. 
2004 351:2471): 1) Each Tuesday, the US Department of Defense provides an online update of American military casualties (the number of wounded or dead) from Operation Iraqi Freedom and Operation Enduring Freedom.[1] According to this update, as of November 16, 2004, a total of 10,726 service members had suffered war injuries. Of these, 1361 died, 1004 of them killed in action; 5174 were wounded in action and could not return to duty; and 4191 were less severely wounded and returned to duty within 72 hours. No reliable estimates of the number of Iraqis, Afghans, or American civilians injured are available. Nonetheless, these figures represent, by a considerable margin, the largest burden of casualties our military medical personnel have had to cope with since the Vietnam War. 2) When U.S. combat deaths in Iraq reached the 1000 mark in September, the event captured worldwide attention. Combat deaths are seen as a measure of the magnitude and dangerousness of war, just as murder rates are seen as a measure of the magnitude and dangerousness of violence in our communities. Both, however, are weak proxies. Little recognized is how fundamentally important the medical system is -- and not just the enemy's weaponry -- in determining whether or not someone dies. US homicide rates, for example, have dropped in recent years to levels unseen since the mid-1960s. Yet aggravated assaults, particularly with firearms, have more than tripled during that period.[2] The difference appears to be our trauma care system: mortality from gun assaults has fallen from 16 percent in 1964 to 5 percent today. 3) We have seen a similar evolution in war. Though firepower has increased, lethality has decreased. In World War II, 30 percent of the Americans injured in combat died.[3] In Vietnam, the proportion dropped to 24 percent. In the war in Iraq and Afghanistan, about 10 percent of those injured have died. 
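The lethality arithmetic behind point 3 can be checked directly against the DoD figures quoted above. The sketch below is illustrative only (it is not from the article, and the article's own "about 10 percent" may rest on a different denominator or reporting date): it computes the case-fatality proportion from the quoted counts and what World War II-era lethality would have implied for the same number of injured.

```python
# Case-fatality arithmetic from the DoD figures quoted above (Nov. 16, 2004).
died = 1361
wounded_not_returned = 5174
returned_within_72h = 4191
total_injured = died + wounded_not_returned + returned_within_72h

lethality = died / total_injured
print(f"total casualties: {total_injured}")  # 10,726, matching the article
# ~12.7% with this denominator; the article's "about 10 percent"
# presumably uses a somewhat different basis.
print(f"lethality: {lethality:.1%}")

# For comparison: WWII-era lethality (30 percent) applied to the same count.
print(f"deaths at WWII lethality: {0.30 * total_injured:.0f}")
```

The comparison makes Gawande's point: at the 30 percent lethality of World War II, the same casualty load would have meant well over three thousand deaths rather than 1361.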
At least as many US soldiers have been injured in combat in this war as in the Revolutionary War, the War of 1812, or the first five years of the Vietnam conflict from 1961 through 1965. This can no longer be described as a small or contained conflict. But a far larger proportion of soldiers are surviving their injuries. 4) It is too early to make a definitive pronouncement that medical care is responsible for this difference. With the war ongoing and still intense, data on the severity of injuries, the care provided, and the outcomes are necessarily fragmentary. One key constraint for planners has been the limited number of medical personnel available in a voluntary force to support the 130,000 to 150,000 troops fighting in Iraq. The Army is estimated to have only 120 general surgeons on active duty and a similar number in the reserves. It has therefore sought to keep no more than 30 to 50 general surgeons and 10 to 15 orthopedic surgeons in Iraq. Most have served in Forward Surgical Teams (FSTs) -- small teams consisting of just 20 people: 3 general surgeons, 1 orthopedic surgeon, 2 nurse anesthetists, 3 nurses, plus medics and other support personnel. In Vietnam, only 2.6 percent of the wounded soldiers who arrived at a surgical field hospital died, which meant that, despite helicopter evacuation, most deaths occurred before the injured made it to surgical care.[4] The recent emphasis on leaner, faster-moving military units added to the imperative to push surgical teams farther forward, closer to battle. So they, too, were made leaner and more mobile -- and that is their fundamental departure from previous wars. 5) Each FST is equipped to move directly behind troops and establish a functioning hospital with four ventilator-equipped beds and two operating tables within a difficult-to-fathom 60 minutes. The team travels in six Humvees. 
They carry three lightweight, Deployable Rapid Assembly Shelter ("drash") tents that can be attached to one another to form a 900-square-foot facility. Supplies to immediately resuscitate and operate on the wounded arrive in five backpacks: an ICU pack, a surgical-technician pack, an anesthesia pack, a general-surgery pack, and an orthopedic pack. They hold sterile instruments, anesthesia equipment, medicines, drapes, gowns, catheters, and a handheld unit allowing clinicians to obtain a hemogram and measure electrolytes or blood gases with a drop of blood. FSTs also carry a small ultrasound machine, portable monitors, transport ventilators, an oxygen concentrator providing up to 50 percent oxygen, 20 units of packed red cells, and six roll-up stretchers with their litter stands. Teams have forgone angiography and radiography equipment. (Orthopedic surgeons detect fractures by feel and apply external fixators.) But they have sufficient supplies to evaluate and perform surgery on as many as 30 wounded soldiers. They are not equipped, however, for more than six hours of postoperative intensive care.[5] References: 1. U.S. casualty status. Washington, D.C.: Department of Defense, 2004. Accessed November 17, 2004, at http://www.defenselink.mil/news/casualty.pdf 2. Harris AR, Thomas SH, Fisher GA, Hirsch DJ. Murder and medicine: the lethality of criminal assault 1960-1999. Homicide Stud 2002;6:128-66 3. Principal wars in which the United States participated: U.S. military personnel serving and casualties. Washington, D.C.: Department of Defense, 2004. Accessed November 17, 2004, at http://web1.whs.osd.mil/mmid/casualty/WCPRINCIPAL.pdf 4. Whelan TJ Jr. Surgical lessons learned and relearned in Vietnam. Surg Annu 1975;7:1-23 5. Pear R. U.S. has contingency plans for a draft of medical workers. New York Times. October 19, 2004:A22 New Engl. J. Med. 
http://www.nejm.org From checker at panix.com Mon May 23 19:48:07 2005 From: checker at panix.com (Premise Checker) Date: Mon, 23 May 2005 15:48:07 -0400 (EDT) Subject: [Paleopsych] SW: Neanderthals and the Colonization of Europe Message-ID: Anthropology: Neanderthals and the Colonization of Europe http://scienceweek.com/2004/sc041231-1.htm The following points are made by Paul Mellars (Nature 2004 432:461): 1) The most significant contributions over the past decade to the study of the fate of the Neanderthals have come from detailed studies of the DNA structure of present-day human populations in different areas of the world, combined with the gradually accumulating recovery of residual traces of "ancient" DNA extracted from a number of Neanderthal and early anatomically modern human remains. Studies of both mitochondrial and Y-chromosome DNA patterns in modern world populations (inherited respectively through the female and male lineages) point to the genetic origins of all present-day populations within one limited area of Africa somewhere in the region of 150,000 years before present (yr BP), followed by their dispersal to other regions of the world between about 60,000 and 40,000 yr BP(1-5). 2) These results are further reinforced by recent discoveries of skeletal remains of anatomically modern populations in different areas. Discoveries at Herto in Ethiopia reported in 2003 confirm the presence of early forms of anatomically modern humans in Africa by about 160,000 yr BP, whereas the earliest discoveries of distinctively modern populations in both Europe and most parts of Asia can be dated no earlier than 40,000-45,000 yr BP. The one exception is in Israel, where the rich skeletal remains from the Skhul and Qafzeh caves indicate a precocious, and apparently short-lived, incursion of early anatomically modern populations into this region (presumably via the Nile valley) at an early stage in the last glaciation, around 100,000 yr BP. 
3) In Europe, the most dramatic support for these patterns has come from the recovery of a number of relatively well-preserved sequences of mitochondrial DNA from a number of actual skeletal finds of Neanderthals and early anatomically modern humans. Analyses of seven separate Neanderthal specimens (including those from the Neanderthal type-site itself) yielded segments of mitochondrial DNA that are radically different from those of all known present-day populations in either Europe or other parts of the world, and that are equally different from those recovered from five early specimens of anatomically modern populations from European sites. The conclusion is clear that there was either very little -- if any -- interbreeding between the local Neanderthals and the intrusive modern populations in Europe, or that if such interbreeding did take place, all genetic traces of this interbreeding were subsequently eliminated from the European gene pool. 4) The mitochondrial DNA evidence recovered from the Neanderthal specimens further suggests that the initial evolutionary separation of the Neanderthals from the populations which eventually gave rise to genetically modern populations must reach back at least 300,000 yr -- a finding that is in good agreement with the surviving fossil evidence from Africa and Europe(1). Whether this evidence is sufficient to indicate that the Neanderthals belonged to an entirely separate biological species from modern humans is at present more controversial(1,2). 5) The fate of the Neanderthal populations of Europe and western Asia has gripped the popular and scientific imaginations for the past century. Following at least 200,000 years of successful adaptation to the glacial climates of northwestern Eurasia, they disappeared abruptly between 30,000 and 40,000 years ago, to be replaced by populations all but identical to modern humans. 
Recent research suggests that the roots of this dramatic population replacement can be traced far back to events on another continent, with the appearance of distinctively modern human remains and artefacts in eastern and southern Africa. 6) That the Neanderthals were replaced by populations that had evolved biologically, and no doubt behaviorally, in the very different environments of southern Africa makes the rapid demise of the Neanderthals even more remarkable, and forces us to ask what cultural or cognitive developments may have made this replacement possible. The rapidly accumulating archaeological evidence for highly symbolic patterns of culture and technology within African populations dating back to at least 70,000 yr BP (marked by the appearance of complex bone technology, multiple-component missile heads, perforated sea-shell ornaments, complex abstract "artistic" designs and abundant use of red ochre -- recently recorded from the Blombos Cave and other sites in southern Africa) may provide the critical clue to new patterns of cognition, and probably complex linguistic communication, linked directly with the biological evolution of anatomically and genetically modern populations(1,3). Perhaps it was the emergence of more complex language and other forms of symbolic communication that gave the crucial adaptive advantage to fully modern populations and led to their subsequent dispersal across Asia and Europe and the demise of the European Neanderthals. The precise mechanisms and timing of this dramatic population dispersal from southern Africa to the rest of the world remain to be investigated(1,3,4). References (abridged): 1. Stringer, C. Modern human origins: progress and prospects. Phil. Trans. R. Soc. Lond. B 357, 563-579 (2002) 2. Tattersall, I. in The Speciation of Modern Homo sapiens (ed. Crow, T. J.) 49-59 (British Academy, London, 2002) 3. Forster, P. Ice ages and the mitochondrial DNA chronology of human dispersals: a review. Phil. Trans. R. Soc. 
Lond. B 359, 255-264 (2004) 4. Lahr, M. M. & Foley, R. Towards a theory of modern human origins: geography, demography and diversity in modern human evolution. Yb. Physical Anthropol. 41, 127-176 (1998) 5. Richards, M. et al. Tracing European founder lineages in the near Eastern mitochondrial gene pool. Am. J. Hum. Genet. 67, 1251-1276 (2000) Nature http://www.nature.com/nature -------------------------------- Related Material: ANTHROPOLOGY: ON THE NEANDERTHALS The following points are made by Pat Shipman (American Scientist 2004 92:506): 1) Neandertals (Neanderthals) were probably not members of our own species, judging from recent analyses of mitochondrial DNA. Nonetheless, Neandertals were clearly built on a human-like plan (or vice versa) with some crucial modifications. A glance at the fossil remains of these hominids shows that Neandertal bones are much more robust than those of modern Homo sapiens. The skulls of the two species also show several striking differences. One of the most noticeable Neandertal features is the unmistakably large, bony browridges that stick out over the eyes. Below the orbits, the face is more prognathic -- the nose and jaw protrude farther in front of the braincase -- than a human face. The prominent nasal bones in Neandertal skulls top wide nasal openings, suggesting that they sported large, aquiline noses. Unlike the smoother, rounded contour of the human skull, the back of the Neandertal skull has a distinctive bulge, often referred to as a chignon or bun. Overall, the Neandertal skull resembles what you might expect if someone took a human skull made of rubber, grabbed it by the face and back of the head, and pulled. 2) These comparisons have attracted the attention of researchers who study the interactions between evolution and development from birth to adulthood -- so-called "evo-devo." Put simply, they wanted to know: How do you grow up Neandertal? In the spring of 2004, several studies offered answers to this question. F. 
Ramirez Rozzi and J.M. Bermudez de Castro (1) compared the rates of dental growth in several species within the genus Homo, including Neandertals. They examined the perikymata -- small enamel ridges on the tooth surface -- of incisor and canine teeth from 55 Neandertals, 25 Homo antecessor and Homo heidelbergensis individuals (two species that some anthropologists group together) and 39 ancient but anatomically modern humans. 3) Perikymata are created as a tooth grows. In humans and their close kin (such as Homo erectus), one ridge is created approximately every nine days during tooth development. The ridges of more distant relatives, including chimpanzees and gorillas, are formed at shorter intervals. By counting the number of perikymata, investigators can calculate how long the tooth took to form. Ramirez Rozzi and Bermudez de Castro (1) found that Neandertals formed their teeth in fewer days than did H. antecessor and H. heidelbergensis. If Neandertals had been the most ancient of the lot, one might expect them to be the most ape-like. But although the other fossil species are older still, they already show the human pattern. The finding is also a surprise because some researchers still propose that Neandertals are basically just strange-looking humans -- a judgment challenged by this fundamental difference. 4) Dental maturity is a common proxy for overall maturity because neurological, skeletal and sexual milestones are correlated with the pace of tooth mineralization. Ramirez Rozzi and Bermudez de Castro (1) concluded that faster dental development meant that Neandertals reached adulthood 15 percent sooner than humans, on average. To state this finding in practical terms, if humans attain physical maturity at 18 years, Neandertals were similarly grown at 15 years. The study also examined the spacing of perikymata across the front surfaces of incisors and canines. 
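The arithmetic behind these growth estimates is simple enough to sketch. The roughly 9-day ridge interval and the 15 percent developmental speed-up are taken from the points above; the ridge count in the example is hypothetical, not data from the study.

```python
# Back-of-the-envelope check of the dental-growth figures quoted above.
# Assumption from the text: in humans and close kin, one perikyma (enamel
# ridge) forms roughly every 9 days during tooth development.

DAYS_PER_PERIKYMA = 9  # approximate interval for Homo

def crown_formation_days(perikymata_count, days_per_ridge=DAYS_PER_PERIKYMA):
    """Estimate crown formation time from a count of enamel ridges."""
    return perikymata_count * days_per_ridge

# A hypothetical incisor with 150 counted ridges:
print(crown_formation_days(150))   # 1350 days, about 3.7 years

# The reported ~15% faster maturation, applied to human maturity at 18 years:
neandertal_maturity = 18 * (1 - 0.15)
print(round(neandertal_maturity, 1))   # 15.3 years, i.e. roughly 15
```

This is only the proxy logic described in the text: ridge count times formation interval gives crown formation time, and dental pace is then used to scale overall maturation.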
Dental enamel forms first at the tip of the crown -- the first point to emerge from the gum -- and then proceeds toward the roots. 5) In modern humans, the perikymata are widely spaced in the half of the tooth that formed first, indicating that lots of enamel was deposited during each nine-day increment. On the second half of each human tooth, the ridges are more closely spaced, showing a slower daily rate of enamel formation. Like human teeth, Neandertal teeth look as if they grew rapidly at first and then slowed down. However, on the part of each Neandertal tooth that grew later, the perikymata are more spread out than in their human counterparts. In other words, although the rate of enamel formation also decreased with age in Neandertals, the slowdown was less pronounced. This pattern of dental growth resembles that of apes. We know that the apes of today reach physical maturity much faster than humans. So, presumably, did Neandertals.(2-4) References (abridged): 1. Krovitz, G. 2003. Shape and growth differences between Neandertals and modern humans: Grounds for species-level distinction? In Patterns of Growth and Development in the Genus Homo, ed. J. L. Thompson, G. E. Krovitz and A. J. Nelson. Cambridge, UK: Cambridge University Press 2. Ramirez Rozzi, F., and J. M. Bermudez de Castro. 2004. Surprisingly rapid growth in Neanderthals. Nature 428:936-939 3. Trinkaus, E. 1995. Neandertal mortality patterns. Journal of Archaeological Science 22:121-142 4. Williams, F. L., L. R. Godfrey and M. R. Sutherland. 2003. Diagnosing heterochronic perturbations in the craniofacial evolution of Homo (Neandertals and modern humans) and Pan (P. troglodytes and P. paniscus). In Patterns of Growth and Development in the Genus Homo, ed. J. L. Thompson, G. E. Krovitz and A. J. Nelson. 
Cambridge, UK: Cambridge University Press American Scientist http://www.americanscientist.org -------------------------------- Related Material: ANTHROPOLOGY: ON NEANDERTHAL MITOCHONDRIAL DNA The following points are made by Alan Cooper et al (Current Biology 2004 14:R431): 1) The genetic affinities of the earliest modern humans of Europe and the earlier hominid occupants of the area, the Neandertals, have remained a hotly debated topic since the discovery of the extraordinarily robust skull cap and limb bones in the Neander Valley in 1856. While it is impossible to rule out a surreptitious coupling of the two groups in the more than 10,000 years they apparently co-occupied Europe, recent research and population genetic theory suggest that any genetic interchange was limited. 2) This issue is central to the two main theories of modern human origins: the replacement model, where modern humans rapidly replaced archaic forms, such as Neandertals, as they began to spread from Africa through Eurasia and the rest of the world sometime around 100,000 years ago [1]; and the multi-regional model, where genetic exchange or even continuity exists between archaic and modern humans [2,3]. Two years ago, a review [4] reported that characteristic mitochondrial DNA (mtDNA) sequences retrieved from remains of four Neandertals are absent from modern human populations. It remained possible, however, that these sequences had been present in early modern humans, but had been lost through genetic drift or the continuous influx of modern human DNA in the intervening 28,000 years since Neandertals became extinct. 3) The difficulty in testing these ideas using ancient DNA is that most ancient human remains are contaminated with modern human DNA, which deeply penetrates bone and teeth samples during the washing and routine handling that takes place after excavation. 
This modern DNA will either out-compete authentic ancient sequences in PCR reactions, or recombine with them to produce artificial, but authentic-looking genetic sequences [5]. Consequently, even when strict criteria for authenticating ancient DNA results are followed, it can be impossible to determine the authenticity of results. 4) The approach taken recently by Serre et al (2004) avoided this problem by searching only for the presence of Neandertal mtDNA sequences in both early modern human and Neandertal fossils, while ignoring modern human sequences because they are potentially contaminants. Four additional Neandertal specimens tested positive, but Neandertal sequences could not be detected in five early modern human fossils with biochemical preservation consistent with DNA survival from the Czech Republic and France. This appears to confirm that sequences characteristic of Neandertal remains were not widespread in early modern humans. 5) In summary: Mitochondrial DNA sequences recovered from eight Neandertal specimens cannot be detected in either early fossil Europeans or in modern populations. This indicates that if Neandertals made any genetic contribution at all to modern humans, it must have been limited, though the extent of the contribution cannot be resolved at present. References (abridged): 1. Stringer, C.B. and Andrews, P. (1988). Genetic and fossil evidence for the origin of modern humans. Science 239, 1263-1268 2. Hawks, J.D. and Wolpoff, M.H. (2001). The accretion model of Neandertal evolution. Evol. Int. J. Org. Evol. 55, 1474-1485 3. Templeton, A. (2002). Out of Africa again and again. Nature 416, 45-51 4. Schmitz, R.W., Serre, D., Bonani, G., Feine, S., Hillgruber, F., Krainitzki, H., Pääbo, S., and Smith, F.H. (2002). The Neandertal type site revisited: interdisciplinary investigations of skeletal remains from the Neander Valley, Germany. Proc. Natl. Acad. Sci. USA 99, 13342-13347 5. Pääbo, S., Higuchi, R.G., and Wilson, A.C. (1989). 
Ancient DNA and the polymerase chain reaction. J. Biol. Chem. 264, 9709-9712 Current Biology http://www.current-biology.com From checker at panix.com Mon May 23 19:48:20 2005 From: checker at panix.com (Premise Checker) Date: Mon, 23 May 2005 15:48:20 -0400 (EDT) Subject: [Paleopsych] SW: Physical Performance and Darwinian Fitness Message-ID: Evolution: Physical Performance and Darwinian Fitness http://scienceweek.com/2004/sc041231-4.htm The following points are made by J-F. Le Galliard et al (Nature 2004 432:502): 1) Strong evidence for a genetic basis of variation in physical performance has accumulated(1,2). Considering one of the basic tenets of evolutionary physiology -- that physical performance and darwinian fitness are tightly linked(3) -- one may expect phenotypes with exceptional physiological capacities to be promoted by natural selection. Why then does physical performance remain considerably variable in human and other animal populations(1,2,4)? 2) Sporting events would be exceedingly boring were there no variation in human performance; fortunately, this is not the case. For example, the distribution of finish times at international marathons has a large variance and a long tail(1), due to a variety of factors affecting the performance of individual runners(5). Although genetic variation in locomotor performance has been documented in human and other animal populations(1,2), questions remain as to how genetic and non-genetic factors would interact with each other and what effect selection has on the resulting individual variation(1). 3) The authors addressed these two questions using ground-dwelling lizards, a popular model system for studies of locomotor performance(2,4). The focus of the authors is on the endurance capacity as assayed in the laboratory. 
In lizards, endurance shows considerable interindividual variation that reflects differences in thigh muscle mass, heart mass, and aerobic metabolism. The study species is the common lizard (Lacerta vivipara Jacquin 1787) for which locomotor performance and life-history traits have been routinely studied. The authors took advantage of the populations established at the Ecological Research Station of Foljuif (Nemours, France) in the semi-natural conditions of outdoor enclosures to measure the heritability of initial endurance and the age-specific strength of natural selection on this trait. 4) In summary: The analysis by the authors of locomotor performance in the common lizard (Lacerta vivipara) demonstrates that initial endurance (running time to exhaustion measured at birth) is indeed highly heritable, but natural selection in favor of this trait can be unexpectedly weak. A manipulation of dietary conditions unravels a proximate mechanism explaining this pattern. Fully fed individuals experience a marked reversal of performance within only one month after birth: juveniles with low endurance catch up, whereas individuals with high endurance lose their advantage. In contrast, dietary restriction allows highly endurant neonates to retain their locomotor superiority as they age. Thus, the expression of a genetic predisposition to high physical performance strongly depends on the environment experienced early in life. References (abridged): 1. Rupert, J. L. The search for genotypes that underlie human performance phenotypes. Comp. Biochem. Physiol. A 136, 191-203 (2003) 2. Garland, T. J. & Losos, J. in Ecological Morphology: Integrative Organismal Biology (eds Wainwright, P. C. & Reilly, S. M.) 240-302 (Univ. Chicago Press, Chicago, 1994) 3. Arnold, S. J. Morphology, performance and fitness. Am. Zool. 23, 347-361 (1983) 4. Bennett, A. F. & Huey, R. B. in Oxford Surveys in Evolutionary Biology (eds Futuyma, D. J. & Antonovics, J.) 251-284 (1990) 5. 
Bouchard, C., Malina, R. M. & Pérusse, L. Human Kinetics 408 (Champaign, Illinois, 1997) Nature http://www.nature.com/nature -------------------------------- Related Material: ON FITNESS AND SURVIVAL The following points are made by Gary J. Balady (New Engl. J. Med. 2002 346:852): 1) In 1859, Charles Darwin published his theory of evolution as an incessant struggle among individuals with different degrees of fitness within a species. Although his explanations created remarkable controversy at the time, they were to revolutionize the course of science. Darwin's writings reflected conclusions drawn from years of study and observation. Now, nearly 150 years later, in the era of evidence-based medicine and rigorous scientific method, when fitness is quantitatively measured and study subjects are followed for years, the data supporting the concept of survival of the fittest are strong and compelling. During the past 15 years, many long-term epidemiologic studies have shown an unequivocal and robust relation of fitness, physical activity, and exercise to reduced mortality overall, to reduced mortality from cardiovascular causes, and to reduced cardiovascular risk. 2) Cardiorespiratory fitness, or physical fitness, is a set of attributes that enables a person to perform physical activity. It is determined, in part, by habitual physical activity and is also influenced by several other factors, including age, sex, heredity, and medical status. Physical fitness is best assessed by a measure of maximal or peak oxygen uptake (volume of oxygen consumed, measured in milliliters of oxygen per kilogram of body weight per minute), which is viewed as an index of energy expenditure. 3) It is now becoming clear that exercise modulates many biologic mechanisms to confer cardioprotection. Exercise improves the lipid profile and glucose tolerance, reduces obesity, and lowers blood pressure. 
However, modification of atherosclerotic risk factors does not fully explain the benefits that have been observed. Positive effects of exercise on vascular function, autonomic tone, blood coagulation, and inflammation are likely to contribute to improved cardiovascular health and survival. New Engl. J. Med. http://www.nejm.org -------------------------------- Related Material: ON THE PHYSIOLOGY OF EXERCISE Notes by ScienceWeek: Underlying the various beneficial effects of physical exercise on the health of the human body are a constellation of physical, biochemical, and physiological factors that have been intensively studied for more than a century. At present, maximal oxygen consumption is the primary measure of exercise capacity, and mechanisms related to the delivery of oxygen to the muscles are considered to be the main factors determining exercise capacity. The following points are made by N.L. Jones and K.J. Killian (New England J. Med. 2000 343:633): 1) A fit 25-year-old man can generate 650 watts while bicycling for a few seconds and can maintain a power of 400 watts while bicycling for 1 minute, 230 watts while bicycling for 10 minutes, and 175 watts while bicycling for 30 minutes. He is able to reach 275 watts in a progressive incremental (increasing at 16.7 watts per minute) test to capacity; this power represents peak exercise and is the power at which maximal oxygen consumption (3.3 liters per minute) is measured. To put these figures in perspective, brisk walking represents an output of approximately 50 watts of power and an oxygen intake of 0.8 liters per minute. 2) Exercise depends on the oxidation of carbohydrate and fat for the regeneration of adenosine triphosphate required to sustain muscular contraction. Ventilation, gas exchange, and the circulation are adjusted to meet the requirements for delivery of oxygen and removal of carbon dioxide. 
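As a rough cross-check of the cycling figures quoted above, oxygen uptake can be converted to metabolic power using the standard energetic equivalent of roughly 20.9 kJ per litre of oxygen (about 5 kcal per litre). That constant is an outside assumption, not from the article; the sketch simply shows that a peak uptake of 3.3 liters per minute is consistent with 275 watts of mechanical output at a gross efficiency in the usual 20-25 percent range for cycling.

```python
# Convert the quoted oxygen uptakes to metabolic power and estimate
# gross mechanical efficiency. The 20.9 kJ/L figure is a standard
# physiological approximation, not a value given in the text.

KJ_PER_LITRE_O2 = 20.9

def metabolic_power_watts(vo2_litres_per_min):
    """Metabolic power (W) implied by an oxygen uptake in L/min."""
    return vo2_litres_per_min * KJ_PER_LITRE_O2 * 1000 / 60

peak_metabolic = metabolic_power_watts(3.3)   # roughly 1150 W
efficiency = 275 / peak_metabolic             # 275 W mechanical at peak
print(f"{peak_metabolic:.0f} W metabolic, gross efficiency {efficiency:.0%}")

# Brisk walking: 0.8 L/min of oxygen, about 280 W of metabolic power
# supporting the ~50 W of external work quoted in the text.
print(f"{metabolic_power_watts(0.8):.0f} W metabolic while walking")
```

The gap between metabolic and mechanical power is expected: most of the energy released by oxidation is lost as heat, so only about a quarter appears as external work on the pedals.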
As the intensity of exercise increases, the concentration of intramuscular creatine phosphate decreases, and the concentrations of intramuscular adenosine diphosphate, adenosine monophosphate, and inorganic phosphate increase. Increases in the intramuscular lactate concentration and decreases in the intramuscular potassium concentration contribute to a marked decline in muscle pH to below 6.5. 3) With prolonged submaximal exercise, the changes in intramuscular metabolite concentrations are less marked, but intramuscular glycogen is progressively depleted. The ability to sustain exercise depends on the initial intramuscular glycogen concentration. Fat stores represent a huge reservoir of potential energy, but the rate at which fat can be oxidized is limited to approximately one-fourth of the rate at which glycogen can be oxidized. Thus, even with maximal utilization of fat, the ability to maintain exercise is dependent on the oxidation of glycogen, which eventually becomes depleted, leading to muscle fatigue. 4) In exercise lasting longer than a minute or two, the cardiac output and heart rate increase linearly with peripheral oxygen uptake. The mean systemic arterial pressure and the vascular resistance in active muscle falls, leading to a large increase in blood flow to the muscles. Blood is pumped back to the heart by muscular contraction, and the cardiac output is determined by the venous return. 5) Muscle contraction during exercise is initiated by a central command from the *motor cortex of the brain that leads to activation of *motor neurons, depolarization of *motor end plates, propagation of *muscle action potentials, calcium release, formation of *cross-bridges, and shortening of *myofibrils. The magnitude of the central motor command increases in parallel with the power output, but it also increases if the responsiveness of the motor neurons or muscles decreases during fatigue. 
A maximum voluntary command is capable of activating virtually 100 percent of the *motor units in a fresh muscle (i.e., a muscle that has not been exercised). The responsiveness of motor neurons may be decreased by central and peripheral factors acting through reflexes in the spinal cord and by stimulation of receptors in the muscles. 6) Among the changes that accompany increasing intensity of exercise is a large increase in the intramuscular hydrogen ion concentration from 100 nanomoles per liter (pH = 7.0) at rest to 400 nanomoles per liter (pH = 6.4) or more at exhaustion, leading to inhibition of *excitation-contraction coupling and thus reducing the responsiveness of the muscle to stimulation of motor units. New Engl. J. Med. http://www.nejm.org -------------------------------- Notes by ScienceWeek: motor cortex of the brain: (cortex) The cerebral cortex is a thin surface layering of nerve cells of the brain, the region only several millimeters thick but covering all of the brain surface. This is the part of the central nervous system most intimately involved with the so-called "higher faculties", although the cortex operates in concert with other parts of the brain. The structure is primitive in lower mammals, and is found progressively more pronounced and with greater surface area in primates and man. The motor cortex is the region of the cortex involved in voluntary muscle movements. motor neurons: In this context, motor neurons are neurons with cell bodies in the spinal cord and extensions that leave the spinal cord to terminate on muscle fibers. In this report, the general paradigm for activation of voluntary muscles is neurons in brain (motor cortex) to neurons in spinal cord (motor neurons) to peripheral muscle fibers. motor end plates: The junctions between nerve fiber (axon) terminations and muscle fibers. 
muscle action potentials: The muscle "action potential" is completely analogous to the nerve action potential and consists of a brief (approximately 1 millisecond) reversal of polarization that travels along the fiber. This electrical change initiates calcium movements that begin the contraction process. cross-bridges: In general, connections between contractile elements of muscle fibers. myofibrils: In general, any of the long cylindrical contractile elements, 1 to 2 microns in diameter, that constitute the major component of muscle fibers. motor units: In general, a single motor neuron and all the muscle fibers that are innervated by it. (The axons of motor neurons branch to make contact with a number of muscle fibers.) excitation-contraction coupling: In general, the coupling of an excitatory stimulus to the contraction of muscle; at the cellular level, the process by which muscle fibers are caused to contract by the stimulation of a neuron. From checker at panix.com Mon May 23 19:48:31 2005 From: checker at panix.com (Premise Checker) Date: Mon, 23 May 2005 15:48:31 -0400 (EDT) Subject: [Paleopsych] SW: On Collective Action in Large Groups Message-ID: Evolution of Behavior: On Collective Action in Large Groups http://scienceweek.com/2004/sa041231-2.htm The following points are made by Ernst Fehr (Nature 2004 432:449): 1) War is a prime example of large-scale within-group cooperation between genetically unrelated individuals. War also illustrates the fact that within-group cooperation often serves the purpose of between-group aggression. Modern states are able to enforce cooperation in large groups by means of sophisticated institutions that punish individuals who refuse to meet their duties and reward those who follow their superiors' commands. 
The existence of such cooperation-enhancing institutions is very puzzling from an evolutionary viewpoint, however, because no other species seems to have succeeded in establishing large-scale cooperation among genetically unrelated strangers(1). 2) The puzzle behind this cooperation can be summarized as follows. Institutions that enhance within-group cooperation typically benefit all group members. The effect of a single group member on the institution's success is negligible, but the contribution cost is not negligible for the individual. Why, therefore, should a self-interested individual pay the cost of sustaining cooperative institutions? More generally, why should a self-interested individual contribute anything to a public good that -- once it exists -- an individual can consume regardless of whether he contributed or not? Recent work(2) advances the scope of reputation-based models(3-5) and demonstrates that individuals' concern for their reputation may be a solution to this puzzle. 3) Evolutionary psychologists have sought to answer the puzzle of human collective action for decades. However, progress was limited because of a lack of commitment to mathematically rigorous theorizing. Many researchers erroneously thought that the notion of reciprocal altruism, formalized as a tit-for-tat strategy for two-person interactions, provides the solution to the problem. It was speculated that reciprocal altruism "may favor a multiparty altruistic system in which altruistic acts are dispensed freely among more than two individuals". However, it is always easier to speculate than to provide a rigorous model, and the speculation is likely to be wrong in this case. 4) In the context of the problem of public-goods provision, a reciprocally altruistic individual is willing to contribute to the public good if sufficient numbers of other group members are also willing to contribute. 
Unfortunately, the presence of only a small number of defectors quickly causes cooperation to unravel if it is solely based on conditionally cooperative behavior, because the defectors induce the conditional cooperators to defect as well. Theory and simulations suggest that reciprocally altruistic strategies can only sustain high levels of cooperation in two-person interactions. Moreover, experimental evidence indicates that cooperation in public-good games typically unravels because it is not possible to discipline "free riders" -- those who take advantage of others' cooperation -- if only conditionally cooperative strategies are available. References (abridged): 1. Fehr, E. & Fischbacher, U. Nature 425, 785-791 (2003) 2. Panchanathan, K. & Boyd, R. Nature 432, 499-502 (2004) 3. Nowak, M. A. & Sigmund, K. Nature 393, 573-577 (1998) 4. Leimar, O. & Hammerstein, P. Proc. R. Soc. Lond. B 268, 745-753 (2001) 5. Gintis, H., Smith, E. A. & Bowles, S. J. Theor. Biol. 213, 103-119 (2001) Nature http://www.nature.com/nature -------------------------------- Related Material: ON NATURAL SELECTION, KIN SELECTION, AND ALTRUISM Notes by ScienceWeek: In this context, the term "altruism" refers in general to behavior that benefits another individual, usually of the same species, at the expense of the agent. The phenomenon is widespread among various species, and has been interpreted by some as apparently at odds with Darwinian theory. Theories of altruism in biology are often concerned with "cost-benefit" analysis as dictated by the logic of natural selection. 
The term "Hamilton's rule" refers to the prediction that genetically determined behavior that benefits another organism, but at some cost to the agent responsible, will spread by natural selection when the relation (rb - c) > 0 is satisfied, where (r) is the degree of relatedness between agent and recipient, (b) is the improvement of individual fitness of the recipient caused by the behavior, and (c) is the cost to the agent's individual fitness as a result of the behavior. The rule was first proposed by William D. Hamilton (1936-2000), and Hamilton's theory is often referred to as "kin selection". As an example: A mutation that affected the behavior of a sterile worker bee so that she fed her fertile queen but starved herself would increase the inclusive fitness of that worker because, while her own fitness decreased, her actions increased the fitness of a close relative. The following points are made by Mark Ridley (citation below): 1) Natural selection working on groups of close genetic relatives is called kin selection. In species in which individuals sometimes meet one another, such as in social groups, individuals may be able to influence each other's reproduction. Biologists call a behavior pattern altruistic if it increases the number of offspring produced by the recipient and decreases that of the altruist. (Notice that the term in biology, unlike in human action, implies nothing about the altruist's intentions: it is a motive-free account of reproductive consequences.) Can natural selection ever favor altruistic actions that decrease the reproduction of the actor? If we take a strictly organismic view of natural selection, it would seem to be impossible. Yet, as a growing list of natural observations records, animals behave in an apparently altruistic manner. The altruism of the sterile 'workers' in such insects as ants and bees is one undoubted example. In such cases, the altruism is extreme, as the workers do not reproduce in some species. 
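Hamilton's rule reduces to a one-line inequality, which makes it easy to illustrate numerically. The values of r, b, and c below are made up for the example; only the rule itself comes from the text.

```python
# Hamilton's rule: altruism spreads by kin selection when r*b - c > 0,
# where r is relatedness, b the recipient's fitness benefit, and c the
# altruist's fitness cost. Example values are hypothetical.

def altruism_favored(r, b, c):
    """True if kin selection favors the altruistic behavior."""
    return r * b - c > 0

# Toward a full sibling (r = 0.5), helping pays only if the recipient
# gains more than twice what the helper loses:
print(altruism_favored(r=0.5, b=3.0, c=1.0))   # True  (0.5*3.0 - 1.0 = 0.5)
print(altruism_favored(r=0.5, b=1.5, c=1.0))   # False (0.5*1.5 - 1.0 = -0.25)

# Toward an unrelated individual (r = 0), no benefit is ever large enough:
print(altruism_favored(r=0.0, b=100.0, c=0.1))  # False
```

This is why, as the passage goes on to argue, altruism directed indiscriminately cannot spread, while altruism preferentially directed at relatives (high r) can.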
2) Altruistic behavior often takes place between genetic relatives, where it is most likely explained by the theory of kin selection. Let us suppose for simplicity that we have two types of organism, altruistic and selfish. A hypothetical example might be that, when someone is drowning, an altruist would jump in and try and save him or her, whereas the selfish individual would not. The altruistic act decreases the altruist's chance of survival by some amount which we call c (for cost), because the altruist runs some risk of drowning. The action increases the chance of survival of the recipient by an amount b (for benefit). If the altruists dispensed their aid indiscriminately to other individuals, benefits will be received by other altruists and by selfish individuals in the same proportion as they exist in the population. Natural selection will then favor the selfish types, because they receive the benefits but do not pay the costs. 3) For altruism to evolve, it must be directed preferentially to other altruists. Suppose that acts of altruism were initially given only to other altruists. In such a case, what would be the condition for natural selection to favor altruism? The answer is that the altruism must take place only in circumstances in which the benefit to the recipient exceeds the cost to the altruist. This relation will hold true if the altruist is a better swimmer than the recipient, but it does not logically have to be true (if, for instance, the altruist were a poor swimmer and the recipients were capable of looking after themselves, the net result of the altruist's heroic plunge into the water might merely be that the altruist would drown). If the recipient's benefit exceeds the altruist's cost, then a net increase occurs in the average fitness of the altruistic types as a whole. This condition has only theoretical interest. 
In practice, it is usually (maybe always) impossible for altruism to be directed only to other altruists, because they cannot be recognized with certainty. It may be possible, however, for altruism to be directed at a class of individuals that contains a disproportionate number of altruists relative to their frequency in the population. For example, altruism may be directed toward genetic relatives. In this case, if a gene for altruism appears in an individual, it is also likely to be in its relatives. Adapted from: Mark Ridley: Evolution. 2nd Edition. Blackwell Science 1996, p.321. -------------------------------- Related Material: ON ALTRUISM OF INDIVIDUALS IN INSECT SOCIETIES The following points are made by Edward O. Wilson (citation below): 1) Altruism is self-destructive behavior performed for the benefit of others. The use of the word altruism in biology has been faulted by Williams and Williams (1957), who suggest that the alternative expression "social donorism" is preferable because it has less gratuitous emotional flavor. Even so, altruism has been used as a term in connection with evolutionary argumentation by Haldane (1932) and rigorous genetic theory by Hamilton (1964), and it has the great advantage of being instantly familiar. The self-destruction can range in intensity all the way from total bodily sacrifice to a slight diminishment of reproductive powers. Altruistic behavior is of course commonplace in the responses of parents toward their young. It is far less frequent, and for our purposes much more interesting, when displayed by young toward their parents or by individuals toward siblings or other, more distantly related members of the same species. Altruism is a subject of importance in evolution theory because it implies the existence of group selection, and its extreme development in the social insects is therefore of more than ordinary interest. 
The great scope and variety of the phenomenon in the social insects is best indicated by citing a few concrete examples: a) The soldier caste of most species of termites and ants is virtually limited in function to colony defense. Soldiers are often slow to respond to stimuli that arouse the rest of the colony, but, when they do, they normally place themselves in the position of maximum danger. When nest walls of higher termites such as Nasutitermes are broken open, for example, the white, defenseless nymphs and workers rush inward toward the concealed depths of the nest, while the soldiers press outward and mill aggressively on the outside of the nest. Nutting (personal communication) witnessed soldiers of Amitermes emersoni in Arizona emerge from the nest well in advance of the nuptial flights, wander widely around the nest vicinity, and effectively tie up in combat all foraging ants that could have endangered the emerging winged reproductives. b) I have observed that injured workers of the fire ant Solenopsis saevissima leave the nest more readily and are more aggressive on the average than their uninjured sisters. Dying workers of the harvesting ant Pogonomyrmex badius tend to leave the nest altogether. Both effects may be no more than meaningless epiphenomena, but it is also likely that the responses are altruistic. To be specific, injured workers are useless for most functions other than defense, while dying workers pose a sanitary problem. c) Alarm communication, which is employed in one form or other throughout the higher social groups, has the effect of drawing workers toward sources of danger while protecting the queens, the brood, and the unmated sexual forms. d) Honeybee workers possess barbed stings that tend to remain embedded when the insects pull away from their victims, causing part of their viscera to be torn out and the bees to be fatally injured. 
A similar defensive maneuver occurs in many polybiine wasps, including Synoeca surinama and at least some species of Polybia and Stelopolybia and the ant Pogonomyrmex badius. The fearsome reputation of social bees and wasps in comparison with other insects is due to their general readiness to throw their lives away upon slight provocation. e) When fed exclusively on sugar water, honeybee workers can still raise larvae -- but only by metabolizing and donating their own tissue proteins. That this donation to their sisters actually shortens their own lives is indicated by the finding of de Groot (1953) that longevity in workers is a function of protein intake. f) Female workers of most social insects curtail their own egg laying in the presence of a queen, either through submissive behavior or through biochemical inhibition. The workers of many ant and stingless bee species lay special trophic eggs that are fed principally to the larvae and queen. g) The "communal stomach", or distensible crop, together with a specially modified proventriculus, forms a complex storage and pumping system that functions in the exchange of liquid food among members of the same colony in the higher ants. In both honeybees and ants, newly fed workers often press offerings of ingluvial food on nestmates without being begged, and they may go so far as to expend their supply to a level below the colony average. 2) These diverse physiological and behavioral responses are difficult to interpret in any way except as altruistic adaptations that have evolved through the agency of natural selection operating at the colony level. The list by no means exhausts the phenomena that could be placed in the same category. Adapted from: Edward O. Wilson: The Insect Societies. Harvard University Press 1971, p.321. 
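The selective logic in the Ridley excerpt above (points 2 and 3) can be made concrete with a toy simulation. This is my own construction, not a model from either cited text: a haploid population in which altruists pay cost c, and aid is either dispensed indiscriminately or directed only at other altruists.

```python
# Toy selection model (illustrative sketch, not from Ridley or Wilson).
# Altruists pay fitness cost c. Under indiscriminate aid, every individual
# receives benefit in proportion to the altruist frequency p; under
# directed aid, only altruists receive the full benefit b.

def next_freq(p, b, c, directed):
    """One generation of selection on the altruist frequency p."""
    w_alt = 1 + (b if directed else p * b) - c   # altruist fitness
    w_self = 1 + (0 if directed else p * b)      # selfish fitness
    w_bar = p * w_alt + (1 - p) * w_self         # mean fitness
    return p * w_alt / w_bar

def evolve(p, b, c, directed, gens=200):
    for _ in range(gens):
        p = next_freq(p, b, c, directed)
    return p

# With b > c, directed altruism spreads to fixation, while exactly the
# same behavior dispensed indiscriminately is eliminated, because selfish
# types collect the benefits without paying the cost.
print(evolve(0.5, b=0.3, c=0.1, directed=True))   # approaches 1
print(evolve(0.5, b=0.3, c=0.1, directed=False))  # approaches 0
```

Kin-directed altruism works in this framing because relatives stand in for recognizable altruists: aiming aid at kin concentrates it on carriers of the altruism gene.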
From checker at panix.com Mon May 23 19:48:46 2005 From: checker at panix.com (Premise Checker) Date: Mon, 23 May 2005 15:48:46 -0400 (EDT) Subject: [Paleopsych] SW: Racial/Ethnic Disparities in Neonatal Mortality Message-ID: Racial/Ethnic Disparities in Neonatal Mortality http://scienceweek.com/2004/sb041224-6.htm The following points are made by S.L. Lukacs and K.C. Schoendorf (Morb. Mort. Wkly. Rep. 2004 53:655): 1) Neonatal mortality (i.e., death at age less than 28 days) accounts for approximately two-thirds of infant deaths in the US. During 1989-2001, neonatal mortality rates (NMRs) declined. However, 2002 preliminary data indicated an increase. To characterize trends in neonatal mortality by gestational age and race/ethnicity, the Centers for Disease Control and Prevention (CDC) analyzed linked birth/infant death data sets for 1989-1991 and 1995-2001 (2002 linked data were not available). 2) Results indicated that (a) extremely preterm infants (i.e., born at less than 28 weeks' gestation) accounted for 49%-58% of neonatal deaths during 1989-2001 and (b) racial/ethnic disparities persisted despite NMR declines among infants of all gestational ages.(1,2) 3) The findings document a considerable decline in neonatal mortality among infants of all gestational ages and racial/ethnic populations during the 1990s; despite this decline, racial/ethnic disparities persisted. Implementation of new therapies and recommendations likely contributed to the decline; however, the effects of these advances might differ within racial/ethnic populations. 
The medical advances include (a) surfactant therapy, which improves infant lung maturity, resulting in a decreased risk for death for high-risk preterm infants(3); (b) folic acid consumption by women of childbearing age to reduce the risk for neural tube defects(4); and (c) intrapartum antimicrobial prophylaxis for women colonized with or at high risk for maternal-infant transmission of group B streptococcal infection.(5) 4) In 2001, blacks continued to have the highest overall NMR, more than twice that of any other racial/ethnic population. The high rate among this population is likely attributable to a combination of high mortality among black infants born at 37 weeks' gestation or later (full-term infants account for approximately 90% of all births) and a high proportion of preterm births (17.6% black preterm births versus 10.8% white preterm births). 5) Preterm white infants had higher NMRs in 2001, compared with other racial/ethnic populations, despite a greater rate of decline in mortality. Although black preterm infants had lower NMRs in 2001, the annual rate of decline was lower than among other racial/ethnic populations. The narrowing gap in mortality between preterm white infants and preterm black infants might reflect the widened distribution of neonatal intensive care in the 1990s beyond urban tertiary-care centers and a possible difference in benefit from surfactant therapy between black and white infants. 6) Differences in neonatal mortality trends among racial/ethnic populations also might be explained by changing patterns in the occurrence of multiple births. The rate of multiple births has increased substantially over the preceding decade, and trends vary among infants of different races/ethnicities. References (abridged): 1. National Center for Health Statistics. National Center for Health Statistics linked birth/infant death data set: 1989-91 cohort data, 1995-2001 period data. Hyattsville, Maryland: U.S. 
Department of Health and Human Services, CDC, National Center for Health Statistics, 2003 2. Alexander GR, Himes JH, Kaufman RB, Mor J, Kogan M. A United States national reference for fetal growth. Obstet Gynecol. 1996;87:163-168 3. Horbar JD, Wright EC, Onstad L, National Institute of Child Health and Human Development Neonatal Research Network. Decreasing mortality associated with the introduction of surfactant therapy: an observational study of neonates weighing 601 to 1,300 grams at birth. Pediatrics. 1993;92:191-196 4. Mathews TJ, Honein MA, Erickson JD. Spina bifida and anencephaly prevalence--United States, 1991-2001. MMWR Recomm Rep. 2002;51(RR-13):9-11 5. CDC. Prevention of perinatal group B streptococcal disease: a public health perspective. MMWR Recomm Rep. 1996;45(RR-7):1-24 Centers for Disease Control and Prevention http://www.cdc.gov -------------------------------- Related Material: MEDICAL BIOLOGY: ON FETAL ALCOHOL SPECTRUM DISORDER The following points are made by R.J. Sokol et al (J. Am. Med. Assoc. 2003 290:2996): 1) "Fetal alcohol syndrome" (FAS), currently considered part of "fetal alcohol spectrum disorder" (FASD), was first described in 1973.(1) Although much has been learned in 30 years, substantial challenges remain in diagnosing and preventing this disorder. Individuals with FAS have characteristic facial dysmorphology (midfacial hypoplasia, long smooth philtrum, thin upper lip, small eyes that appear widely spaced, and inner epicanthal folds); growth restriction, including relative microcephaly; and central nervous system and neurodevelopmental abnormalities, including ophthalmic involvement. As children, they typically struggle in school because of decreased cognitive functioning and social problems. 
2) Fetal alcohol syndrome is diagnosed when characteristic facial dysmorphology, growth restriction, and central nervous system/neurodevelopmental abnormalities are present, with or without confirmed prenatal alcohol exposure.(2) Although it has long been recognized that affected individuals may have some but not all of the FAS characteristics, research has not identified a reliable way of defining those individuals who are less affected. Fetal alcohol effects (FAE), prenatal alcohol effects (PAE), alcohol-related birth defects (ARBD), and alcohol-related neurodevelopmental disorder (ARND) have all previously been suggested as terms to identify those children with a spectrum of problems but not with classic FAS. 3) Although much available research still uses the older nomenclatures, the term FASD has recently been used by advocates, educators, and federal agencies (National Institute on Alcohol Abuse and Alcoholism and Centers for Disease Control and Prevention) as an umbrella term to cover the range of outcomes associated with all levels of prenatal alcohol exposure. Adoption of a common and overarching term, such as FASD, will allow researchers and physicians who work with affected individuals to better understand and describe the current state of knowledge. 4) How much drinking during pregnancy is too much? For nonpregnant women, physicians and many researchers define light drinking as 1.2 drinks per day, moderate drinking as 2.2 drinks per day, and heavy drinking as 3.5 or more drinks per day.(3) However, risk-drinking during pregnancy (enough to potentially damage offspring) has been defined as an average of more than 1 drink (0.5 oz) per day,(4) or less if massed (binges of 5 or more drinks per episode). 
Although many reports of adverse effects related to prenatal exposure involve heavier drinking,(5) recent research documenting deleterious outcomes for children prenatally exposed to small amounts of alcohol (0.5 drink per day) has led to recognition that a threshold has not been adequately identified. This, along with varying susceptibility (vulnerability), leads to the conclusion and recommendations by both the American Academy of Pediatrics and the American College of Obstetricians and Gynecologists that abstinence during pregnancy should be recommended to preconceptional and pregnant women. 5) Detection of maternal alcohol exposure is a particular challenge; no reliable biological marker is available. Although analysis of both meconium and hair samples for fatty acid ethyl esters has been proposed, there are no large population-based validation studies for these methods. Similarly, other biochemical markers, including gamma-glutamyl transferase, hemoglobin-associated acetaldehyde, and carbohydrate-deficient transferrin, have not yet been validated or have not been shown to have adequate diagnostic sensitivity and specificity in identifying drinking in pregnant women. Most researchers and physicians rely on self-report of maternal alcohol use during pregnancy, with underreporting common because of stigmatization of drinking during pregnancy. Alcohol use histories must be sensitively elicited to yield complete information. Studies indicate that obstetricians often obtain inaccurate consumption information. For example, in a prospective study that included high-risk women, almost twice as many admitted to drinking during a research assessment compared with indications from maternal medical records. References (abridged): 1. Jones KL, Smith DW. Recognition of the fetal alcohol syndrome in early infancy. Lancet. 1973;2:999-1001 2. Stratton K, ed, Howe C, ed, Battaglia F, ed. Fetal Alcohol Syndrome: Diagnosis, Epidemiology, Prevention, and Treatment. 
Washington, DC: National Academy Press; 1996 3. Abel EL, Kruger ML, Friedl J. How do physicians define "light," "moderate," and "heavy" drinking? Alcohol Clin Exp Res. 1998;22:979-984 4. Hankin JR, Sokol RJ. Identification and care of problems associated with alcohol ingestion in pregnancy. Semin Perinatol. 1995;19:286-292 5. Roebuck TM, Mattson SN, Riley EP. Behavioral and psychological profiles of alcohol-exposed children. Alcohol Clin Exp Res. 1999;23:1070-1076 J. Am. Med. Assoc. http://www.jama.com -------------------------------- ON FETAL ALCOHOL SYNDROME The following points are made by L. Miller et al (J. Am. Med. Assoc. 2002 288:38): 1) Fetal alcohol syndrome is caused by maternal alcohol use during pregnancy and is one of the leading causes of preventable birth defects and developmental disabilities in the US. Fetal alcohol syndrome is diagnosed on the basis of a combination of growth deficiency (pre- or postnatal), central nervous system dysfunction, facial dysmorphology, and maternal alcohol use during pregnancy. Estimates of the prevalence vary from 0.2 to 1.0 per 1,000 live-born infants. This variation is due, in part, to the small size of the populations studied, varying case definitions, and different surveillance methods. In addition, differences have been noted among racial/ethnic populations. To monitor the occurrence of fetal alcohol syndrome, the Centers for Disease Control (CDC) collaborated with five states (Alaska, Arizona, Colorado, New York, and Wisconsin) to develop the Fetal Alcohol Syndrome Surveillance Network (FASSNet). The authors report a summary of the results of an analysis of FASSNet data on children born during 1995-1997, which indicate that FAS rates in Alaska, Arizona, Colorado, and New York ranged from 0.3 to 1.5 per 1,000 live-born infants and were highest for black and American Indian/Alaska Native populations. 
2) The CDC suggests this report demonstrates that maternal alcohol use during pregnancy continues to affect children. Recent data indicate that the prevalence of binge (i.e., 5 or more drinks on any one occasion) and frequent drinking (i.e., 7 or more drinks per week or 5 or more drinks on any one occasion) during pregnancy reached a high point in 1995 and has not declined. FASSNet prevalence rates are similar to rates published previously from population-based prevalence studies, despite different case definitions and surveillance methods. These data indicate that children born to mothers in certain racial/ethnic populations have consistently higher prevalence rates of fetal alcohol syndrome. For example, prevalence was 3.0 per 1,000 live-born infants for American Indians/Alaska Natives during 1977-1992 compared with 0.2 for other Alaska residents during the same period. FASSNet findings confirm higher prevalence rates among black and American Indian/Alaska Native populations. Alaska health authorities have increased efforts to address this health problem. Increased awareness of maternal alcohol use and more complete documentation by Alaska Native health organizations might result in more vigilant reporting of potential cases of FAS, which could contribute to high reported FAS prevalence in this population. 3) The number of children affected adversely by in-utero exposure to alcohol is probably underestimated for at least four reasons. First, some fetal alcohol syndrome cases might not be diagnosed because of the syndromic nature of the condition, the lack of pathognomonic features, and the negative perceptions of fetal alcohol syndrome diagnosis. Second, medical records of children with fetal alcohol syndrome often lack sufficient documentation to determine case status. 
For example, 10 children diagnosed with fetal alcohol syndrome by a clinical geneticist, dysmorphologist, or developmental pediatrician did not meet the surveillance case definition for confirmed or probable fetal alcohol syndrome because documentation in the abstracted medical records was insufficient or the child did not meet FASSNet surveillance case definition criteria. However, adding these 10 children to the total case count would change the overall prevalence only slightly, from 0.43 to 0.45 per 1,000 live-born infants. Third, some children might not be identified as having fetal alcohol syndrome until they reach school age, at which point central nervous system abnormalities and learning disabilities are recognized more easily. Because only part of the cohort under surveillance was of school age and education records were not used in this surveillance system, the actual number of cases might have been underestimated. Finally, an unknown number of persons with fetal alcohol syndrome left the surveillance area before being identified by the surveillance system. Because of the small numbers and differences in sources and awareness among clinicians, prevalence rates across racial/ethnic populations and across states should be compared with caution. 4) The CDC suggests that ongoing, consistent, population-based surveillance systems are necessary to measure the occurrence of fetal alcohol syndrome and the impact of fetal alcohol syndrome prevention activities. These systems also are useful in evaluating the need for early intervention and special education services for children with birth defects such as fetal alcohol syndrome. 
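The report's observation that adding 10 missed cases moves the overall prevalence only from 0.43 to 0.45 per 1,000 can be sanity-checked with the basic prevalence formula, cases / live births x 1,000. The birth count below is an assumed round number chosen to be consistent with the reported rates, not a figure from the report:

```python
# Prevalence per 1,000 live births = cases / births * 1000.
# An assumed denominator of ~500,000 live births is consistent with a
# shift of 0.43 -> 0.45 per 1,000 when 10 cases are added.
births = 500_000
base_cases = 215                      # 215 / 500,000 * 1,000 = 0.43
rate_before = base_cases / births * 1000
rate_after = (base_cases + 10) / births * 1000
print(round(rate_before, 2), round(rate_after, 2))  # 0.43 0.45
```

The arithmetic shows why a handful of clinically diagnosed but unconfirmable cases barely moves a population-level rate, which is the point the surveillance report is making.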
Centers for Disease Control and Prevention http://www.cdc.gov From checker at panix.com Mon May 23 19:48:57 2005 From: checker at panix.com (Premise Checker) Date: Mon, 23 May 2005 15:48:57 -0400 (EDT) Subject: [Paleopsych] SW: Mortality and Lifespan Message-ID: Evolutionary Biology: Mortality and Lifespan http://scienceweek.com/2004/sc041217-1.htm The following points are made by Peter A. Abrams (Nature 2004 431:1048): 1) Recent work(1) has involved an investigation of one of the main factors that influence the evolution of an organism's lifespan. That factor is the risk of dying that a population faces as a result of environmental conditions (e.g., predation). The study subjects were guppies, small tropical fish that are widely used in evolutionary studies, and the authors have provided the first experimental support for the prediction that a higher environmental risk of mortality can select for inherently longer-lived organisms. 2) Guppies from the lower reaches of several rivers in Trinidad are subject to much higher rates of predation than those in the upper parts of the same rivers, where waterfalls block access by larger fish. In predator-free lab experiments, Reznick et al(1) found that guppies from the high-predation segments of two of the rivers lived up to 35% longer than those from low-predation segments of the same watercourse. In addition, the guppies from high-predation sites had a 40% longer reproductive span and reproduced at a higher rate. So a background of higher mortality under natural conditions has apparently led to the evolution of both a longer lifespan and a longer reproductive span. 3) Some history is required to see why this observation is surprising. Environmentally caused ("extrinsic") mortality has long been recognized as a key factor determining how natural selection molds "intrinsic" mortality -- the death rate that a population would have under some standardized, generally benign, set of environmental conditions. 
Although evolution should favor lower intrinsic mortality (and a longer intrinsic lifespan) when all else is equal, many organisms face a trade-off between higher levels of reproduction and lower levels of intrinsic mortality. One of the main reasons that senescence occurs is that repair is costly: resources that are devoted to maintaining an organism are not available for reproduction. In the 1950s, Peter Medawar(2) and George Williams(3) pointed out that high extrinsic mortality could favor shorter intrinsic lifespan. Why, they reasoned, should an organism invest in costly repair that will probably only ensure that it is in prime physical condition when its life ends? Higher extrinsic mortality should favor low investment in repair, and thus a high intrinsic mortality and a short intrinsic lifespan. 4) But this reasoning did not take account of two further factors. One is that higher extrinsic mortality also slows the rate of population growth, and more slowly growing populations are expected to evolve to have lower rates of intrinsic mortality and a longer lifespan(4,5). The other factor is the interaction between extrinsic mortality factors and physiological repair or maintenance(5). If predators can be evaded by fast, but not by slow prey, greater predation risk should select for greater maintenance of the body systems essential for fast movement. This higher level of repair would then prolong intrinsic lifespan. References (abridged): 1. Reznick, D. N., Bryant, M. J., Roff, D., Ghalambor, C. K. & Ghalambor, D. E. Nature 431, 1095-1099 (2004) 2. Medawar, P. B. An Unsolved Problem of Biology (Lewis, London, 1952) 3. Williams, G. C. Evolution 11, 398-411 (1957) 4. Charlesworth, B. Evolution in Age-Structured Populations (Cambridge Univ. Press, 1980) 5. Abrams, P. A. Evolution 47, 877-887 (1993) Nature http://www.nature.com/nature -------------------------------- Related Material: SIGNALS FROM THE REPRODUCTIVE SYSTEM REGULATE THE LIFESPAN OF C. ELEGANS. 
The following points are made by H. Hsin and C. Kenyon (Nature 1999 399:308): 1) Understanding how the ageing process is regulated is a fascinating and fundamental problem in biology. The authors demonstrate that signals from the reproductive system influence the lifespan of the nematode Caenorhabditis elegans. If the cells that give rise to the germ line are killed with a laser microbeam, the lifespan of the animal is extended. 2) The authors suggest their findings indicate that germline signals act by modulating the activity of an insulin/IGF-1 (insulin-like growth factor) pathway that is known to regulate the ageing of this organism. Mutants with reduced activity of the insulin/IGF-1-receptor homologue DAF-2 have been shown to live twice as long as normal, and their longevity requires the activity of DAF-16, a member of the forkhead/winged-helix family of transcriptional regulators. 3) The authors find that in order for germline ablation to extend lifespan, DAF-16 is required, as well as a putative nuclear hormone receptor, DAF-12. In addition, the findings suggest that signals from the somatic gonad also influence ageing, and that this effect requires DAF-2 activity. 4) The authors suggest that together their findings imply that the C. elegans insulin/IGF-1 system integrates multiple signals to define the animal's rate of ageing. The authors suggest this study demonstrates an inherent relationship between the reproductive state of this animal and its lifespan, and may have implications for the co-evolution of reproductive capability and longevity. Nature http://www.nature.com/nature -------------------------------- Related Material: AGING, LIFESPAN, AND SENESCENCE Notes by ScienceWeek: Our knowledge of the basis of senescence of cells, tissues, and organisms (including humans) has entered a new phase in recent decades because of the new vistas opened by molecular biology. 
Model systems have started to provide insights, and one important approach has been the identification of genes that determine the lifespan of an organism. The very existence of genes that when mutated can extend lifespan suggests to many researchers that one or a few processes may be critical in aging, and that a slowing of these processes may slow aging itself. The following points are made by L. Guarente et al (Proc. Nat. Acad. Sci. 1998 95:11034): 1) In the budding yeast Saccharomyces cerevisiae, aging results from the asymmetry of cell division, which produces a large mother cell and a small daughter cell arising from the bud. Much of the macromolecular composition of the daughter cell is newly synthesized, whereas the composition of the mother cell grows older with each cell division. It has been shown that mother cells of this yeast species divide a relatively fixed number of times, and exhibit a slowing of the cell cycle, cell enlargement, and sterility. Analysis of *ribosomal DNA in old cells reveals an accumulation of *extrachromosomal ribosomal DNA of discrete sizes, apparently representing a cumulative fragmentation of chromosomal ribosomal DNA. The authors suggest it will be of great interest to assess the generality of this process as an aging mechanism. 2) In Caenorhabditis elegans, the *neurosecretory system regulates whether animals enter the reproductive life cycle or arrest development at a primitive *diapause stage. Developmental arrest is apparently induced by a *pheromone and involves behavioral and morphological changes in many tissues of the animal, with the lifespan becoming 4 to 8 times longer than that of the normal 3-week lifespan of fully developed animals. Declines in pheromone concentration induce recovery to reproductive adults with normal metabolism and lifespan. Genes that regulate the function of the C. 
elegans diapause and the neuroendocrine aging pathway have been identified, and at least one of these genes codes for an *insulin-like receptor apparently involved in metabolism. The authors suggest that if the association of longevity and diapause is general, it is possible that *polymorphisms in the human insulin receptor-signaling pathway genes and related gene *homologues may underlie genetic variation in human longevity. 3) In plants, there is a large range of lifespans in the various plant kingdoms. Certain tree species live for well over a century, whereas other plants complete their life cycle in a few weeks. The "yellowing" of leaves is often referred to in the plant literature as leaf senescence or the "senescence syndrome" -- referring to the process by which nutrients are mobilized from the dying leaf to other parts of the plant to support their growth. The senescence syndrome is characterized by distinct cellular and molecular changes, with the chloroplast the first part of the cell to undergo disassembly (producing the "yellowing"). In many plant species, certain hormones can either enhance or delay senescence. Although the genes that are expressed during the plant senescence syndrome (as well as ways to manipulate such senescence) have been identified, much remains to be done to understand the molecular basis of aging in plants. For example, nothing is known about the signal transduction pathways that lead to altered gene expression during senescence, or how plant hormones such as *cytokinin influence senescence. But there are now many tools to explore this process. The authors conclude: "It remains to be seen whether common mechanisms link the aging process in diverse organisms." Proc. Nat. Acad. Sci. 
http://www.pnas.org -------------------------------- Notes by ScienceWeek: ribosomal DNA: A ribosome (not to be confused with riboZYME) is a small particle, a complex of various ribonucleic acid component subunits and proteins that functions as the site of protein synthesis. The term "ribosomal DNA" refers to the gene or genes that code for the RNA in ribosomes. In other words, the term "ribosomal DNA" does not refer to any DNA in ribosomes (there is no DNA in ribosomes). extrachromosomal: In general, this refers to anything outside of chromosomes, and in this case to DNA fragments unincorporated into chromosomal DNA. neurosecretory system: In general, all neural systems contain both neurons that themselves secrete chemical messengers and neurons that signal special secretory cells to secrete chemical messengers. A neurosecretory pathway is a delineated signaling system that involves such a resultant secretion. diapause: In general, this refers to any programmed period of suspended development in invertebrates. pheromone: In general, a chemical substance which, when released into an animal's surroundings, influences the development or behavior of other individuals of the same species. insulin: A protein hormone that promotes uptake by body cells of free glucose and/or amino acids, depending on target cell type. polymorphisms: A genetic polymorphism is a naturally occurring variation in the normal nucleotide sequence of the genome within individuals in a population. Variations are denoted as polymorphisms only if they cannot be accounted for by recurrent mutation and occur with a frequency of at least about 1 percent. homologues: In general, the term "homologous" means having the same structure. But the term has special uses in genetics and evolutionary biology. cytokinin: A group of plant growth substances. They are chemically identified as derivatives of the purine base adenine. They stimulate cell division and determine the course of differentiation. 
They work synergistically with other plant hormones called "auxins". From checker at panix.com Mon May 23 19:49:11 2005 From: checker at panix.com (Premise Checker) Date: Mon, 23 May 2005 15:49:11 -0400 (EDT) Subject: [Paleopsych] SW: On Race-Based Therapeutics Message-ID: Public Health: On Race-Based Therapeutics http://scienceweek.com/2004/sc041210-5.htm The following points are made by M. Gregg Bloche (New Engl. J. Med. 2004 351:2035): 1) Are we moving into a new era of race-based therapeutics? The recent publication of the African-American Heart Failure Trial (A-HeFT), a clinical trial of a medication intended for a single racial group, poses this awkward question. The study's most striking finding -- that the addition of isosorbide dinitrate and hydralazine to conventional therapy for heart failure reduced relative one-year mortality by 43 percent among blacks -- has provoked wide discussion. The trial's sponsor, NitroMed, which holds a patent on the fixed-dose combination of isosorbide dinitrate and hydralazine that was used, posits that heart failure has a different pathophysiology in blacks than in whites, necessitating different treatment strategies.(1) 2) The reported 43 percent relative decrease in the rate of death due to heart failure among blacks is cause for celebration. There is wide agreement that blacks die from heart failure at rates disproportionate to those among whites. But to assess A-HeFT's larger implications for the role of race in therapeutic design, it is important to be clear about what the study has not shown. 3) First, A-HeFT has not established that adding isosorbide dinitrate and hydralazine to conventional therapy for heart failure yields greater benefits for blacks than for other racial or ethnic groups. The study, which enrolled only self-identified blacks, did not test this hypothesis. 
The clinical and economic logic behind A-HeFT's design has its roots in previous, multiracial studies that compared isosorbide dinitrate and hydralazine with other investigational drugs, administered in combination with different conventional therapies. These therapies were standard in their day but are inferior to the conventional therapy used today, which typically includes an angiotensin-converting-enzyme (ACE) inhibitor. Indeed, one of these previous studies helped to establish ACE inhibitors as standard treatment. This trial compared isosorbide dinitrate and hydralazine with the ACE inhibitor enalapril and demonstrated that enalapril resulted in a greater overall reduction in mortality.(2) 4) An ill-defined subgroup of patients, though, did well when treated with isosorbide dinitrate and hydralazine and fared poorly with enalapril. Seizing on this opportunity, a biotechnology firm obtained intellectual-property rights to a fixed-dose combination of isosorbide dinitrate and hydralazine and sought approval from the Food and Drug Administration (FDA) in 1996 to market this formulation as a new drug. The FDA declined, citing statistical uncertainties in the trial data.(1) That is when race entered the picture. A group of investigators (including the holder of the patent on the combination treatment) reanalyzed the previous clinical-trial data according to race and concluded in 1999 that the combination treatment did as well as enalapril at prolonging the lives of black patients with heart failure.(3) Other work suggested that ACE inhibitors were less effective in blacks than in whites. 5) At this point, it might have made clinical and scientific sense to add isosorbide dinitrate and hydralazine to conventional therapy (which by now typically included an ACE inhibitor) and to compare this combination to conventional therapy alone -- for all patients with heart failure, regardless of race. 
Such a trial had not been performed, since the standard therapies used in earlier trials did not include ACE inhibitors. But race consciousness offered a faster way through the FDA's regulatory maze. In 1999, NitroMed obtained intellectual-property rights to fixed-dose isosorbide dinitrate and hydralazine and said it would seek FDA approval to market the formulation as a therapy for heart failure in blacks. Two years later, the FDA indicated to NitroMed that successful completion of a clinical trial in black patients with heart failure would probably result in approval.(1) This commitment gave rise to A-HeFT, and the publication of this trial's results virtually ensures FDA approval. 6) We need not shy away from the potential benefits of race-conscious therapeutics, but we should manage its downside risks. Greater awareness among physicians and the public that race is at best a placeholder for other predispositions, and not a biologic verity, would be a first step. Beyond such awareness, companies -- such as NitroMed -- that stand to gain from taking account of race could commit a substantial portion of their profits to research on genetic, psychosocial, and other mechanisms that might underlie racial gaps in clinical response.(3-5) References (abridged): 1. Kahn J. How a drug becomes "ethnic": law, commerce, and the production of racial categories in medicine. Yale J Health Policy Law Ethics 2004;4:1-46 2. Cohn JN, Archibald DG, Ziesche S, et al. A comparison of enalapril with hydralazine-isosorbide dinitrate in the treatment of chronic congestive heart failure. N Engl J Med 1991;325:303-310 3. Carson P, Ziesche S, Johnson G, Cohn JN. Racial differences in response to therapy for heart failure: analysis of the vasodilator-heart failure trials. J Card Fail 1999;5:178-187 4. Lifton RJ. The Nazi doctors: medical killing and the psychology of genocide. New York: Basic Books, 1986 5. Cacioppo JT, Hawkley LC. 
Social isolation and health, with an emphasis on underlying mechanisms. Perspect Biol Med 2003;46:Suppl:S39-S52 New Engl. J. Med. http://www.nejm.org -------------------------------- Related Material: ANTHROPOLOGY: ON HUMANS AND RACE The following points are made by D.A. Hughes et al (Current Biology 2004 14:R367): 1) Systematists have not defined a "type specimen" for humans, in contrast to other species. Recent attempts to provide a definition for our species, so-called "anatomically modern humans", have suffered from the embarrassment that exceptions to such definitions inevitably arise -- so are these exceptional people then not "human"? Anyway, in comparison with our closest-living relatives, chimpanzees, and in light of the fossil record, the following trends have been discerned in the evolution of modern humans: increase in brain size; decrease in skeletal robusticity; decrease in size of dentition; a shift to bipedal locomotion; a longer period of childhood growth and dependency; increase in lifespan; and increase in reliance on culture and technology. 2) The traditional classification of humans as Homo sapiens, with our very own separate family (Hominidae) goes back to Carolus Linnaeus (1707-1778). Recently, the controversial suggestion has been made of lumping humans and chimpanzees together into at least the same family, if not the same genus, based on the fact that they are 98-99% identical at the nucleotide sequence level. DNA sequence similarity is not the only basis for classification, however: it has also been proposed that, in a classification based on cognitive/mental abilities, humans would merit their own separate kingdom, the Psychozoa (which does have a nice ring to it). 3) As for sub-categories, or "races", of humans, in his Systema Naturae of 1758 Linnaeus recognized four principal geographic varieties or subspecies of humans: Americanus, Europaeus, Asiaticus, and Afer (Africans). 
He defined two other categories: Monstrosus, mostly hairy men with tails and other fanciful creatures, but also including some existing groups such as Patagonians; and Ferus, or "wild boys", thought to be raised by animals, but actually retarded or mentally ill children who had been abandoned by their parents. In his scheme of 1795, Johann Blumenbach (1752-1840) added a fifth category, Malay, including Polynesians, Melanesians and Australians. 4) Blumenbach is also responsible for using the term "Caucasian" to refer in general to Europeans, which he chose on the basis of physical appearance. He thought Europeans had the greatest physical beauty of all humans -- not surprising, as he was of course European himself -- and amongst Europeans he thought those from around Mount Caucasus the most beautiful. Hence, he named the "most beautiful race" of people after their supposedly most beautiful variety -- a good reason to avoid using the term "Caucasian" to refer to people of generic European origin (another is to avoid confusion with the specific meaning of "Caucasian", namely people from the Caucasus). 5) The extent to which racial classifications of humans reflect any underlying biological reality is highly controversial; proponents of racial classification schemes have been unable to agree on the number of races (proposals range from 3 to more than 100), let alone how specific populations should be classified, which would seem to greatly undermine the utility of any such racial classification. Moreover, the apparent goal of investigating human biological diversity is to ask how such diversity is patterned and how it came to be the way that it is, rather than how to classify populations into discrete "races".(1-4) References: 1. Nature Encyclopedia of the Human Genome. (2003). Cooper, D. ed. (Nature Publishing Group). 2. Fowler, C.W. and Hobbs, L. (2003). Is humanity sustainable? Proc. R. Soc. Lond. B. Biol. Sci. 270, 2579-2583 3. 
Encyclopedia of Human Evolution and Prehistory. (1988). Tattersall, I., Delson, E., and Van Couvering, J. eds. (Garland Publishing) 4. World Health Organization Website http://www.who.int Current Biology http://www.current-biology.com -------------------------------- Related Material: EFFECT OF RACE AND SEX OF PATIENTS ON TREATMENT FOR CHEST PAIN Notes by ScienceWeek: Epidemiological studies have identified differences according to race and sex in the US in the treatment of patients with cardiovascular disease. Some studies have found that blacks and women are less likely than whites and men, respectively, to undergo *cardiac catheterization or coronary artery bypass graft surgery when they are admitted to the hospital for treatment of chest pain or *myocardial infarction. In contrast, other studies were unable to confirm that invasive procedures are underused in women. One question that has not been addressed directly by previous studies is the extent to which attitudes of physicians (in addition to any possible social and economic factors) are responsible for the differences in treatment recommendations with respect to race and sex. The following points are made by K.A. Schulman et al (New England J. Med. 1999 340:619): 1) The authors report the results of a study to assess, in a controlled experiment, treatment recommendations by physicians for patients presenting with various types of chest pain. The authors report they developed a computerized survey instrument, with actors portraying patients with particular characteristics in scripted interviews about their symptoms. A total of 720 physicians at 2 national meetings of organizations of primary care physicians participated in the survey. Each physician viewed a recorded interview and was given other data about a hypothetical patient, and the physician then made recommendations about the care of that patient. 
2) The authors report that women and blacks were less likely to be referred for cardiac catheterization than men and whites (*odds ratio = 0.6), respectively, and that analysis of the race-sex interactions indicated that black women were significantly less likely to be referred for catheterization than white men (odds ratio = 0.4). 3) The authors suggest their findings indicate that the race and sex of patients independently influence recommendations by physicians for the management of chest pain, and that decision-making by physicians may be an important factor in explaining differences in the US in the treatment of cardiovascular disease with respect to race and sex. New Engl. J. Med. http://www.nejm.org -------------------------------- Notes by ScienceWeek: cardiac catheterization: (intracardiac catheterization) This involves the passage of a catheter (a small-diameter tubular instrument) into the heart through a vein or artery, to withdraw samples of blood, measure pressures within the chambers of the heart or the larger vessels of the heart, or to inject contrast media for visualization techniques. The cardiac catheterization technique is used primarily in the diagnosis and evaluation of congenital, rheumatic, and coronary artery lesions, and to evaluate various dynamic aspects of cardiac function. myocardial infarction: (myocardial infarct) In general, an "infarct" is an area of necrosis caused by a sudden insufficiency of blood supply, and a "myocardial infarction" (cardiac infarction) is such damage of an area of heart muscle, usually as a result of occlusion of a coronary artery. odds ratio: In this context, the ratio of the odds of an event (here, referral) in one group to the odds of the event in another group. 
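An odds ratio like the study's 0.6 or 0.4 compares odds (events divided by non-events), not raw probabilities. A minimal sketch with invented referral counts (the survey's actual counts are not reproduced here):

```python
# Odds ratio from a 2x2 referral table: odds of the event in group A
# divided by odds of the event in group B. The counts below are invented
# for illustration; they are not the study's data.
def odds_ratio(event_a, no_event_a, event_b, no_event_b):
    odds_a = event_a / no_event_a  # odds = events / non-events
    odds_b = event_b / no_event_b
    return odds_a / odds_b

# Hypothetical: 120 of 180 women referred vs. 150 of 180 men referred.
print(odds_ratio(120, 60, 150, 30))  # prints: 0.4
```

With common events, as here, the odds ratio is more extreme than the corresponding risk ratio (120/180 vs. 150/180 is a risk ratio of 0.8, but an odds ratio of 0.4), which is one reason the two statistics should not be conflated.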
From checker at panix.com Mon May 23 21:06:00 2005 From: checker at panix.com (Premise Checker) Date: Mon, 23 May 2005 17:06:00 -0400 (EDT) Subject: [Paleopsych] NYTDBR: (Koontz) Receiving Moral Instruction, Courtesy of a Serial Killer Message-ID: Receiving Moral Instruction, Courtesy of a Serial Killer New York Times Daily Book Review, 5.5.23 http://www.nytimes.com/2005/05/23/books/23masl.html [Thanks to Sarah for this.] By JANET MASLIN Although the thriller-sermon is an unusual hyphenate, Dean Koontz makes it his specialty. He writes about humble four-square heroes who try to lead exemplary lives but are challenged by the evils of the wider world. If those evils happen to be gruesome and sick enough to sell a lot of books, one might make the case that Mr. Koontz cannot define goodness without them. Maybe. But does every moral lesson require a Hannibal Lecter? The lurid extremes of "Velocity," Mr. Koontz's latest, come mighty close to cashing in on the very sinfulness he decries. The novel's resident good guy, Billy Wiles, is bedeviled by a serial killer who tries to make Billy complicit in his violence. He forces Billy to pick the victims. And he sends Billy such high-concept threatening letters that one of them is conveniently reprinted on the book's back cover. "Velocity" might be read as a flat-out exercise in escapist depravity - in other words, par for the course in popular crime fiction - were it not for the author's nonstop idiosyncrasies. Say this for Mr. 
He underscores Billy's devotion to Barbara, his fiancée, who has been in a coma for four years, though she continues to say cryptic, beautiful things. (These are eventually explained.) Despite this sign of life, Barbara's doctor suffers a "bioethics infection" that makes him want to remove Barbara's feeding tube. "Four years is such a long time," says the doctor. "Death is longer," says steadfast Billy. Mr. Koontz also condemns scientists who work on cloning, genetic engineering and stem cell research. ("The smarter they are, the dumber they get.") He has Billy flirt with drug use to show how it can be dangerous to the soul. "Pain is a gift," he writes, after Billy discovers Vicodin. "Humanity, without pain, would know neither fear nor pity. Without fear, there could be no humility, and every man would be a monster." Mr. Koontz saves particular scorn for the artist who is building a huge mural that will be destroyed as soon as it is completed. (The same artist once popped 20,000 helium balloons in Australia.) On the mural, a "40-foot wooden man strove to save himself from the great grinding wheels of industry or brutal ideology, or modern art." The author is closer to Christ than he is to Christo, and he wants to make that extra-specially clear. Then there is the killer who taunts Billy - or "the freak," as the fiend is often called here. It goes without saying that the freak is a sadist: his filthy deeds include nailing Billy's hand to a wooden floor and sticking fishhooks in Billy's brow. (Billy "recognized the religious reference. Christ had been called a fisher of men.") But beyond committing ugly physical violence, the freak means to shake Billy's faith. 
As the book puts it, straight from the pulpit, "There are days of doubt, more often lonely nights, when even the devout wonder if they are heirs to a greater kingdom than this earth and if they will know mercy - or if instead they are only animals like any other, with no inheritance except the wind and the dark." Mr. Koontz has the rare desire and even rarer dexterity to shoehorn passages like that into whodunit material. So this book wears its conscience on its sleeve even as it puts Billy on the receiving end of messages like this: "If you don't go to the police and get them involved, I will kill an unmarried man who won't much be missed by the world. If you do go to the police, I will kill a young mother of two." For Billy, this is even more compromising than it might be for others. "He would have preferred physical peril to the moral jeopardy that he faced," Mr. Koontz writes. Then Billy begins to realize - this is a mystery story, after all - that the killer may be somebody he knows. And as the homilies continue to pile up, so do the corpses and the kinks. But a book that features grotesque mutilations alongside sincere expressions of sweetness (a house glowing "like a centenarian's birthday cake") is suffering an identity crisis at the very least. At worst, it is drawn to the dark side with disturbing ardor. As for looking on the bright side, "Velocity" will be very popular with readers who do. It can be read more as a parable than as a self-contradictory screed if Billy's trials are rolled into one long journey through "this most temporary world." 
Here he is, seeking spiritual wisdom despite a world that makes truck stops easier to enter than churches (in the middle of the night, at least), finding himself tested at every turn: "Billy's fate was to live in a time that denied the existence of abominations, that gave the lesser name horror to every abomination, that redefined every horror as a crime, every crime as an offense, every offense as a mere annoyance." Mr. Koontz's self-appointed job here is to give those abominations their due. From shovland at mindspring.com Mon May 23 21:59:50 2005 From: shovland at mindspring.com (Steve Hovland) Date: Mon, 23 May 2005 14:59:50 -0700 Subject: [Paleopsych] Ray Kurzweil: Follow Your Passion Message-ID: <01C55FA8.1290A160.shovland@mindspring.com> WORCESTER, Mass. -- March 23, 2005 -- Ray Kurzweil, world-renowned inventor, entrepreneur, author, and futurist, will be the commencement speaker at Worcester Polytechnic Institute's (WPI) 137th graduation ceremony on Saturday, May 21. Kurzweil will discuss his ideas on the future interplay between mankind and artificial intelligence with WPI's graduates and community in his speech, "When Humans Transcend Biology." After Kurzweil's talk, the university will confer upon him an honorary degree. Widely regarded as one of the preeminent inventors and innovators of our time, Kurzweil foresees an era when the human body will be enhanced by software and computers, enabling humans to download intelligence and to live long past the current life expectancy. Kurzweil has laid out his vision in this area through his writing. He has authored five books and hundreds of articles. His first book, The Age of Intelligent Machines, was named Best Computer Science Book of 1990. His best-selling book, The Age of Spiritual Machines: When Computers Exceed Human Intelligence, has been published in nine languages and became the #1 best-selling book on Amazon.com in the categories of "science" and "artificial intelligence." 
Kurzweil's most recent work, coauthored with Terry Grossman, M.D., is Fantastic Voyage: Live Long Enough to Live Forever. His next book, The Singularity Is Near: When Humans Transcend Biology, is due to be published in September 2005. Photo by Michael Lutch. Courtesy of Kurzweil Tech. Inc. "As our graduates begin the next chapter of their lives, Ray Kurzweil is an excellent role model -- providing a firsthand example of an innovative career that has used science, technology and engineering to benefit the world," says Dennis D. Berkey, president of WPI. Kurzweil launched his thriving career in high school when he appeared on the television show "I've Got a Secret," hosted by Steve Allen. His secret was that he programmed his computer to analyze abstract patterns in musical compositions and then composed original melodies in a similar style. With this project, Kurzweil won first prize in the International Science Fair, and he was named one of the 40 Westinghouse Science Talent Search winners who were able to meet President Lyndon Johnson in a White House ceremony. Kurzweil subsequently rose to even greater success with the invention of several devices, including the first omni-font optical character recognition (OCR), the first print-to-speech reading machine for the blind, the first CCD flat-bed scanner, the first text-to-speech synthesizer, the first music synthesizer capable of recreating the grand piano and other orchestral instruments, and the first commercially marketed large-vocabulary speech recognition. He has founded and developed nine businesses in OCR, music synthesis, speech recognition, reading technology, virtual reality, financial investment, cybernetic art, and other areas of artificial intelligence. As a result of his accomplishments, Kurzweil was named in 2002 to the National Inventors Hall of Fame, which was established by the U.S. Patent Office. 
He has also been the recipient of numerous awards, including the $500,000 Lemelson-MIT Prize, the nation's largest award in invention and innovation, and the 1999 National Medal of Technology, the nation's highest honor in technology, from President Bill Clinton in a White House ceremony. Kurzweil grew up in Queens, N.Y. He received his B.S. in computer science and literature from Massachusetts Institute of Technology. From checker at panix.com Wed May 25 00:46:27 2005 From: checker at panix.com (Premise Checker) Date: Tue, 24 May 2005 20:46:27 -0400 (EDT) Subject: [Paleopsych] NYT: New Rule on Endangered Species in the Southwest Message-ID: New Rule on Endangered Species in the Southwest http://www.nytimes.com/2005/05/24/national/24species.html By FELICITY BARRINGER WASHINGTON, May 23 - The southwestern regional director of the United States Fish and Wildlife Service has instructed members of his staff to limit their use of the latest scientific studies on the genetics of endangered plants and animals when deciding how best to preserve and recover them. At issue is what happens once a fish, animal, plant or bird is included on the federal endangered species list as being in danger of extinction and needing protection. Dale Hall, the director of the southwestern region, in a memorandum dated Jan. 27, said that all decisions about how to return a species to robust viability must use only the genetic science in place at the time it was put on the endangered species list - in some cases the 1970's or earlier - even if there have been scientific advances in understanding the genetic makeup of a species and its subgroups in the ensuing years. His instructions can spare states in his region the expense of extensive recovery efforts. 
Arizona officials responsible for the recovery of Apache trout, for example, argue that the money - $2 million to $3 million in the past five years - spent on ensuring the survival of each genetic subgroup of the trout was misdirected, since the species as a whole was on its way to recovery. In his memorandum, Mr. Hall built upon a federal court ruling involving Oregon Coast coho salmon. The judge in that case said that because there was no basic genetic distinction between hatchery fish and their wild cousins, both had to be counted when making a determination that the fish was endangered. In the policy discussion attached to his memorandum, Mr. Hall wrote, "genetic differences must be addressed" when a species is declared endangered. Thereafter, he said, "there can be no further subdivision of the entity because of genetics or any other factor" unless the government goes through the time-consuming process of listing the subspecies as a separate endangered species. The regional office, in Albuquerque, covers Arizona, Oklahoma, New Mexico and Texas. Mr. Hall's memorandum prompted dissent within the agency. Six weeks later, his counterpart at the mountain-prairie regional office, in Denver, sent a sharp rebuttal to Mr. Hall. "Knowing if populations are genetically isolated or where gene flow is restricted can assist us in identifying recovery units that will ensure that a species will persist over time," the regional director, Ralph O. Morgenweck, wrote. "It can also ensure that unique adaptations that may be essential for future survival continue to be maintained in the species." Mr. Hall's policy, he wrote, "could run counter to the purpose of the Endangered Species Act" and "may contradict our direction to use the best available science in endangered species decisions in some cases." One retired biologist for the southwestern office, Sally Stefferud, suggested in a telephone interview that the issue went beyond the question of whether to consider modern genetics. 
"That's a major issue, of course," Ms. Stefferud said. "But I think there's more behind it. It's a move to make it easier" to take away a species's endangered status, she said. That would make it easier for officials to approve actions - like construction, logging or commercial fishing - that could reduce a species's number. Mr. Hall was on vacation and not available for comment Monday. Mr. Morgenweck could not be reached late Monday afternoon, but his assistant confirmed he had sent the rebuttal. The memorandums were provided by the Center for Biological Diversity and Public Employees for Environmental Responsibility, two groups that opposed Mr. Hall's policy. They said that species whose recovery could be impeded by the policy included the Gila trout and the Apache trout. Mr. Hall's ruling fits squarely into the theory advanced by the Pacific Legal Foundation, a property-rights group in California, that endangered species be considered as one genetic unit for purposes of being put on the endangered species list and in subsequent management plans. In an e-mail message on Monday, Russ Brooks, the lawyer who worked on the Oregon case for the foundation, wrote, "Having read the memo, I can say that I agree with it." Bruce Taubert, the assistant director for wildlife management at the Arizona Game and Fish Department, said of the new policy, "We support it," adding, in the case of the endangered Apache trout, "Why should we spend an incredible amount of time and money to do something with that species if it doesn't add to the viability and longevity of the species that was listed?" "By not having to worry about small genetic pools, we can do these things faster and better," Mr. Taubert said. But Philip Hedrick, a professor of population genetics at Arizona State University, said that it made no sense to ignore scientific advances in his field. 
"Genetics and evolutionary thinking have to be incorporated if we're going to talk about long-term sustainability of these species," he said. "Maybe in the short term you can have a few animals closely related and inbred out there, but for them to survive in any long-term sense you have to think about this long-term picture that conservation biologists have come up with over the last 25 years." Professor Hedrick added that cutting off new genetic findings that fell short of providing evidence that a separate species had evolved was "completely inappropriate, because as everyone knows, we're able to know a lot more than we did five years ago." He added, "They talk about using the best science, but that's clearly not what they're trying to do here." In a telephone interview from the Albuquerque fish and wildlife office, Larry Bell, a spokesman, said that Mr. Hall's interpretation meant that "the only thing that we have to consider in recovery is: does the species exist?" "We don't have to consider whether various adaptive portions of a species exist," he said. Asked about why an Oregon ruling would have an impact on policies in the southwest, he said: "My belief is that because it's the only court decision that addresses the issue of genetics. While we're not within this region bound by the Oregon decision per se, it would provide guidance." From checker at panix.com Wed May 25 00:46:39 2005 From: checker at panix.com (Premise Checker) Date: Tue, 24 May 2005 20:46:39 -0400 (EDT) Subject: [Paleopsych] NYT: Sci-Fi Synergy Message-ID: Sci-Fi Synergy http://www.nytimes.com/2005/05/24/arts/design/24scif.html By [2]EDWARD ROTHSTEIN SEATTLE, May 22 - "Most museums show you history," boasts the year-old Science Fiction Museum and Hall of Fame in Seattle, but "only one takes you to the future." 
And so it does, if the future includes the life-size model of the Alien Queen from James Cameron's 1986 movie "Aliens"; a first edition of Ray Bradbury's "The Martian Chronicles" (1950); a collection of phaser guns from "Star Trek" (1966-present); the vinyl raincoat worn by Joanna Cassidy in the 1982 film "Blade Runner"; and a half-size replica of the roving explorer "Sojourner," used on the surface of Mars in 1997. Actually, of course, it isn't the future being shown, and it isn't really history either. It's something like a history of the future, or a history of ideas about the future. And as it unfolds here, it is dizzying in its miscellany. It also has some unusual resonances right now because science-fiction franchises like the "Star Wars" films and "Star Trek" series have just been brought to a close. In the museum, the influence of those epics is unmistakable, with sound effects and lighting shaping each exhibit's environment. A "Stardock" window even seems to look out into cinematic space, where ships from "E. T." and "Star Trek" and "Star Wars" (along with antiques like H. G. Wells's moon capsule), glide past one another as observers at touch-screens learn about their origins and powers. Other displays mix genres and media with almost gleeful abandon. A vest worn by Michael York in "Logan's Run" (1976) is not far from a first edition of an Ursula K. Le Guin novel and a copy of Mad magazine. Hauntingly delicate drawings by a little-known Brazilian artist, Alvim Corrêa, illustrating a 1906 Belgian edition of H. G. Wells's "War of the Worlds," are around the corner from models of extraterrestrials assembled in a mock intergalactic saloon similar to the one in "Star Wars." It is as if a molecular manipulator out of "The Fly" had scrambled a century of objects, grafting together disparate media and creatures. But within this phantasmagorical array of memorabilia, film and collectibles, a portrait of the history of the future does begin to take shape. 
The opening exhibit room, wrapped in a band of stars like a planetarium, offers a timeline of science fiction as the exhibits survey its preoccupations, its overlap with real science, its concerns with society, its fans turned practitioners. And the museum itself is really a rough first draft of that history, created by the Microsoft billionaire, Paul G. Allen, 52, largely out of his own collection. He gave it a $20 million, 13,000-square-foot home in the same Frank O. Gehry building as the $240 million museum devoted to one of Mr. Allen's other passions - rock music; in the Experience Music Project, as it is called, Jimi Hendrix's guitar is as readily displayed as Captain Kirk's tunic is here. In fact, scaled back ambitions for the music project, which has been having problems meeting original expectations, created room for the Science Fiction Museum in a space once used for a three-story thrill ride. But the museum doesn't leave science fiction at the level of toys and hobby horses. It is a good place to be reminded that a genre that 80 years ago was on the margins is now, at least in its cinematic incarnations, at the very center of culture. Science fiction pulp magazines once featured what insiders called "BBB's" fleeing "BEM's" - "Brass Bra Babes" fleeing "Bug-Eyed Monsters." Not for long. Writers of the mid-20th century turned science fiction into something more profound; many recent writers have been scientists themselves. Even modern cinema, with its sound effects and computer-generated worlds, was shaped by science fiction: George Lucas's 1977 "Star Wars" film was a declaration of independence and dominance for the genre, setting it on its current course. The long-term television and movie saga of "Star Trek" also created new types of fans, satirized and paid homage to in the film "Galaxy Quest" (1999). The ends of these franchises do not, of course, mark an end to the genre's importance, nor do they portend an era of dissolution. 
And while, aside from the cinematic items, the museum's center of gravity seems to rest in the 1970's, causing it to lean backward rather than forward, the collection also provides a glimpse of science fiction's enduring appeal. It is astonishing, for example, how often boundaries between fantasy and reality are broken down in the exhibits themselves. Objects from "Star Trek" are real ("The phaser," we are told, "was developed early in the 23rd century as a defensive weapon"), while other objects, like a 1951 Dick Tracy radio, are called toys. A "Starfleet communicator badge" from "Star Trek" is labeled a "reproduction" presumably because it was not really a "communicator" used on the show. But in an exhibit of uniforms, a tunic from the 1956 film "Forbidden Planet" shares the same case as a NASA space suit from the Gemini program. Fiction and fact intermingle. This is one point of an exhibit devoted to "War of the Worlds" (which is now being made into two feature films). It offers a recording of Orson Welles's 1938 radio broadcast, which famously caused listeners to believe a Martian invasion was taking place in New Jersey. After a broadcast in Ecuador a few years later, the exhibit tells us, the news that it had only been a radio drama led to riots, the storming of a radio station and multiple deaths. This reaction, however extreme, is a fantasy for fantasists: science fiction's account of the future is not meant to be fantasy. Instead it creates a kind of "thought experiment." As one exhibit points out, it asks "What if ..." What if you only saw the stars every 2,000 years? This is an approach used within science as well, testing understanding and exploring possibilities; scientists often point to science fiction as an early influence. The museum's director, Donna Shirley, has said that reading Bradbury's "Martian Chronicles" at the age of 12 inspired her career. 
At NASA's Jet Propulsion Laboratory, she managed the Mars Exploration Program; here, she helped mount the museum's Mars exhibit, which juggles science fiction and fact. Alternate realities, of course, appeal to adolescents as well as inventors, and in some ways Mr. Allen's museums lean toward the former, enshrining his own adolescent passions. Indeed, right now, the collection determines too readily what is shown: a series of exhibits about dystopias and utopias could have been far more powerful had the objects been selected more carefully, and the ideas more thoroughly explored. But the passion and homage are welcome. For science fiction may really aspire to be more like history than fantasy, not because it aspires to be true, but because it aspires to know what could possibly be true. Often, in fact, it is less preoccupied with the future than with the past; studying "what was?" can help show "what if?" Mr. Lucas's first three "Star Wars" films, for example, actually oppose the onslaught of the future. The villains are the masters of gleaming technology; the heroes are retro and ramshackle, in touch with the lost powers of the past. So histories of the future really deserve a museum, if only to suggest where they might go next. From checker at panix.com Wed May 25 00:47:00 2005 From: checker at panix.com (Premise Checker) Date: Tue, 24 May 2005 20:47:00 -0400 (EDT) Subject: [Paleopsych] NYT: Class Matters - Social Class and Education in the United States of America Message-ID: Class Matters - Social Class and Education in the United States of America http://www.nytimes.com/2005/05/24/national/class/EDUCATION-FINAL.html [Fifth of a series.] By [2]DAVID LEONHARDT CHILHOWIE, Va. - One of the biggest decisions Andy Blevins has ever made, and one of the few he now regrets, never seemed like much of a decision at all. It just felt like the natural thing to do. 
In the summer of 1995, he was moving boxes of soup cans, paper towels and dog food across the floor of a supermarket warehouse, one of the biggest buildings here in southwest Virginia. The heat was brutal. The job had sounded impossible when he arrived fresh off his first year of college, looking to make some summer money, still a skinny teenager with sandy blond hair and a narrow, freckled face. But hard work done well was something he understood, even if he was the first college boy in his family. Soon he was making bonuses on top of his $6.75 an hour, more money than either of his parents made. His girlfriend was around, and so were his hometown buddies. Andy acted more outgoing with them, more relaxed. People in Chilhowie noticed that. It was just about the perfect summer. So the thought crossed his mind: maybe it did not have to end. Maybe he would take a break from college and keep working. He had been getting C's and D's, and college never felt like home, anyway. "I enjoyed working hard, getting the job done, getting a paycheck," Mr. Blevins recalled. "I just knew I didn't want to quit." So he quit college instead, and with that, Andy Blevins joined one of the largest and fastest-growing groups of young adults in America. He became a college dropout, though nongraduate may be the more precise term. Many people like him plan to return to get their degrees, even if few actually do. Almost one in three Americans in their mid-20's now fall into this group, up from one in five in the late 1960's, when the Census Bureau began keeping such data. Most come from poor and working-class families. The phenomenon has been largely overlooked in the glare of positive news about the country's gains in education. Going to college has become the norm throughout most of the United States, even in many places where college was once considered an exotic destination - places like Chilhowie (pronounced chill-HOW-ee), an Appalachian hamlet with a simple brick downtown. 
At elite universities, classrooms are filled with women, blacks, Jews and Latinos, groups largely excluded two generations ago. The American system of higher learning seems to have become a great equalizer. In fact, though, colleges have come to reinforce many of the advantages of birth. On campuses that enroll poorer students, graduation rates are often low. And at institutions where nearly everyone graduates - small colleges like [3]Colgate, major state institutions like the [4]University of Colorado and elite private universities like [5]Stanford - more students today come from the top of the nation's income ladder than they did two decades ago. Only 41 percent of low-income students entering a four-year college managed to graduate within five years, the Department of Education found in a study last year, but 66 percent of high-income students did. That gap had grown over recent years. "We need to recognize that the most serious domestic problem in the United States today is the widening gap between the children of the rich and the children of the poor," [6]Lawrence H. Summers, the president of Harvard, said last year when announcing that Harvard would give full scholarships to all its lowest-income students. "And education is the most powerful weapon we have to address that problem." There is certainly much to celebrate about higher education today. Many more students from all classes are getting four-year degrees and reaping their benefits. But those broad gains mask the fact that poor and working-class students have nevertheless been falling behind; for them, not having a degree remains the norm. That loss of ground is all the more significant because a college education matters much more now than it once did. A bachelor's degree, not a year or two of courses, tends to determine a person's place in today's globalized, computerized economy. 
College graduates have received steady pay increases over the past two decades, while the pay of everyone else has risen little more than the rate of inflation. As a result, despite one of the great education explosions in modern history, economic mobility - moving from one income group to another over the course of a lifetime - has stopped rising, researchers say. Some recent studies suggest that it has declined over the last generation. [[7]Click here for more information on income mobility.] Put another way, children seem to be following the paths of their parents more than they once did. Grades and test scores, rather than privilege, determine success today, but that success is largely being passed down from one generation to the next. A nation that believes that everyone should have a fair shake finds itself with a kind of inherited meritocracy. In this system, the students at the best colleges may be diverse - male and female and of various colors, religions and hometowns - but they tend to share an upper-middle-class upbringing. An old joke that Harvard's idea of diversity is putting a rich kid from California in the same room as a rich kid from New York is truer today than ever; Harvard has more students from California than it did in years past and just as big a share of upper-income students. Students like these remain in college because they can hardly imagine doing otherwise. Their parents, understanding the importance of a bachelor's degree, spent hours reading to them, researching school districts and making it clear to them that they simply must graduate from college. Andy Blevins says that he too knows the importance of a degree, but that he did not while growing up, and not even in his year at [8]Radford University, 66 miles up the Interstate from Chilhowie. Ten years after trading college for the warehouse, Mr. Blevins, 29, spends his days at the same supermarket company. 
He has worked his way up to produce buyer, earning $35,000 a year with health benefits and a 401(k) plan. He is on a path typical for someone who attended college without getting a four-year degree. Men in their early 40's in this category made an average of $42,000 in 2000. Those with a four-year degree made $65,000. Still boyish-looking but no longer rail thin, Mr. Blevins says he has many reasons to be happy. He lives with his wife, Karla, and their year-old son, Lucas, in a small blue-and-yellow house at the end of a cul-de-sac in the middle of a stunningly picturesque Appalachian valley. He plays golf with some of the same friends who made him want to stay around Chilhowie. But he does think about what might have been, about what he could be doing if he had the degree. As it is, he always feels as if he is on thin ice. Were he to lose his job, he says, everything could slip away with it. What kind of job could a guy without a college degree get? One night, while talking to his wife about his life, he used the word "trapped." "Looking back, I wish I had gotten that degree," Mr. Blevins said in his soft-spoken lilt. "Four years seemed like a thousand years then. But I wish I would have just put in my four years."

The Barriers

Why so many low-income students fall from the college ranks is a question without a simple answer. Many high schools do a poor job of preparing teenagers for college. Many of the colleges where lower-income students tend to enroll have limited resources and offer a narrow range of majors, leaving some students disenchanted and unwilling to continue. Then there is the cost. Tuition bills scare some students from even applying and leave others with years of debt. To Mr. Blevins, like many other students of limited means, every week of going to classes seemed like another week of losing money - money that might have been made at a job. "The system makes a false promise to students," said [9]John T. 
Casteen III, the president of the [10]University of Virginia, himself the son of a Virginia shipyard worker. Colleges, Mr. Casteen said, present themselves as meritocracies in which academic ability and hard work are always rewarded. In fact, he said, many working-class students face obstacles they cannot overcome on their own. For much of his 15 years as Virginia's president, Mr. Casteen has focused on raising money and expanding the university, the most prestigious in the state. In the meantime, students with backgrounds like his have become ever scarcer on campus. The university's genteel nickname, the Cavaliers, and its aristocratic sword-crossed coat of arms seem appropriate today. No flagship state university has a smaller proportion of low-income students than Virginia. Just 8 percent of undergraduates last year came from families in the bottom half of the income distribution, down from 11 percent a decade ago. That change sneaked up on him, Mr. Casteen said, and he has spent a good part of the last year trying to prevent it from becoming part of his legacy. Starting with next fall's freshman class, the university will charge no tuition and require no loans for students whose parents make less than twice the poverty level, or about $37,700 a year for a family of four. The university has also increased financial aid to middle-income students. To Mr. Casteen, these are steps to remove what he describes as "artificial barriers" to a college education placed in the way of otherwise deserving students. Doing so "is a fundamental obligation of a free culture," he said. But the deterrents to a degree can also be homegrown. Many low-income teenagers know few people who have made it through college. A majority of the nongraduates are young men, and some come from towns where the factory work ethic, to get working as soon as possible, remains strong, even if the factories themselves are vanishing. Whatever the reasons, college just does not feel normal. 
"You get there and you start to struggle," said Leanna Blevins, Andy's older sister, who did get a bachelor's degree and then went on to earn a Ph.D. at Virginia studying the college experiences of poor students. "And at home your parents are trying to be supportive and say, 'Well, if you're not happy, if it's not right for you, come back home. It's O.K.' And they think they're doing the right thing. But they don't know that maybe what the student needs is to hear them say, 'Stick it out just one semester. You can do it. Just stay there. Come home on the weekend, but stick it out.' " Today, Ms. Blevins, petite and high-energy, is helping to start a new college a few hours' drive from Chilhowie for low-income students. Her brother said he had daydreamed about attending it and had talked to her about how he might return to college. For her part, Ms. Blevins says, she has daydreamed about having a life that would seem as natural as her brother's, a life in which she would not feel like an outsider in her hometown. Once, when a high-school teacher asked students to list their goals for the next decade, Ms. Blevins wrote, "having a college degree" and "not being married." "I think my family probably thinks I'm liberal," Ms. Blevins, who is now married, said with a laugh, "that I've just been educated too much and I'm gettin' above my raisin'." Her brother said that he just wanted more control over his life, not a new one. At a time when many people complain of scattered lives, Mr. Blevins can stand in one spot - his church parking lot, next to a graveyard - and take in much of his world. "That's my parents' house," he said one day, pointing to a sliver of roof visible over a hill. "That's my uncle's trailer. My grandfather is buried here. I'll probably be buried here."

Taking Class Into Account

Opening up colleges to new kinds of students has generally meant one thing over the last generation: affirmative action. 
Intended to right the wrongs of years of exclusion, the programs have swelled the number of women, blacks and Latinos on campuses. But affirmative action was never supposed to address broad economic inequities, just the ones that stem from specific kinds of discrimination. That is now beginning to change. Like Virginia, a handful of other colleges are not only increasing financial aid but also promising to give weight to economic class in granting admissions. They say they want to make an effort to admit more low-income students, just as they now do for minorities and children of alumni. "The great colleges and universities were designed to provide for mobility, to seek out talent," said [11]Anthony W. Marx, president of [12]Amherst College. "If we are blind to the educational disadvantages associated with need, we will simply replicate these disadvantages while appearing to make decisions based on merit." With several populous states having already banned race-based preferences and the United States Supreme Court suggesting that it may outlaw such programs in a couple of decades, the future of affirmative action may well revolve around economics. Polls consistently show that programs based on class backgrounds have wider support than those based on race. The explosion in the number of nongraduates has also begun to get the attention of policy makers. This year, New York became one of a small group of states to tie college financing more closely to graduation rates, rewarding colleges more for moving students along than for simply admitting them. Nowhere is the stratification of education more vivid than here in Virginia, where Thomas Jefferson once tried, and failed, to set up the nation's first public high schools. At a modest high school in the Tidewater city of Portsmouth, not far from Mr. Casteen's boyhood home, a guidance office wall filled with college pennants does not include one from rarefied Virginia. 
The colleges whose pennants are up - [13]Old Dominion University and others that seem in the realm of the possible - have far lower graduation rates. Across the country, the upper middle class so dominates elite universities that high-income students, on average, actually get slightly more financial aid from colleges than low-income students do. These elite colleges are so expensive that even many high-income students receive large grants. In the early 1990's, by contrast, poorer students got 50 percent more aid on average than the wealthier ones, according to the [14]College Board, the organization that runs the SAT entrance exams. At the other end of the spectrum are community colleges, the two-year institutions that are intended to be feeders for four-year colleges. In nearly every one are tales of academic success against tremendous odds: a battered wife or a combat veteran or a laid-off worker on the way to a better life. But over all, community colleges tend to be places where dreams are put on hold. Most people who enroll say they plan to get a four-year degree eventually; few actually do. Full-time jobs, commutes and children or parents who need care often get in the way. One recent national survey found that about 75 percent of students enrolling in community colleges said they hoped to transfer to a four-year institution. But only 17 percent of those who had entered in the mid-1990's made the switch within five years, according to a separate study. The rest were out working or still studying toward the two-year degree. "We here in Virginia do a good job of getting them in," said Glenn Dubois, chancellor of the [15]Virginia Community College System and himself a community college graduate. "We have to get better in getting them out."

'I Wear a Tie Every Day'

College degree or not, Mr. Blevins has the kind of life that many Americans say they aspire to. He fills it with family, friends, church and a five-handicap golf game. 
He does not sit in traffic commuting to an office park. He does not talk wistfully of a relocated brother or best friend he sees only twice a year. He does not worry about who will care for his son while he works and his wife attends community college to become a physical therapist. His grandparents down the street watch Lucas, just as they took care of Andy and his two sisters when they were children. When Mr. Blevins comes home from work, it is his turn to play with Lucas, tossing him into the air and rolling around on the floor with him and a stuffed elephant. Mr. Blevins also sings in a quartet called the Gospel Gentlemen. One member is his brother-in-law; another lives on Mr. Blevins's street. In the long white van the group owns, they wend their way along mountain roads on their way to singing dates at local church functions, sometimes harmonizing, sometimes ribbing one another or talking about where to buy golf equipment. Inside the churches, the other singers often talk to the audience between songs, about God or a grandmother or what a song means to them. Mr. Blevins rarely does, but his shyness fades once he is back in the van with his friends. At the warehouse, he is usually the first to arrive, around 6:30 in the morning. The grandson of a coal miner, he takes pride, he says, in having moved up to become a supermarket buyer. He decides which bananas, grapes, onions and potatoes the company will sell and makes sure that there are always enough. Most people with his job have graduated from college. "I'm pretty fortunate to not have a degree but have a job where I wear a tie every day," he said. He worries about how long it will last, though, mindful of what happened to his father, Dwight, a decade ago. A high school graduate, Dwight Blevins was laid off from his own warehouse job and ended up with another one that paid less and offered a smaller pension. 
"A lot of places, they're not looking that you're trained in something," Andy Blevins said one evening, sitting on his back porch. "They just want you to have a degree." Figuring out how to get one is the core quandary facing the nation's college nongraduates. Many seem to want one. In a [16]New York Times poll, 43 percent of them called it essential to success, while 42 percent of college graduates and 32 percent of high-school dropouts did. This in itself is a change from the days when "college boy" was an insult in many working-class neighborhoods. But once students take a break - the phrase that many use instead of drop out - the ideal can quickly give way to reality. Family and work can make a return to school seem even harder than finishing it in the first place. After dropping out of Radford, Andy Blevins enrolled part-time in a community college, trying to juggle work and studies. He lasted a year. From time to time in the decade since, he has thought about giving it another try. But then he has wondered if that would be crazy. He works every third Saturday, and his phone rings on Sundays when there is a problem with the supply of potatoes or apples. "It never ends," he said. "There's never a lull." To spend more time with Lucas, Mr. Blevins has already cut back on his singing. If he took night classes, he said, when would he ever see his little boy? Anyway, he said, it would take years to get a degree part-time. To him, it is a tug of war between living in the present and sacrificing for the future.

Few Breaks for the Needy

The college admissions system often seems ruthlessly meritocratic. Yes, children of alumni still have an advantage. But many other pillars of the old system - the polite rejections of women or blacks, the spots reserved for graduates of Choate and Exeter - have crumbled. This was the meritocracy Mr. Casteen described when he greeted the parents of freshmen in a University of Virginia lecture hall late last summer. 
Hailing from all 50 states and 52 foreign countries, the students were more intelligent and better prepared than he and his classmates had been, he told the parents in his quiet, deep voice. The class included 17 students with a perfect SAT score. If anything, children of privilege think that the system has moved so far from its old-boy history that they are now at a disadvantage when they apply, because colleges are trying to diversify their student rolls. To get into a good college, the sons and daughters of the upper middle class often talk of needing a higher SAT score than, say, an applicant who grew up on a farm, in a ghetto or in a factory town. Some state legislators from Northern Virginia's affluent suburbs have argued that this is a form of geographic discrimination and have quixotically proposed bills to outlaw it. But the conventional wisdom is not quite right. The elite colleges have not been giving much of a break to the low-income students who apply. When [17]William G. Bowen, a former president of Princeton, looked at admissions records recently, he found that if test scores were equal a low-income student had no better chance than a high-income one of getting into a group of 19 colleges, including [18]Harvard, [19]Yale, [20]Princeton, [21]Williams and [22]Virginia. Athletes, legacy applicants and minority students all got in with lower scores on average. Poorer students did not. The findings befuddled many administrators, who insist that admissions officers have tried to give poorer applicants a leg up. To emphasize the point, Virginia announced this spring that it was changing its admissions policy from "need blind" - a term long used to assure applicants that they would not be punished for seeking financial aid - to "need conscious." Administrators at Amherst and Harvard have also recently said that they would redouble their efforts to take into account the obstacles students have overcome. 
"The same score reflects more ability when you come from a less fortunate background," Mr. Summers, the president of Harvard, said. "You haven't had a chance to take the test-prep course. You went to a school that didn't do as good a job coaching you for the test. You came from a home without the same opportunities for learning." But it is probably not a coincidence that elite colleges have not yet turned this sentiment into action. Admitting large numbers of low-income students could bring clear complications. Too many in a freshman class would probably lower the college's average SAT score, thereby damaging its [23]ranking by U.S. News & World Report, a leading arbiter of academic prestige. Some colleges, like [24]Emory University in Atlanta, have climbed fast in the rankings over precisely the same period in which their percentage of low-income students has tumbled. The math is simple: when a college goes looking for applicants with high SAT scores, it is far more likely to find them among well-off teenagers. More spots for low-income applicants might also mean fewer for the children of alumni, who make up the fund-raising base for universities. More generous financial aid policies will probably lead to higher tuition for those students who can afford the list price. Higher tuition, lower ranking, tougher admission requirements: they do not make for an easy marketing pitch to alumni clubs around the country. But Mr. Casteen and his colleagues are going ahead, saying the pendulum has swung too far in one direction. That was the mission of John Blackburn, Virginia's easy-going admissions dean, when he rented a car and took to the road recently. Mr. Blackburn thought of the trip as a reprise of the drives Mr. Casteen took 25 years earlier, when he was the admissions dean, traveling to churches and community centers to persuade black parents that the university was finally interested in their children. One Monday night, Mr. 
Blackburn came to Big Stone Gap, in a mostly poor corner of the state not far from Andy Blevins's town. A community college there was holding a college fair, and Mr. Blackburn set up a table in a hallway, draping it with the University of Virginia's blue and orange flag. As students came by, Mr. Blackburn would explain Virginia's new admissions and financial aid policies. But he soon realized that the Virginia name might have been scaring off the very people his pitch was intended for. Most of the students who did approach the table showed little interest in the financial aid and expressed little need for it. One man walked up to Mr. Blackburn and introduced his son as an aspiring doctor. The father was an ophthalmologist. Other doctors came by, too. So did some lawyers. "You can't just raise the UVa flag," Mr. Blackburn said, packing up his materials at the end of the night, "and expect a lot of low-income kids to come out." When the applications started arriving in his office this spring, there seemed to be no increase in those from low-income students. So Mr. Blackburn extended the deadline two weeks for everybody, and his colleagues also helped some applicants with the maze of financial aid forms. Of 3,100 incoming freshmen, it now seems that about 180 will qualify for the new financial aid program, up from 130 who would have done so last year. It is not a huge number, but Virginia administrators call it a start.

A Big Decision

On a still-dark February morning, with the winter's heaviest snowfall on the ground, Andy Blevins scraped off his Jeep and began his daily drive to the supermarket warehouse. As he passed the home of Mike Nash, his neighbor and fellow gospel singer, he noticed that the car was still in the driveway. For Mr. Nash, a school counselor and the only college graduate in the singing group, this was a snow day. Mr. Blevins later sat down with his calendar and counted to 280: the number of days he had worked last year. 
Two hundred and eighty days - six days a week most of the time - without ever really knowing what the future would hold. "I just realized I'm going to have to do something about this," he said, "because it's never going to end." In the weeks afterward, his daydreaming about college and his conversations about it with his sister Leanna turned into serious research. He requested his transcripts from Radford and from [25]Virginia Highlands Community College and figured out that he had about a year's worth of credits. He also talked to Leanna about how he could become an elementary school teacher. He always felt that he could relate to children, he said. The job would take up 180 days, not 280. Teachers do not usually get laid off or lose their pensions or have to take a big pay cut to find new work. So the decision was made. On May 31, Andy Blevins says, he will return to Virginia Highlands, taking classes at night; the Gospel Gentlemen are no longer booking performances. After a year, he plans to take classes by video and on the Web that are offered at the community college but run by [26]Old Dominion, a Norfolk, Va., university with a big group of working-class students. "I don't like classes, but I've gotten so motivated to go back to school," Mr. Blevins said. "I don't want to, but, then again, I do." He thinks he can get his bachelor's degree in three years. If he gets it at all, he will have defied the odds.

References

2. http://query.nytimes.com/search/query?ppds=bylL&v1=DAVID%20LEONHARDT&fdq=19960101&td=sysdate&sort=newest&ac=DAVID%20LEONHARDT&inline=nyt-per
3. http://www.colgate.edu/
4. http://www.colorado.edu/
5. http://www.stanford.edu/
6. http://www.president.harvard.edu/
7. http://www.nytimes.com/packages/html/national/20050515_CLASS_GRAPHIC/index_03.html
8. http://www.radford.edu/
9. http://www.virginia.edu/president/biography.html
10. http://www.virginia.edu/
11. http://www.amherst.edu/~president/bio.html
12. http://www.amherst.edu/
13. http://web.odu.edu/
14. http://www.collegeboard.com/splash
15. http://www.so.cc.va.us/
16. http://www.nytimes.com/packages/html/national/20050515_CLASS_GRAPHIC/index_04.html
17. http://www.mellon.org/Staff/Bowen/Content.htm
18. http://www.harvard.edu/
19. http://www.yale.edu/
20. http://www.princeton.edu/
21. http://www.williams.edu/
22. http://www.virginia.edu/
23. http://www.usnews.com/usnews/edu/college/rankings/rankindex_brief.php
24. http://www.emory.edu/
25. http://www.vhcc.edu/
26. http://web.odu.edu/

From checker at panix.com Wed May 25 00:47:17 2005 From: checker at panix.com (Premise Checker) Date: Tue, 24 May 2005 20:47:17 -0400 (EDT) Subject: [Paleopsych] NYT: No Degree, and No Way Back to the Middle Message-ID: No Degree, and No Way Back to the Middle Class Matters - Social Class and Education in the United States of America http://www.nytimes.com/2005/05/24/national/class/BLUECOLLAR-FINAL.html [Supplementary article to the fifth in a series.] By TIMOTHY EGAN SPOKANE, Wash. - Over the course of his adult life, Jeff Martinelli has married three women and buried one of them, a cancer victim. He had a son and has watched him raise a child of his own. Through it all, one thing was constant: a factory job that was his ticket to the middle class. It was not until that job disappeared, and he tried to find something - anything - to keep him close to the security of his former life that Mr. Martinelli came to an abrupt realization about the fate of a working man with no college degree in 21st-century America. He has skills developed operating heavy machinery, laboring over a stew of molten bauxite at Kaiser Aluminum, once one of the best jobs in this city of 200,000. His health is fine. He has no shortage of ambition. But the world has changed for people like Mr. Martinelli. "For a guy like me, with no college, it's become pretty bleak out there," said Mr. Martinelli, who is 50 and deals with life's curves with a resigned shrug.
His son, Caleb, already knows what it is like out there. Since high school, Caleb has had six jobs, none very promising. Now 28, he may never reach the middle class, he said. But for his father and others of a generation that could count on a comfortable life without a degree, the fall out of the middle class has come as a shock. They had been frozen in another age, a time when Kaiser factory workers could buy new cars, take decent vacations and enjoy full health care benefits. They have seen factory gates close and not reopen. They have taken retraining classes for jobs that pay half their old wages. And as they hustle around for work, they have been constantly reminded of the one thing that stands out on their résumés: the education that ended with a high school diploma. It is not just that the American economy has shed six million manufacturing jobs over the last three decades; it is that the market value of those put out of work, people like Jeff Martinelli, has declined considerably over their lifetimes, opening a gap that has left millions of blue-collar workers at the margins of the middle class. And the changes go beyond the factory floor. Mark McClellan worked his way up from the Kaiser furnaces to management. He did it by taking extra shifts and learning everything he could about the aluminum business. Still, in 2001, when Kaiser closed, Mr. McClellan discovered that the job market did not value his factory skills nearly as much as it did four years of college. He had the experience, built over a lifetime, but no degree. And for that, he said, he was marked. He still lives in a grand house in one of the nicest parts of town, and he drives a big white Jeep. But they are a facade. "I may look middle class," said Mr. McClellan, who is 45, with a square, honest face and a barrel chest. "But I'm not. My boat is sinking fast."
By the time these two Kaiser men were forced out of work, a man in his 50's with a college degree could expect to earn 81 percent more than a man of the same age with just a high school diploma. When they had started work, the gap was only 52 percent. Other studies show different numbers, but the same trend - a big disparity that opened over their lifetimes. Mr. Martinelli refuses to feel sorry for himself. He has a job in pest control now, killing ants and spiders at people's homes, making barely half the money he made at the Kaiser smelter, where a worker with his experience would make about $60,000 a year in wages and benefits. "At least I have a job," he said. "Some of the guys I worked with have still not found anything. A couple of guys lost their houses." Mr. Martinelli and other former factory workers say that, over time, they have come to fear that the fall out of the middle class could be permanent. Their new lives - the frustrating job interviews, the bills that arrive with red warning letters on the outside - are consequences of a decision made at age 18. The management veteran, Mr. McClellan, was a doctor's son, just out of high school, when he decided he did not need to go much farther than the big factory at the edge of town. He thought about going to college. But when he got on at Kaiser, he felt he had arrived. His father, a general practitioner now dead, gave him his blessing, even encouraged him in the choice, Mr. McClellan said. At the time, the decision to skip college was not that unusual, even for a child of the middle class. Despite Mr. McClellan's lack of skills or education beyond the 12th grade, there was good reason to believe that the aluminum factory could get him into middle-class security quicker than a bachelor's degree could, he said. By 22, he was a group foreman. By 28, a supervisor. By 32, he was in management. Before his 40th birthday, Mr. McClellan hit his earnings peak, making $100,000 with bonuses. 
Friends of his, people with college degrees, were not earning close to that, Mr. McClellan said. "I had a house with a swimming pool, new cars," he said. "My wife never had to work. I was right in the middle of middle-class America and I knew it and I loved it." If anything, the union man, Mr. Martinelli, appreciated the middle-class life even more, because of the distance he had traveled to get there. He remembers his stomach growling at night as a child, the humiliation of welfare, hauling groceries home through the snow on a little cart because the family had no car. "I was ashamed," he said. He was a C student without much of a future, just out of high school, when he got his break: the job on the Kaiser factory floor. Inside, it was long shifts around hot furnaces. Outside, he was a prince of Spokane. College students worked inside the factory in the summer, and some never went back to school. "You knew people leaving here for college would sometimes get better jobs, but you had a good job, so it was fine," said Mike Lacy, a close friend of Mr. Martinelli and a co-worker at Kaiser. The job lasted just short of 30 years. Kaiser, debt-ridden after a series of failed management initiatives and a long strike, closed the plant in 2001 and sold the factory carcass for salvage. Mr. McClellan has yet to find work, living off his dwindling savings and investments from his years at Kaiser, though he continues with plans to open his own car wash. He pays $900 a month for a basic health insurance policy - vital to keep his wife, Vicky, who has a rare brain disease, alive. He pays an additional $500 a month for her medications. He is both husband and nurse. "Am I scared just a little bit?" he said. "Yeah, I am." He has vowed that his son David will never do the kind of second-guessing that he is. Even at 16, David knows what he wants to do: go to college and study medicine. 
He said his father, whom he has seen struggle to balance the tasks of home nurse with trying to pay the bills, had grown heroic in his eyes. He said he would not make the same choice his father did 27 years earlier. "There's nothing like the Kaiser plant around here anymore," he said. Mr. McClellan agrees. He is firm in one conclusion, having risen from the factory floor only to be knocked down: "There is no working up anymore." From checker at panix.com Wed May 25 00:47:32 2005 From: checker at panix.com (Premise Checker) Date: Tue, 24 May 2005 20:47:32 -0400 (EDT) Subject: [Paleopsych] Melissa Greene and Niobe Way: Self-Esteem Trajectories among Ethnic Minority Adolescents Message-ID: Self-Esteem Trajectories among Ethnic Minority Adolescents: A Growth Curve Analysis of the Patterns and Predictors of Change Melissa L. Greene and Niobe Way New York University Journal of Research on Adolescence Volume 15 Issue 2 Page 151 - June 2005 doi:10.1111/j.1532-7795.2005.00090.x First, the summary (from CHE, 5.5.24): A glance at the June issue of the Journal of Research on Adolescence: Race, gender, and self-esteem While the self-esteem of white children, as a group, remains constant during their adolescence, it increases among black, Latino, and Asian-American youths from low-income families, and that difference is important to note because self-esteem has been shown to influence mental health, academic achievement, and social relationships, write Melissa L. Greene, a psychology instructor at Cornell University, and Niobe Way, an associate professor of applied psychology at New York University. That was one of two major findings from their study of black, Latino, and Asian-American students from lower-income and working-class families at a public high school in New York City. The other was that "boys and girls experienced similar average trajectories" of self-esteem.
The latter finding contradicts previous research, which has generally focused on white, middle-class children and has found different patterns of change in self-esteem between boys and girls. Their study, Ms. Greene and Ms. Way say, also supports earlier findings that "patterns of gender difference are not identical across ethnic groups, and challenges the well-established assumption that boys experience higher self-esteem than girls through adolescence." Their study, the authors say, suggests that cultural and contextual factors influence the development of self-esteem. They say that in future studies of adolescent psychological processes, researchers should take account of the varied backgrounds of subjects. In addition, they suggest, research should not settle for group averages, but should focus instead on individual variations. In their study, Ms. Greene and Ms. Way also explored the effects of adolescents' perceptions of how family, peers, and their school environment affected increases in self-esteem. They say: "Each perceived context was significantly associated with self-esteem trajectories when examined independently, but family experiences emerged as most strongly related to changes in self-esteem." --------------------- The current study presents a growth curve analysis of self-esteem among Black, Latino, and Asian American high school students. A series of hierarchical linear models were used to examine patterns and predictors of change in self-esteem over time. Results revealed an average increase in self-esteem with age. Although boys and girls experienced similar trajectories of self-esteem, ethnicity was a significant moderator of developmental change. Black adolescents reported higher self-esteem, while Asian American adolescents reported lower self-esteem, compared with their Latino peers. Latino adolescents experienced a sharper increase in self-esteem over time compared with Black adolescents. 
The unique and conjoint effects of adolescents' experiences with peers, family, and school were examined in relation to self-esteem trajectories. Results revealed that each perceived context was significantly associated with self-esteem trajectories when examined independently, but family experiences emerged as most strongly related to changes in self-esteem. Results underscore the need to examine change at the individual level, as well as the importance of studying the unique and conjoint effects of individual and contextual-level variables on developmental processes among ethnic minority adolescents. The importance of self-esteem for the well-being of adolescents is underscored by decades of theory and research supporting its link with mental health, academic achievement, and social relationships (e.g., Hirsch & DuBois, 1991; Rosenberg, 1965; Zimmerman, Copeland Laurel, Shope, & Dielman, 1997 ). Furthermore, the numerous biological, physical, and cognitive changes that occur during adolescence highlight the importance of examining how self-esteem may change as well during this critical period of development. Although research suggests that self-esteem increases gradually during middle and late adolescence (e.g., McCarthy & Hoge, 1982), most studies have examined change at the group level, to the exclusion of the individual level (e.g., Marsh, 1989; McCarthy & Hoge, 1982 ). Thus, we have limited understanding of the individual variations in self-esteem trajectories during adolescence. Furthermore, existing research has focused primarily on self-esteem among White, middle-class adolescents (e.g., Baldwin & Hoffman, 2002 ), excluding the experiences of ethnically and socioeconomically diverse youth. 
In addition, while research has identified correlates of self-esteem, including family relationships (e.g., Coates, 1985; Luster & McAdoo, 1995; Openshaw, Thomas, & Rollins, 1984; Walker & Greene, 1986), peer relationships and/or friendships (e.g., Coates, 1985; Buhrmester & Yin, 1997), and school experiences (e.g., Hoge, Smit, & Hanson, 1990; Kuperminc, Leadbeater, Emmons, & Blatt, 1997), the combined influence of these factors, and the dynamic associations between changes in these experiences and changes in self-esteem, have not been examined. Thus, responding to the gaps in the literature, the current study had three primary goals: (1) to describe the trajectories of change over time in self-esteem at the individual level among Black, Latino, and Asian American adolescents from low-income families; (2) to examine gender and ethnic differences in trajectories of self-esteem; and (3) to explore the unique and combined effects of adolescents' perceptions of family, peers, and school on change in self-esteem. This study is informed by ecological theory, which underscores the influence and interaction of multiple developmental contexts (Bronfenbrenner, 1979), as well as the importance of examining adolescents' subjective experiences of these contexts (Bronfenbrenner, 1979; Demo, Small, & Savin-Williams, 1987). Self-Esteem: Definitions and Significance Self-esteem is typically understood to be the evaluative aspect of the self-concept. A common definition is that of Rosenberg, who defined high self-esteem as "the feeling of being satisfied with oneself, believing that one is a person of worth" (1985, p. 210). Similarly, Harter defined self-esteem or self-worth as "the overall value that one places on the self as a person [...] " (1990, p. 67). These definitions reflect the notion of "global" or "general" self-esteem or self-worth.
Global self-esteem has been linked with a variety of psychological and behavioral outcomes for children and adolescents, including anxiety (Bachman, 1970; Rosenberg, 1965); depression (Bachman, 1970; Overholser, Adams, Lehnert, & Brinkman, 1995; Rosenberg, 1965; Harter, Marold, & Whitesell, 1992); and suicidal ideation and behavior (Overholser, Adams, Lehnert, & Brinkman 1995), and is generally believed to be a good indicator of overall well-being (Harter, 1999; Rosenberg, 1986). Trajectories of Change in Global Self-Esteem Theoretical accounts of adolescent development (e.g., Erikson, 1968; Harter, 1999; Rosenberg, 1985 ) predict that self-esteem may decline during the transition from childhood to early adolescence, as individuals struggle with their self-concept and identity in the face of the numerous physical, social, and cognitive changes that characterize this period (Feldman & Elliott, 1990 ). However, as adolescents consolidate their self-concept and begin to forge a sense of identity during middle and late adolescence, self-esteem should stabilize and/or increase. Short-term longitudinal research has supported a gradual increase in self-esteem during middle and late adolescence ( McCarthy & Hoge, 1982; Mullis, Mullis, & Normandin, 1992; O'Malley & Bachman, 1983; Wallace, Cunningham, & Del Monte, 1984). Traditional research on adolescent self-esteem has focused almost exclusively on changes at the group, or average level, potentially ignoring individual variation in trajectories (Block & Robins, 1993; Hirsch & DuBois, 1991 ). Responding to this limitation, more recent research has used idiographic methods such as cluster analysis to examine trajectories of self-esteem change over time. 
These studies have supported the presence of distinct trajectories of self-esteem during adolescence; while some adolescents experience increases over time, others experience declines in self-esteem, or no change over time (Deihl, Vicary, & Deike, 1997; Hirsch & DuBois, 1991; Pahl, Greene, & Way, 2000; Zimmerman et al., 1997). While cluster analysis provides an idea of the patterns of change in a population, change at the individual level can be examined only through the use of growth curve analysis (see Rogosa & Willett, 1985; Willett, Singer & Martin, 1998 ). Although recent research using these methods has explored change in domain specific areas of self-esteem or self-competence (e.g., Cole, Maxwell, Martin, Peeke, Seroczynski, Tram, Hoffman, Ruiz, Jacquez, & Maschman, 2001; Jacobs, Lanza, Osgood, Eccles, & Wigfield, 2002), to date, only a handful of studies have used growth curve analysis to examine changes in global self-esteem during adolescence. Scheier, Botvin, Griffin, and Diaz (2000) used growth curve analysis to examine change in self-esteem over a 4-year interval in a sample of White, middle-class middle-school students. Results revealed an average decrease in self-esteem with age, and significant variability in trajectories. Recently, Baldwin and Hoffman (2002) explored individual change in self-esteem in a sample of White, middle-class adolescents ranging in age from 11 to 22 years old. Results indicated a curvilinear effect of age on self-esteem. In addition, similar to the results of Scheier et al. (2000) , there was significant inter-individual variability in self-esteem trajectories. While these studies present a starting point for examining change at the individual level, they have focused exclusively on White, middle-class adolescents, leaving open the question of change in self-esteem among ethnically and socioeconomically diverse youth. 
Individual-Level Moderators of Self-Esteem Trajectories The importance of studying self-esteem trajectories among ethnic minority adolescents is underscored by the growing percentage of ethnic minority youth in the adolescent population (see McCloyd, 1998 ), as well as existing research indicating ethnic group differences in self-esteem. More specifically, Black adolescents often report higher self-esteem than their Latino and/or Asian American peers ( AAUW, 1991; Carlson, Uppal, & Prosser, 2000; Dukes & Martinez, 1994; Martinez & Dukes, 1997; Pahl et al., 2000; Phinney, Cantu, & Kurtz, 1997; Rotheram-Borus, Dopkins, Sabate, & Lightfoot, 1996). High self-esteem among Black adolescents may in part result from strong ties to family and community that provide a sense of "group autonomy" (Chapman & Mullis, 2000 ), which may insulate these adolescents against the effects of racism found in the larger society, allowing for more positive self-esteem. Conversely, Asian American adolescents are often found to report the lowest self-esteem of these three groups (Crocker, Luhtanen, Blaine, & Broadnax, 1994; Dukes & Martinez, 1994; Martinez & Dukes, 1997; Way & Chen, 2000). Research indicates that Asian American adolescents struggle more than other ethnic minority youth with peer discrimination (Fisher, Wallace, & Fenton, 2000; Rosenbloom & Way, 2004), as well as poor perceptions of physical appearance and attractiveness (Root, 1995). These experiences may contribute to the poor self-perceptions found among Asian American adolescents. Although existing research has not examined longitudinally how trajectories of self-esteem differ among ethnic minority adolescents, cross-sectional research (AAUW, 1991), as well as research comparing trajectories among Black and White adolescents (Brown, McMahon, Biro, Crawford, Schreiber, Similo, Waclawiw, & Striegel-Moore, 1998 ), suggests that age-related differences in self-esteem may differ according to ethnic group. 
If Black adolescents report higher self-esteem than their peers, they may be less likely than their peers to indicate improvement in self-esteem with age. In other words, a "ceiling effect" may be evident among those who report high self-esteem. In addition, because of their relatively lower "starting point," Asian American adolescents may demonstrate the greatest age-related improvements in their self-esteem. Similarly, gender may be an important variable that moderates self-esteem trajectories. Gender differences have been found consistently among White, middle-class samples, with boys typically reporting higher self-esteem than girls throughout early, middle, and late adolescence (Block & Robins, 1993; Kling, Hyde, Showers, & Buswell, 1999; Wilgenbusch & Merrell, 1999 ). Furthermore, studies that have examined change in self-esteem have found that these gender differences become more dramatic with age, as girls often experience a decline in self-esteem throughout adolescence, while their male peers experience improved self-esteem with age (Block & Robins, 1993; Harter, 1993; Zimmerman et al., 1997). Among ethnic minority adolescents, gender differences appear less clear-cut. For example, in a sample of Black, Latino, and White adolescents, Martinez and Dukes (1991) found that boys reported higher self-esteem than girls among Latino and White adolescents; however, among Black adolescents, the opposite was true, with girls reporting higher self-esteem than their male peers. Similarly, the declines in self-esteem found among White girls are not found consistently among girls from ethnic minority backgrounds. For example, while Hispanic and White girls experienced significant declines in self-esteem from elementary school through high school, Black girls continue to report high levels of self-esteem throughout adolescence (AAUW, 1991 ). 
Taken together, these studies suggest that the gender differences and developmental patterns found in White middle-class samples may not apply to adolescents of diverse backgrounds. The current study will examine the effects of gender and ethnicity on trajectories of self-esteem among Black, Latino, and Asian American adolescents. Perceived Contexts and Self-Esteem Symbolic interactionist theory (Cooley, 1902; Coopersmith, 1967; Mead, 1934; Rosenberg, 1979 ) asserts that the experience of the self is a function of perceptions of relationships with significant others. For example, the notion of the "looking glass self" (Cooley, 1902, p. 184 ) highlights the importance of perceptions and evaluations of significant others for an individual's self-concept and self-esteem. Family relationships, friendships, and school are the three primary relational contexts during adolescence, underscoring the importance of examining their contribution to self-esteem trajectories. Research has found positive associations between self-esteem and perceptions of family (Luster & McAdoo, 1995; Way & Robinson, 2003; Zimmerman & Maton, 1992), peer (Cauce, 1986; Coates, 1985) and school contexts (Fenzel, Magaletta, & Peyrot, 1997; Luster & McAdoo, 1995; Roeser & Eccles, 1998; Way & Robinson, 2003 ) among ethnic minority and/or low-income adolescents. This body of research suggests that when adolescents perceive warmth, support, and safety in their family, peer, and school environments, they experience more positive self-perceptions as well. However, our understanding of the contextual predictors of self-esteem has several significant limitations. First, longitudinal research has been limited and the existing research has often been short in duration (e.g., Way & Robinson, 2003 ), potentially obscuring any change that occurs gradually over time. 
Second, while peer and family relationships have been examined together (e.g., Coates, 1985; Zimmerman & Maton, 1992 ), rarely has the school context been included as well, precluding an understanding of the relative and combined effects of these three contextual-level variables on self-esteem. Third, these contexts are typically conceptualized as static entities, measured at only one point in time, rather than dynamic entities that themselves change over time (e.g., Way & Robinson, 2003). Finally, with a few exceptions (e.g., Way & Robinson, 2003 ), studies have focused on Black or African American adolescents, precluding an understanding of the ways in which the family, peer, and school contexts contribute to the self-esteem of Latino and Asian American adolescents. The Current Study The current study will investigate self-esteem trajectories in a sample of Black, Latino, and Asian American adolescents. A growth curve analysis using hierarchical linear modeling (Bryk & Raudenbush, 1987, 1992 ) will be conducted to examine trajectories of self-esteem, as well as individual (i.e., gender and ethnicity) and contextual (family, peers, and school) predictors of within- and between-person difference in growth trajectories. It is hypothesized that self-esteem will demonstrate a linear increase with age. It is predicted that Black adolescents will report higher self-esteem than their peers, as well as a flatter increase over time, while Asian Americans will experience the lowest levels of self-esteem, but the sharpest increases over time. No predictions are made regarding gender differences in self-esteem because of the inconsistency of gender differences in samples of ethnic minority youth. In addition, it is predicted that family support, friendship support, and school climate will be positively related to changes in self-esteem over time, and will also be related to individual differences in growth in self-esteem. 
Interactions between gender and ethnicity and family support, friendship support, and school climate will be explored, but no specific predictions are made because of the lack of research in this area. METHOD Participants Participants in the current study are part of a larger longitudinal study of adolescent development. All participants were students at a public high school on the Lower East Side of New York City. The majority of students in this school come from lower and working class families, with 90% of the student population eligible for federal assistance through the free lunch program. The data were collected in the fall semesters of 1996 (Time 1), 1997 (Time 2), 1998 (Time 3), 1999 (Time 4), and 2000 (Time 5). Upon entry to the study, all participants were in the ninth or tenth grade. Eighty-five percent of students who were informed of the study in 1996 agreed to participate, and greater than 95% of those who remained in the school each of the four additional years of the study chose to continue to participate. The current study will focus only on the 205 students (53% female) who had complete data for a minimum of two (27%), and a maximum of five (26%) time points, with the majority having complete data for three or more time points (73%). Independent samples t-tests were used to explore any possible differences between students who had complete data at all five time points and those who did not. No significant differences were found on any predictor or outcome variables. Participants were racially and ethnically diverse, including 24% Black [predominantly African American (68%) and West Indian (25%)], 48% Latino [predominantly Dominican (39%) and Puerto Rican (55%)], 18% Asian American [predominantly Chinese American (96%)], and 10% bi- or multi-racial [predominantly Black/Latino]. The racial/ethnic breakdown of the sample was comparable with the racial/ethnic breakdown of the school from which the sample was drawn.
Eighty percent of participants were born in the United States. Participants tended to come from single-parent homes (73%), and have mothers (74%) and fathers (72%) who were not educated beyond high school. Procedure Students were recruited for participation through their mainstream English classes. Informed consent was obtained from both parents and students and translated into Spanish and Chinese to accommodate non-English speaking parents. Questionnaires were administered to all students who returned a signed consent form at each time point. Questionnaires were administered by members of a racially diverse research team and distributed during English classes or lunch periods. Students who had finished high school by the fifth wave of data collection were contacted by postcards, and completed questionnaires in a university research lab. Students were paid $5.00 in return for completion of their questionnaires at Time 1, and $10.00 at Times 2 through 5. Measures Self-Esteem. Self-esteem was assessed at Times 1 through 5 with the Rosenberg Self-Esteem Scale (RSE; Rosenberg, 1965), a ten-item measure of global self-esteem. Participants are asked to respond on a five-point Likert scale ranging from "strongly disagree" to "strongly agree." For the purpose of the present analysis, a mean score was calculated as a measure of global self-esteem for each participant at each time point. The RSE was developed for use with high-school students, and its validity and reliability have been demonstrated repeatedly (e.g., Hagborg, 1993; Rosenberg, 1965; Scheier et al., 2000). Reliability was excellent in the current sample (α > .80 at each time point). Family Support. The Perceived Social Support for Family Scale (Procidano & Heller, 1983) was administered to assess perceptions of support received from family members at Times 1 through 5. Students were asked to respond "yes," "no," or "don't know" to 20 items concerning their experiences with their families.
For the purposes of the present analysis, the total number of positive responses was used as a summary score of perceived family support. The authors report good reliability and construct validity for this measure (Procidano & Heller, 1983), and the measure has been found to be reliable and valid in urban samples of ethnically and racially diverse adolescents (e.g., Tardy, 1985; Way & Leadbeater, 1999). Reliability was excellent in the current sample (α > .80 at each time point). Friendship Support. The Perceived Social Support for Friends Scale (Procidano & Heller, 1983) was administered to assess perceptions of support received from friends at Times 1 through 5. Students were asked to respond "yes," "no," or "don't know" to 20 items concerning their experiences with their friends. For the purposes of the present analysis, the total number of positive responses was used as a summary score of perceived friendship support. The authors report good reliability and construct validity for this measure (Procidano & Heller, 1983), and this measure, too, has been found to be reliable and valid in urban samples of ethnically and racially diverse adolescents (e.g., Tardy, 1985; Way & Leadbeater, 1999). Reliability was excellent in the current sample (α > .80 at each time point). Perceived School Climate. Perceived school climate was measured at Times 1 through 5 with a modified version of the School Climate Scale (Haynes, Emmons, & Comer, 1993). This version contains 33 items tapping three dimensions of school climate: student/student relations, teacher/student relations, and order and discipline. Students indicate their agreement with the items using a five-point Likert scale ranging from "strongly agree" to "strongly disagree." For the present analyses, a mean score was calculated for each participant as a measure of perceived school climate at each time point. 
This measure has demonstrated good reliability and validity in previous samples (Haynes et al., 1993), and reliability was excellent in the current sample (α > .80 at each time point). For those students who had finished high school by the fifth wave of data collection, reports of school climate at Time 5 were retrospective. However, there were no significant differences in reports of perceived school climate at Time 5 between those who remained in school and those who had graduated.

Data Analytic Method

In order to examine within- and between-person change in self-esteem, a growth curve analysis (see Rogosa & Willett, 1985; Willett et al., 1998) was conducted using Hierarchical Linear Models (HLMs) (Bryk & Raudenbush, 1987, 1992). A growth curve analysis using HLM confers numerous advantages over more traditional methods of investigating developmental change, such as repeated-measures analysis of variance (see Bryk & Raudenbush, 1987, 1992). HLM uses empirical Bayes estimation (Strenio, Weisburg, & Bryk, 1983) to derive the final growth estimates for each participant, drawing on information at the within-person (i.e., level-1) and between-person (i.e., level-2) levels of analysis. As a result, HLM affords more precise estimates of individual growth over time and greater power to detect predictors of individual differences in change, even with relatively small samples (Bryk & Raudenbush, 1987). This process also allows HLM to handle missing data, using any available data points to fit a growth trajectory for each participant. Finally, HLM can include covariates that themselves change over time. This is crucial to the current study, as we are interested in the dynamic relationships between changes in family support, friendship support, and school climate and self-esteem. 
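To make the two-level logic concrete, the sketch below simulates balanced data with the fixed effects later reported for this sample (intercept 4.02 at age 16, slope .08 per year) and fits a simple two-step growth model: a level-1 OLS line per participant, then level-2 averaging, with an empirical-Bayes-style shrinkage of the person-specific slopes. This is an illustrative re-implementation, not the authors' HLM software; all variances and variable names are assumptions, and a full HLM would also handle unbalanced and missing waves.

```python
import numpy as np

rng = np.random.default_rng(0)
n_subj = 205
ages_c = np.array([-2.0, -1.0, 0.0, 1.0, 2.0])  # ages 14-18, centered at 16

# Simulate person-specific growth curves (variances are hypothetical)
true_b0 = 4.02 + rng.normal(0, 0.4, n_subj)    # intercept at age 16
true_b1 = 0.08 + rng.normal(0, 0.05, n_subj)   # linear slope per year
y = true_b0[:, None] + true_b1[:, None] * ages_c + rng.normal(0, 0.2, (n_subj, 5))

# Level-1: a separate OLS regression line for each participant
X = np.column_stack([np.ones(5), ages_c])
coef = np.linalg.lstsq(X, y.T, rcond=None)[0]  # shape (2, n_subj)
b0_hat, b1_hat = coef

# Level-2: fixed effects as the mean of the person-specific estimates
print(b0_hat.mean(), b1_hat.mean())  # ≈ 4.02 and ≈ .08

# Empirical-Bayes flavor: shrink each slope toward the group mean,
# weighting by reliability (signal variance / total estimate variance)
var_est = b1_hat.var(ddof=1)
var_err = (0.2 ** 2) / np.sum((ages_c - ages_c.mean()) ** 2)  # OLS slope error var
lam = max(var_est - var_err, 0) / var_est                      # shrinkage weight
b1_eb = lam * b1_hat + (1 - lam) * b1_hat.mean()
```

In the two-step sketch each person's line is estimated in isolation; the shrinkage step mimics how HLM borrows strength across participants, pulling noisy individual slopes toward the group average and yielding the more precise growth estimates the text describes.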
RESULTS

Preliminary Analyses

Because of research suggesting "within-group" differences in psychological variables among subgroups of ethnic minority adolescents (e.g., Erkut & Tracy, 2002; Rumbaut, 1994), we examined possible differences between African American and West Indian adolescents, as well as between Puerto Rican and Dominican adolescents. Independent-samples t-tests were conducted to explore possible differences in levels of self-esteem, family support, friendship support, and school climate. No significant differences were found. Thus, for all subsequent analyses, African American and West Indian participants were grouped into a category labeled "Black," while Dominican and Puerto Rican participants were grouped together into the "Latino" category. Differences among Asian American participants were not explored because this subsample was predominantly Chinese American. Thus, for all growth models, three dummy variables (Black, Asian, and Bi/Multiracial) were constructed to represent the four ethnic groups in the study. The Latino group was selected as the reference group because of previous research indicating that the greatest differences in self-esteem are found between Latino and non-Latino students (Way & Robinson, 2003). Because participants were of different ages upon entry into the study, and participated in different numbers of waves of data collection, a variable (Mean Age) was computed to represent each participant's average age during the years in which he or she participated. This variable provides a check for possible cohort effects, as well as an additional check for attrition effects. Mean Age was included as a level-2 predictor in preliminary growth models, and we compared the fit of the full model (the model including Mean Age) to a reduced model. Mean Age was not a significant predictor of either the intercept or the growth rate, and did not improve the fit of the model. 
Thus, we did not include Mean Age as a control variable in subsequent analyses.

Descriptive Analyses

Descriptive analyses were conducted to examine the distributions of the predictor and outcome variables at each of the five time points. Means and standard deviations can be seen in Table 1. Mean changes in self-esteem as a function of gender and ethnicity can be seen in Figures 1 and 2, respectively.

Modeling Individual Growth in Self-Esteem

Unconditional Growth Models. Unconditional growth models were constructed in order to examine average growth in the population, as well as the between-person variability in growth. Results of the linear unconditional model can be seen in Table 2. The intercept of 4.02 indicates the mean level of self-esteem for the sample at age 16, while the significant positive slope coefficient indicates that self-esteem increased by .08 units per year of age on average. Examination of the random effects revealed significant heterogeneity in the intercept and slope coefficients. This result indicates that these coefficients should be allowed to vary at level-2, and that predictors of inter-individual differences should be explored. A quadratic age term ((age − 16)²) was added to the unconditional model in order to explore any possible acceleration in the growth of self-esteem (see Table 3). Neither the fixed nor the random effect was significant, although there was a trend towards significance (p < .10) for the random effect, suggesting some degree of inter-individual variability in the curvature of growth. A χ² test revealed that the addition of the quadratic term did not significantly improve the fit of the model. Thus, a linear model with random effects for both the intercept and slope was selected as best fitting growth in self-esteem in the population.

Demographic Predictors of Self-Esteem Trajectories. 
Dummy codes representing gender and ethnicity were entered into the level-2 equations in order to explore possible moderating effects of these variables on the growth parameters. Results of this model can be seen in Table 4. Examination of the fixed effects revealed significant effects of ethnicity on the intercept. Black adolescents reported higher self-esteem at age 16, while Asian American adolescents reported lower self-esteem at age 16, compared with Latino adolescents. In the equation predicting the linear growth rate, there was also a significant effect of ethnicity. Latino adolescents experienced a steeper increase in self-esteem compared with Black adolescents. Fitted growth curves as a function of ethnicity can be seen in Figure 3. Gender was not a significant predictor of inter-individual differences in either the intercept or the slope, although there was a trend towards significance in the equation predicting the intercept. In this equation, the positive coefficient representing gender suggests that boys experienced higher self-esteem than did girls at age 16. However, boys and girls experienced similar rates of change in self-esteem as a function of age. In a second model, a series of terms representing the interaction of gender and ethnicity (Male × Black, Male × Asian American, Male × Bi/Multiracial) was entered into the level-2 equations. There were no significant effects on either the intercept or the slope for any of the interaction terms, indicating that the ethnic differences in self-esteem trajectories were similar for boys and girls.

Unique Effects of Family Support, Friendship Support, and School Climate on Self-Esteem Trajectories. In order to examine the unique effects of the family, peer, and school contexts, a series of models was constructed to examine each context separately. 
Time-varying covariates were added to the level-1 equations in order to examine the associations between changes in the covariates and changes in self-esteem (Bryk & Raudenbush, 1992). All time-varying covariates were centered around each participant's unique mean averaged over time (i.e., group-mean centering) when entered into the level-1 equations. Although these variables were measured across time, their coefficients were fixed (i.e., not allowed to vary randomly). This was done because growth curve models can accurately estimate only as many random coefficients as half the number of waves of data (Bryk & Raudenbush, 1992). Thus, these coefficients can be thought of as "varying" rather than "random" (Kreft & De Leeuw, 1998). In addition, a variable representing each participant's mean for each of the time-varying covariates (Mean Family Support, Mean Friendship Support, Mean School Climate) was entered into the level-2 equations. These variables were centered around the mean for the sample (i.e., grand-mean centering) when entered into the level-2 equations. The inclusion of the means of the time-varying covariates at level-2 controls for the possibility that the coefficients estimated at level-1 are the result of stable between-person differences (Bryk & Raudenbush, 1992). In addition, the inclusion of these variables can answer questions regarding the ability of these covariates to predict inter-individual differences in self-esteem trajectories. Results revealed that within-person changes in self-esteem were significantly associated with changes in family support (γ = .02, p < .001) and friendship support (γ = .01, p < .05), indicating that increases in perceived family support and friendship support were associated with improvements in self-esteem. At level-2, individual differences in self-esteem at age 16 were predicted by mean family support (γ = .06, p < .001) and mean friendship support (γ = .07, p < .001). 
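The two centering schemes described above (group-mean centering of the level-1 time-varying covariate, grand-mean centering of its person-level mean at level-2) can be sketched on a toy long-format table; column names and values here are hypothetical, not taken from the study's data set.

```python
import pandas as pd

# Hypothetical long-format data: one row per participant per wave
df = pd.DataFrame({
    "id":       [1, 1, 1, 2, 2, 2],
    "fam_supp": [12, 14, 16, 8, 9, 10],  # made-up family-support scores
})

# Level-1 covariate: group-mean (person-mean) centering,
# leaving only within-person change over time
person_mean = df.groupby("id")["fam_supp"].transform("mean")
df["fam_supp_within"] = df["fam_supp"] - person_mean

# Level-2 predictor: each person's mean, grand-mean centered,
# capturing stable between-person differences
df["mean_fam_supp"] = person_mean - df["fam_supp"].mean()
```

Because the within-person term carries only deviations from each participant's own average, its coefficient reflects change over time, while the grand-mean-centered person mean at level-2 absorbs the stable between-person differences the text describes.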
These results indicate that adolescents who reported higher family support and higher friendship support, on average, also reported higher self-esteem at age 16. A trend towards significance was found for mean school climate (γ = .14, p < .10), suggesting that adolescents who reported more positive perceptions of school climate, on average, also reported higher self-esteem. Only mean family support was predictive of individual differences in the linear slope (γ = −.01, p < .05), indicating that adolescents who reported lower average levels of family support experienced a steeper increase in self-esteem over time compared with adolescents who reported higher average levels of family support.

Combined Effects of Family Support, Friendship Support, and School Climate on Self-Esteem Trajectories. Following the unique models, a combined model was constructed containing the individual- and contextual-level variables that had reached significance in the earlier models. Family support and friendship support were included as time-varying covariates at level-1 in order to examine the combined impact of these contexts on intra-individual change in self-esteem over time. At level-2, mean family support, friendship support, and perceived school climate were included as predictors of inter-individual differences in the intercept, while mean family support was also included in the equation predicting the linear slope. Gender and ethnicity were also included in the equation predicting the intercept, and ethnicity was included in the equation predicting the linear slope, because these variables had been significant (or had approached significance) in the demographic model. Results of this model can be seen in Table 4. Family support continued to be significantly associated with intra-individual growth in self-esteem, indicating that increases in family support were associated with increases in self-esteem over time. 
Changes in friendship support were no longer significantly associated with intra-individual change in self-esteem once family support was included in the model. At level-2, ethnicity remained a significant predictor of the intercept and the linear slope. In addition, both mean family support and mean friendship support were significant positive predictors of the intercept, while mean school climate demonstrated a trend towards significance. These results indicate that adolescents who reported higher levels of family and friendship support, and more positive perceptions of their school climate, on average, also reported higher levels of self-esteem at age 16. After including ethnicity in the model, mean family support was no longer a significant predictor of the linear slope. The final model examined possible interactions between gender and ethnicity and family and friendship support on within-person changes in self-esteem. Results of this model can be seen in the second column of Table 4. While the effects of family support were robust across gender and ethnicity, there was an interaction between ethnicity and friendship support, with Black adolescents showing a significantly stronger association between changes in friendship support and changes in self-esteem than their Latino peers (see Figure 4).

DISCUSSION

The current study provides the first application of growth curve modeling to the examination of self-esteem among ethnic minority adolescents from low-income families. Based on theory (e.g., Harter, 1999) and previous research examining developmental change in self-esteem (e.g., Baldwin & Hoffman, 2002; Marsh, 1989), we predicted that adolescents in the current study would report improvements in self-esteem with age. This hypothesis was confirmed, as it appears that adolescents experience increasingly positive self-perceptions with age. 
In addition, we predicted and found that adolescents differed significantly in both levels of self-esteem and the rate of change over time. This result is consistent with previous research using cluster analysis to examine self-esteem trajectories (Deihl et al., 1997; Hirsch & DuBois, 1991; Pahl et al., 2000; Zimmerman et al., 1997) and supports the necessity of using person-centered and idiographic statistical methods to examine developmental processes. Although our results support earlier investigations of self-esteem change among White, middle-class adolescents (Baldwin & Hoffman, 2002; Marsh, 1989), one striking difference is the lack of significant gender differences in the current study. Previous research with White, middle-class adolescents has consistently found that boys and girls experience different patterns of change in self-esteem over time (e.g., Baldwin & Hoffman, 2002; Block & Robins, 1993; Harter, 1993; Zimmerman et al., 1997). In the current study, boys and girls experienced similar average trajectories. This result supports and extends earlier research suggesting that patterns of gender differences are not identical across ethnic groups (e.g., Martinez & Dukes, 1991), and challenges the well-established assumption that boys experience higher self-esteem than girls throughout adolescence. The lack of gender differences in the current study, as well as in other samples of ethnic minority youth, suggests the influence of cultural and contextual factors in the development of self-esteem and underscores the need to examine mediators of these differences. Recent research has suggested that the stable and/or higher self-esteem found among Black girls may be accounted for by their higher levels of perceived attractiveness and confidence in popularity, and lower levels of concern about weight and social self-consciousness, relative to their White peers (Brown et al., 1998; Eccles, Barber, Jozefowicz Malenchuk, & Vida, 1999). 
Thus, the risk factors that may contribute to declines in self-esteem for White, middle-class girls may not be as salient, and/or as intimately tied to self-esteem, for Black girls. However, these factors have not been adequately studied among Latina and Asian American girls. Accordingly, future research should examine possible mediating variables in an ethnically diverse sample including Black, Latina, Asian American, and White adolescents. While boys and girls experienced similar average self-esteem trajectories, exploration of ethnic differences in the current sample revealed significant differences between groups. Consistent with our predictions and previous research (e.g., Crocker et al., 1994; Dukes & Martinez, 1994; Pahl et al., 2000; Phinney et al., 1997), Black adolescents reported higher self-esteem, and Asian American adolescents reported lower self-esteem, than their Latino peers. In addition, while an increase in self-esteem with age was found for the sample as a whole, as predicted, there were significant ethnic differences in the rate of change over time. The self-esteem of Black adolescents showed a flatter increase over time compared with that of Latino adolescents. This may be because of the relatively higher self-esteem of Black youth at the early ages of the study, and may therefore indicate a "ceiling" for self-esteem. Because the self-esteem of Latino adolescents increased at a greater rate over time, by the end of the study, Latino youth had achieved levels of self-esteem comparable with their Black peers. Unlike their Latino peers who "caught up" with their Black peers, Asian American adolescents continued to experience dramatically lower self-esteem even during the years of late adolescence. 
This finding is consistent with the small body of research that has examined self-esteem among Asian American adolescents (Crocker et al., 1994; Dukes & Martinez, 1994; Martinez & Dukes, 1997), as well as with qualitative data indicating that Asian American adolescents' self-perceptions are less positive than those of their Latino and Black peers (Way & Pahl, 1999). One reason may be that Asian American students often report more peer discrimination than Latino or Black adolescents (Rosenbloom & Way, 2004); recent research has linked peer discrimination with poor self-esteem among ethnic minority youth (Fisher et al., 2000). The finding may also reflect a lower priority within the Asian American community on enhancing self-esteem among youth than within the Black or Latino communities (Sodowsky, Kwan, & Pannu, 1995). Understanding why Asian American students consistently report significantly lower self-esteem than their peers is an important next step in self-esteem research. Conversely, as predicted, Black adolescents were found to report higher self-esteem compared with their peers. It has been suggested that strong social support may be a factor in the relatively high self-esteem of Black adolescents. Research examining the use of social support has demonstrated its importance as a coping strategy for Black adolescents (Chapman & Mullis, 2000; Maton, Teti, Corns, Vieira-Baker, & Lavine, 1996; McCreary, Slavin, & Berry, 1996). Thus, Black adolescents may effectively use their social networks to cope with stressors that might otherwise affect self-esteem (e.g., Wilson, 1989). The strong link between social relationships and self-esteem for Black adolescents is supported by our finding that friendship support had a greater impact on changes in self-esteem among Black adolescents compared with their Latino peers. 
Previous analyses with the current data set have found self-esteem and friendship quality to be strongly linked among Black adolescents (Rosenbaum, 2000). Taken together, these results suggest that peer relationships may play a particularly important role in Black adolescents' sense of self. Qualitative research could aid our understanding of the importance of peer relationships for the self-esteem and psychological well-being of Black youth. The final goal of the current study was to explore the unique and combined effects of the family, peer, and school contexts on self-esteem trajectories. Results revealed that when adolescents experienced increased support from family and friends, evaluations of the self were more positive as well. In addition, adolescents who experienced better family and peer relationships, and more positive perceptions of their school, also reported higher self-esteem, while adolescents who reported lower levels of family support reported greater increases in self-esteem over time. Additional analyses suggested that higher levels of family support were associated with higher levels of self-esteem, and therefore a smaller increase in self-esteem over time. These results provide strong support for the importance of the social context for adolescents' self-perceptions (Cooley, 1902), and suggest that positive family, peer, and school environments are important contributors to psychological well-being. When multiple contexts were examined together, the quality of family relationships emerged as most consistently and strongly related to self-esteem trajectories. 
This result supports previous research that has found family or parent relationships to be more closely tied to self-esteem and/or psychological well-being than peer relationships (e.g., Armsden & Greenberg, 1987; McCreary et al., 1996; Paterson, Pryor, & Field, 1995; Way & Robinson, 2003; Zimmerman & Maton, 1992), and highlights the ongoing importance of high-quality family relationships for psychological well-being during adolescence. The current results suggest that parents are a primary presence in their children's emotional lives throughout adolescence. The relatively weak association between school climate and self-esteem was surprising, as other longitudinal studies have found strong associations between school climate and self-esteem (Hoge et al., 1990; Way & Robinson, 2003). However, reports of perceived school climate at Time 5 were retrospective for students who had graduated by the fifth wave of data collection, which may have weakened the association between school climate and self-esteem. While the current study adds to our existing understanding of self-esteem processes, and particularly the dynamics of self-esteem among ethnic minority, low-income adolescents, there are several areas for further exploration. First, current advances in statistical methods and growth curve analysis will provide opportunities for greater understanding of the dynamic relationships between the family, peer, and school contexts and self-esteem. For example, research should explore cross-domain growth models (see Sayer & Willett, 1998; Willett & Sayer, 1996), allowing for the simultaneous modeling of growth in multiple domains. Second, reliance on questionnaire data alone limits our understanding of adolescents' perceptions of themselves and their environment. 
Use of integrative methods (e.g., combining qualitative and quantitative data) would provide a greater understanding of what self-esteem means to adolescents, of the meanings adolescents make of the important contexts of their lives, and of how they themselves see these contexts influencing their feelings about themselves and their psychological adjustment. An additional limitation is that the Time 5 school climate measure assessed both retrospective (i.e., for those who had already graduated) and current (i.e., for those who had not yet graduated) school experiences. Although additional analyses using reports of school climate from Times 1 to 4 yielded findings similar to those based on all waves of data, the inclusion of a variable containing retrospective accounts may have weakened the association between school climate and self-esteem. Future research should include additional indicators of the quality of peer relationships in order to provide support for the current findings. In sum, the current results indicate that Black, Latino, and Asian American youth from low-income families experience an increase in self-esteem throughout the course of adolescence. However, our results highlight important individual differences in self-esteem trajectories. Our findings that trajectories differed as a function of ethnicity, but not gender, underscore the necessity of exploring psychological processes in samples that include adolescents from a range of backgrounds, using methods that are sensitive to individual variation. Some of our most common assumptions about adolescent development (e.g., gender differences in self-esteem) will continue to be challenged as we expand our understanding of adolescents from diverse ethnic and socioeconomic backgrounds. 
Future research should continue to examine not only the patterns and predictors of mental health among diverse groups of adolescents, but also the ways in which families, peers, and schools can support the well-being of these young people. Only then will we have the tools necessary to help adolescents develop to their fullest and healthiest potentials.

Acknowledgements

This research is based on the Doctoral Dissertation of Melissa L. Greene in the Department of Psychology, New York University. The authors would like to thank Niall Bolger, Diane Hughes, Joanna Sattin, Esther Marron, and Benjamin Williams for their feedback and support. In addition, we would like to thank the National Science Foundation, the National Institute of Mental Health, and the William T. Grant Foundation for their generous support of this project.

References

American Association of University Women (AAUW) (1991). Shortchanging girls, shortchanging America: Full data report. Washington, DC: American Association of University Women Educational Foundation.
Armsden, G. C., & Greenberg, M. T. (1987). The inventory of parent and peer attachment: Individual differences and their relationship to psychological well-being in adolescence. Journal of Youth and Adolescence, 16, 427–453.
Bachman, J. J. (1970). Youth in transition (Vol. 2). Ann Arbor: University of Michigan Institute for Social Research, Survey Research Center.
Baldwin, S. A., & Hoffman, J. P. (2002). The dynamics of self-esteem: A growth-curve analysis. Journal of Youth and Adolescence, 2, 101–113.
Block, J., & Robins, R. W. (1993). A longitudinal study of consistency and change in self-esteem from early adolescence to early adulthood. Child Development, 64, 909–923.
Bronfenbrenner, U. (1979). The ecology of human development. Cambridge, MA: Harvard University Press.
Brown, K. M., McMahon, R. P., Biro, F. M., Crawford, P., Schreiber, G. B., Similo, S., Waclawiw, M., & Striegel-Moore, R. (1998). Changes in self-esteem in black and white girls between the ages of 9 and 14 years. Journal of Adolescent Health, 23, 7–19.
Bryk, A. S., & Raudenbush, S. W. (1987). Application of hierarchical linear models to assessing change. Psychological Bulletin, 101, 147–158.
Bryk, A. S., & Raudenbush, S. W. (1992). Hierarchical linear models. Newbury Park, CA: Sage.
Buhrmester, D., & Yin, J. (1997, March). A longitudinal study of friends' influence on adolescents' adjustment. Paper presented at the Meeting for the Society for Research on Child Development, Washington, DC.
Carlson, C., Uppal, S., & Prosser, E. C. (2000). Ethnic differences in processes contributing to the self-esteem of early adolescent girls. Journal of Early Adolescence, 20, 44–67.
Cauce, A. M. (1986). Social networks and social competence: Exploring the effects of early adolescent friendships. American Journal of Community Psychology, 14, 607–628.
Chapman, P. L., & Mullis, R. L. (2000). Racial differences in adolescent coping and self-esteem. The Journal of Genetic Psychology, 161, 152–160.
Coates, D. L. (1985). Relationships between self-concept measures and social network characteristics for Black adolescents. Journal of Early Adolescence, 5, 319–338.
Cole, D. A., Maxwell, S. E., Martin, J. M., Peeke, L. G., Seroczynski, A. D., Tram, J. M., Hoffman, K. B., Ruiz, M. D., Jacquez, F., & Maschman, T. (2001). The development of multiple domains of child and adolescent self-concept: A cohort sequential longitudinal design. Child Development, 72, 1723–1746.
Cooley, C. H. (1902). Human nature and the social order. New York: Scribner.
Coopersmith, S. (1967). The antecedents of self-esteem. San Francisco: W.H. Freeman.
Crocker, J., Luhtanen, R., Blaine, B., & Broadnax, S. (1994). Collective self-esteem and psychological well-being among White, Black, and Asian college students. Personality and Social Psychology Bulletin, 20, 503–513.
Deihl, L. M., Vicary, J. R., & Deike, R. C. (1997). Longitudinal trajectories of self-esteem from early to middle adolescence and related psychosocial variables among rural adolescents. Journal of Research on Adolescence, 7, 393–411.
Demo, D. H., Small, S. A., & Savin-Williams, R. C. (1987). Family relations and the self-esteem of adolescents and their parents. Journal of Marriage and the Family, 49, 705–715.
Dukes, R. L., & Martinez, R. (1994). The impact of ethgender on self-esteem among adolescents. Adolescence, 29, 105–115.
Eccles, J. S., Barber, B. L., Jozefowicz Malenchuk, O., & Vida, M. (1999). Self-evaluations of competence, task values, and self-esteem. In N. G. Johnson, M. C. Roberts, & J. Worell (Eds.), Beyond appearance: A new look at adolescent girls (pp. 53–83). Washington: American Psychological Association.
Erikson, E. H. (1968). Identity: Youth and crisis. New York: W.W. Norton & Company.
Erkut, S., & Tracy, A. J. (2002). Predicting adolescent self-esteem from participation in school sports among Latino subgroups. Hispanic Journal of Behavioral Sciences, 24, 409–429.
Feldman, S. S., & Elliott, G. R. (1990). At the threshold: The developing adolescent. Cambridge, MA: Harvard University Press.
Fenzel, L. M., Magaletta, P. R., & Peyrot, M. F. (1997). The relationship of school strain to school functioning and self-worth among urban, African American early adolescents. Psychology in the Schools, 34, 279–288.
Fisher, C. B., Wallace, S. A., & Fenton, R. E. (2000). Discrimination distress during adolescence. Journal of Youth and Adolescence, 29, 679–695.
Hagborg, W. J. (1993). The Rosenberg self-esteem scale and Harter's self-perception profile for adolescents: A concurrent validity study. Psychology in the Schools, 30, 132–136.
Harter, S. (1990). Causes, correlates, and the functional role of global self-worth: A life-span perspective. In R. J. Sternberg, & J. Kolligian (Eds.), Competence considered (pp. 67–98). New Haven, CT: Yale University Press.
Harter, S. (1993). Causes and consequences of low self-esteem in children and adolescents. In R. F. Baumeister (Ed.), Self-esteem: The puzzle of low self-regard (pp. 87–116). New York: Plenum.
Harter, S. (1999). The construction of the self. New York: Guilford.
Harter, S., Marold, D. B., & Whitesell, N. R. (1992). A model of psychosocial risk factors leading to suicidal ideation in young adolescents. Development and Psychopathology, 4, 167–188.
Haynes, N., Emmons, C., & Comer, J. P. (1993). Elementary and middle-school climate survey. Unpublished manuscript, Yale University Child Study Center.
Hirsch, B. J., & DuBois, D. L. (1991). Self-esteem in early adolescence: The identification and prediction of contrasting longitudinal trajectories. Journal of Youth and Adolescence, 20, 53–72.
Hoge, D. R., Smit, E. K., & Hanson, S. L. (1990). School experiences predicting changes in self-esteem of sixth- and seventh-grade students. Journal of Educational Psychology, 82, 117–127.
Jacobs, J. E., Lanza, S., Osgood, D. W., Eccles, J. S., & Wigfield, A. (2002). Changes in children's self-competence and values: Gender and domain differences across grades one through twelve. Child Development, 73, 509–527.
Kling, K. C., Hyde, J. S., Showers, C. J., & Buswell, B. N. (1999). Gender differences in self-esteem: A meta-analysis. Psychological Bulletin, 125, 470–500.
Kreft, I., & De Leeuw, J. (1998). Introducing multilevel models. Thousand Oaks, CA: Sage.
Kuperminc, G., Leadbeater, B. J., Emmons, C., & Blatt, S. J. (1997). Perceived school climate and problem behaviors in middle-school students: The protective function of a positive educational environment. Journal of Applied Developmental Science, 1, 76–88.
Luster, T., & McAdoo, H. P. (1995). Factors related to self-esteem among African American youths: A secondary analysis of the High/Scope Perry preschool data. Journal of Research on Adolescence, 5, 451–467.
Marsh, H. W. (1989). Age and sex effects in multiple dimensions of self-concept: Preadolescence to early adulthood. Journal of Educational Psychology, 81, 417–430.
Martinez, R. O., & Dukes, R. L. (1991). Ethnic and gender differences in self-esteem. Youth and Society, 22, 318–338.
Martinez, R. O., & Dukes, R. L. (1997). The effects of ethnic identity, ethnicity, and gender on adolescent well-being. Journal of Youth and Adolescence, 26, 503–516.
Maton, K. I., Teti, D. M., Corns, K. M., Vieira-Baker, C. C., & Lavine, J. R. (1996). Cultural specificity of support sources, correlates and contexts: Three studies of African-American and Caucasian youth. American Journal of Community Psychology, 24, 551–587.
McCarthy, J. D., & Hoge, D. R. (1982). Analysis of age effects in longitudinal studies of adolescent self-esteem. Developmental Psychology, 18, 372–379.
McCloyd, V. C. (1998). 
Changing demographics in the American population: Implications for research on minority children and adolescents. In V. C. McLloyd & L. Steinberg (Ed.), Studying minority adolescents conceptual, methodological, and theoretical issues (pp. 3 [-] 20). Mahwah, NJ: Lawrence Erlbaum Associates. ? McCreary, M. L., Slavin, L. A., & Berry, E. J. (1996). Predicting problem behavior and self-esteem among African American adolescents. Journal of Adolescent Research, 11, 216 [-] 234. [ISI Abstract] [CSA Abstract] ? Mead, G. H. (1934). Mind, self and society. Chicago: University of Chicago Press. ? Mullis, A. K., Mullis, R. L., & Normandin, D. (1992). Cross-sectional and longitudinal comparisons of adolescent self-esteem. Adolescence, 27, 51 [-] 61. [ISI Abstract] ? O'Malley, P. M., & Bachman, J. G. (1983). Self-esteem: Change and stability between ages 13 and 23. Developmental Psychology, 19, 257 [-] 268. [CrossRef Abstract] ? Openshaw, D.K, Thomas, D. L., & Rollins, B. C. (1984). Parental influence of adolescent self-esteem. Journal of Early Adolescence, 4, 259 [-] 274. [CSA Abstract] ? Overholser, J. C., Adams, D. M., Lehnert, K. L., & Brinkman, D. C. (1995). Self-esteem deficits and suicidal tendencies among adolescents. Journal of the American Academy of Child and Adolescent Psychiatry, 34, 919 [-] 928. [CrossRef Abstract] [ISI Abstract] ? Pahl, K., Greene, M., & Way, N. (2000, March). Self-esteem trajectories among urban, low-income, ethnic minority high school students. Poster presented at the Meeting of the Society for Research on Adolescence, Chicago, IL. ? Paterson, J., Pryor, H., & Field, J. (1995). Adolescent attachment to parents and friends in relation to aspects of self-esteem. Journal of Youth and Adolescence, 24, 365 [-] 375. [CrossRef Abstract] [ISI Abstract] [CSA Abstract] ? Phinney, J. S., Cantu, C. L., & Kurtz, D. A. (1997). Ethnic and American identity as predictors of self-esteem among African American, Latino, and White adolescents. 
Journal of Youth and Adolescence, 26, 165 [-] 185. [CrossRef Abstract] [ISI Abstract] [CSA Abstract] ? Procidano, M., & Heller, K. (1983). Measure of perceived social support from friends and family. American Journal of Community Psychology, 11, 1 [-] 24. [CrossRef Abstract] [ISI Abstract] ? Roeser, R., & Eccles, J. (1998). Adolescents' perceptions of middle-school: Relation to longitudinal changes in academic and psychological adjustment. Journal of Research on Adolescence, 8, 123 [-] 158. [ISI Abstract] [CSA Abstract] ? Rogosa, D. R., & Willett, J. B. (1985). Understanding correlates of change by modeling individual differences in growth. Psychometrika, 50, 203 [-] 228. [ISI Abstract] ? Root, M. P. P. (1995). The psychology of Asian American women. In H. Landrine (Ed.), Bringing cultural diversity to feminist psychology: Theory, research, and practice (pp. 265 [-] 301). Washington, D.C.: American Psychological Association. ? Rosenberg, M. (1965). Society and the adolescent self-image. Princeton, NJ: Princeton University Press. ? Rosenberg, M. (1979). Conceiving the self. New York: Basic Books. ? Rosenberg, M. (1985). Self-concept and psychological well-being in adolescence. In R. L. Leahy (Ed.), The development of the self (pp. 205 [-] 246). New York: Academic Press. ? Rosenberg, M. (1986). Self-concept from middle childhood through adolescence. In J. Suls, & A. G. Greenwald (Eds.), Psychological perspectives on the self (pp. 107 [-] 136). Hillsdale, NJ: Lawrence Erlbaum Associates. ? Rosenbaum, G. (2000). An investigation of the ecological factors associated with friendship quality in urban, low-income, racial and ethnic minority adolescents. Unpublished doctoral dissertation, New York University ? Rosenbloom, S. R., & Way, N. (2004). Experiences of discrimination among African American, Asian American, and Latino adolescents in an urban high school. Youth and Society, 35, 420 [-] 451. [CrossRef Abstract] [ISI Abstract] [CSA Abstract] ? Rotheram-Borus, M. 
J., Dopkins, S., Sabate, N., & Lightfoot, M. (1996). Personal and ethnic identity, values and self-esteem among Black and Latino adolescent girls. In B. J. R. Leadbeater, & N. Way (Eds.), Urban girls: Resisting stereotypes, creating identities (pp. 35 [-] 52). New York: New York University Press. ? Rumbaut, R. G. (1994). The crucible within: Ethnic identity, self-esteem, and segmented assimilation among children of immigrants. International Migration Review, 28, 748 [-] 794. [ISI Abstract] [CSA Abstract] ? Sayer, A. G., & Willett, J. B. (1998). A cross-domain model for growth in adolescent expectancies. Multivariate Behavioral Research, 33, 509 [-] 543. [ISI Abstract] ? Scheier, L. M., Botvin, G. J., Griffin, K. W., & Diaz, T. (2000). Dynamic growth models of self-esteem and adolescent alcohol use. Journal of Early Adolescence, 20, 178 [-] 209. [ISI Abstract] ? Sodowsky, G. R., Kwan, K. K., & Pannu, R. (1995). Ethnic identity of Asians in the United States. In J. G. Ponterotto, J. M. Casas, L. A. Suzuki, & C. A. Alexander (Eds.), Handbook of multicultural counseling (pp. 123 [-] 154). Thousand Oaks, CA: Sage. ? Strenio, J. L. F., Weisberg, H. I., & Bryk, A. S. (1983). Empirical Bayes estimation of individual growth curves parameters and their relationship to covariates. Biometrics, 39, 71 [-] 86. [ISI Abstract] ? Tardy, C. H. (1985). Social support measures. American Journal of Community Psychology, 13, 187 [-] 202. [CrossRef Abstract] [ISI Abstract] ? Walker, L. S., & Greene, J. W. (1986). The social context of adolescent self-esteem. Journal of Youth and Adolescence, 15, 315 [-] 322. [ISI Abstract] ? Wallace, J. R., Cunningham, T. F., & Del Monte, V. (1984). Change and stability in self-esteem between late childhood and early adolescence. Journal of Early Adolescence, 4, 253 [-] 257. ? Way, N., & Chen, L. (2000). The characteristics, quality, and correlates of friendships among African American, Latino, and Asian American Adolescents from low-income families. 
Journal of Adolescent Research, 15, 274 [-] 301. [ISI Abstract] [CSA Abstract] ? Way, N., & Leadbeater, B. J. (1999). Pathways toward educational achievement among African American and Puerto Rican adolescent mothers: Reconsidering the role of family support. Development and Psychopathology, 11, 349 [-] 364. [CrossRef Abstract] [ISI Abstract] ? Way, N., & Pahl, K. (1999). Friendship patterns among urban adolescent boys: A qualitative account. In M. Kopala, & L. A. Suzuki (Eds.), Using qualitative methods in psychology (pp. 145 [-] 161). Thousand Oaks, CA: Sage. ? Way, N., & Robinson, M. G. (2003). A longitudinal study of the effects of family, friends, and school experiences on the psychological adjustment of ethnic minority, low-SES adolescents. Journal of Adolescent Research, 18, 324 [-] 326. [CrossRef Abstract] [ISI Abstract] [CSA Abstract] ? Wilgenbusch, T., & Merrell, K. W. (1999). Gender differences in self-concept among children and adolescents: A meta-analysis of multidimensional studies. School Psychology Quarterly, 14, 101 [-] 120. [ISI Abstract] ? Willett, J. B., & Sayer, A. G. (1996). Cross-domain analysis of change over time: Combining growth modeling and covariance structure analysis. In G. A. Marcoulides, & R. E. Schumacker (Eds.), Advanced structural equation modeling (pp. 125 [-] 157). Hillsdale: NJ: Erlbaum. ? Willett, J. B., Singer, J. D., & Martin, N. C. (1998). The design and analysis of longitudinal studies of development and psychopathology in context: Statistical models and methodological recommendations. Development and Psychopathology, 10, 395 [-] 426. [CrossRef Abstract] [ISI Abstract] ? Wilson, M. N. (1989). Child development in the context of the Black extended family. American Psychologist, 44, 380 [-] 383. [CrossRef Abstract] [ISI Abstract] [CSA Abstract] ? Zimmerman, M. A., Copeland Laurel, A., Shope, J. T., & Dielman, T. E. (1997). A longitudinal study of self-esteem: Implications for adolescent development. 
Journal of Youth and Adolescence, 26, 117 [-] 141. [CrossRef Abstract] [ISI Abstract] [CSA Abstract] ? Zimmerman, M. A., & Maton, K. I. (1992, March). Self-esteem, social support, and life stress: A regression analysis among male African American adolescents. Paper presented at The Society for Research on Adolescence, Washington, D.C. Correspondence Requests for reprints should be sent to Melissa L. Greene, Department of Psychiatry, Weill Medical College of Cornell University, White Plains, New York 10605. E-mail: mlg2004 at med.cornell.edu Image Previews TABLE 1 Predictor and Outcome Variable Means (and Standard Deviations) TABLE 2 Unconditional Growth Models for Self-Esteem TABLE 3 Demographic Predictors of Self-Esteem Trajectories TABLE 4 Combined Effects of Family Support, Friendship Support, and School Climate on Trajectories of... FIGURE 1 Mean changes in self-esteem as a function of age and gender. FIGURE 2 Mean changes in self-esteem as a function of age and ethnicity. FIGURE 3 Fitted growth curves for self-esteem as a function of ethnicity. FIGURE 4 Intraindividual change in self-esteem as a function of friendship support and ethnicity. From checker at panix.com Wed May 25 00:47:48 2005 From: checker at panix.com (Premise Checker) Date: Tue, 24 May 2005 20:47:48 -0400 (EDT) Subject: [Paleopsych] Harper's Magazine: The pages of sin: indulging in the seven deadlies Message-ID: The pages of sin: indulging in the seven deadlies Harper's Magazine, Jan, 2005 by [7]Arthur Krystal http://www.24hourscholar.com/p/articles/mi_m1111/is_1856_310/ai_n8694479 et seq. [First of four articles on sin.] Discussed in this essay: Envy, by Joseph Epstein. Oxford University Press, 2003. 109 pages. $17.95. Gluttony, by Francine Prose. Oxford University Press, 2003. 108 pages. $17.95. Lust, by Simon Blackburn. Oxford University Press, 2004. 151 pages. $17.95. Greed, by Phyllis A. Tickle. Oxford University Press, 2004.97 pages. $17.95. Anger, by Robert A. F. Thurman. 
Oxford University Press, 2004. 125 pages. $17.95.
Sloth, by Wendy Wasserstein. Oxford University Press, 2005. 112 pages. $17.95.
Pride, by Michael Eric Dyson. Oxford University Press, forthcoming.

The bad news is that we are born sinners; the good news (Gospel literally means good news) is that we can make things right through repentance. So Scripture, or the Catholic Church, tells us. It also tells us that along with sin there is Sin. Original sin, about which we can do nothing (except strive for grace), issues from man's first disobedience. Eve ate of the apple, enticed Adam to eat of it as well, and all of us, as a result, are rotten at the core. God, however, does not refer to this as a sin; rather it was Augustine of Hippo who peered closely and identified the hereditary stain on our souls. The word "sin" actually makes its first appearance in the Bible after Cain becomes angry with God for favoring Abel's offering of choice cuts of meat over Cain's own assortment of fruit. God doesn't care for Cain's attitude and says: "If you do not do well, sin is couching at the door; its desire is for you, but you must master it." By then, however, it's too late. The apple had done its work: Cain invites Abel out to the field, and in time, as men multiplied over the face of the earth, wickedness and violence were everywhere. Properly vexed, He sent His flood, sparing only the 600-year-old Noah, his wife and sons, and some animals. This should have been enough to give Noah's descendants pause--but no, they too acted up, behaving sodomishly and gomorrishly, praying to false gods and the like. This time, however, God restrained Himself. Instead of wiping out the race of men, He gave them his Ten Commandments, the first doctrinal instance of supernal rules of behavior, from which our concept of the sins derives.
In addition to instructions about honoring God and parents and keeping the Sabbath, there are those well-known but woefully ineffective proscriptions against murder, adultery, stealing, lying, coveting, and lusting. How, one can't help wondering, have we avoided another flood? Christianity offers one answer: God sent us Jesus instead. It is Christ who came to suffer for our sins and to cleanse us of them. Whether or not we avail ourselves of the opportunity, Jesus certainly altered how we regard sin. The sin that wends its way through the Old Testament usually takes the form of flouting God's will; it seems more a dereliction of duty--rather brave and exceedingly stupid, considering Yahweh's obvious bad temper--than an absence of faith. It also appears as something external to man, something "couching at the door." Jesus, however, saw sin differently and put it where it belongs: in us. Whereas Yahweh demands strict obedience, Jesus expects something besides: "You have heard that it was said to the men of old, 'You shall not kill; and whoever kills shall be liable to judgment.' But I say to you that everyone who is angry with his brother shall be liable to judgment." The Sermon on the Mount is nothing less than a corrective to the Ten Commandments. If the Commandments told man how to behave, the Sermon told him how to feel. Unfortunately, Christ's life and death did not automatically generate a radiant and immutable theology. As Christianity evolved over the course of several centuries, the Church fathers not only leaned on the teachings of the apostles but also borrowed from Pharisaic texts, Hellenistic mystery cults, and Neoplatonic cosmology. Ecclesiastical councils were convened to determine whether Jesus' body was as divine as his spirit, and whether he was equal to, or only a subset of, God. The Church may have been built upon the rock that was Peter, but it found its hierarchical perspective in the caves of Plato and the writings of Aristotle.
If God's rule is Judaic and God's love is Christian, then God's reason is Greek. In a world created by an all-powerful intelligence, order and symmetry presided. Nature consisted of a series of graded existences, a chain of being--"Homer's golden chain," as Plotinus wrote--from the simple to the most complex, from the lowest and basest to the highest and best. The Church fathers, therefore, took a dim view of anything that distorted such a picture or that obscured its beauty and wisdom--a perspective that continued well into the Enlightenment. Thus worshiping the world--the divine organizing principle that informed all things--was another way of worshiping God, as long as one didn't grasp at earthly pleasures at the expense of seeing the bigger picture. In effect, the Church, having gotten its philosophical bearings, had decided that fear of hell, although necessary, was not sufficient. Thoughts and behavior offensive to God were also an affront to Nature, and sin was nothing less than a violation of the natural order, an upending of a set balance, a small tear in the divine fabric. And where Nature was concerned, one did not so much make distinctions as take inventory of those that already existed. There were four basic elements, ten heavenly spheres, four cardinal humors, four classical virtues, seven Christian virtues, and a specific number of sins. (You could fiddle with the list, but there had to be a list.) Reason moved the spheres, kept everything in alignment, and extended even to hell; the reason not to sin was Reason itself. With that established, the Church could turn its compartmentalizing mind to defining rules and responsibilities, assigning values to various kinds of behavior. So how many sins were there, and what were their respective degrees of badness? 
Proverbs notes: There are six things which the LORD hates, seven which are an abomination to him: haughty eyes, a lying tongue, and hands that shed innocent blood, a heart that devises wicked plans, feet that make haste to run to evil, a false witness who breathes out lies, and a man who sows discord among brothers. Theft and adultery are absent from this list, but there were still the Commandments, the Sermon on the Mount, and the apostles, who had plenty to say about sinning. There was no shortage of sins for the Church fathers to choose from, and if they required assistance, Paul né Saul was only too happy to oblige them. In his Epistle to the Colossians, Paul denounces "fornication, impurity, passion, evil desire, and covetousness, which is idolatry ... anger, wrath, malice, slander, and foul talk." In Romans, he comes down equally on same-sex relations, envy, murder, strife, deceit, malignity, gossip, slander, insolence, haughtiness, disobedience, foolishness, heartlessness, and ruthlessness. Fine distinctions were not Paul's forte--in Corinthians, he lumps the effeminate with liars, thieves, and extortionists. A pattern emerges: "If you live according to the flesh you will die, but if by the Spirit you put to death the deeds of the body you will live." Over time, Scripture's cautionary words about behavior would become canonical law, with only slight variations between Jews and Christians (mainly in the matter of conjugal relations). But before the sins became seven or Deadly, they were first Cardinal or Capital and amounted to eight. In written form they materialize in the works of Evagrius Ponticus, a monk (c. 345-399), who identified in men eight "evil thoughts." John Cassian (c.
360-435), another monk, soon Latinized these thoughts as eight vitia, or faults; in ascending order of seriousness they were: gula (gluttony), luxuria (lust), avaritia (avarice), tristitia (sadness), ira (anger), acedia (spiritual lethargy), vana gloria (vanity), and superbia (pride). Cassian also proposed that each sin summons the next one in the chain. * ("Summon" because the sins were identified with external demonic forces that could enter and poison the mind.) Two centuries later, Pope Gregory I (c. 540-604) officially adopted the list, modifying it slightly by folding vainglory into pride, merging lethargy and sadness, and adding envy. Pride now became the sin responsible for all the others (an idea later taken up by Thomas Aquinas), and, from bad to worst, Gregory's list includes: lust, gluttony, avarice, sadness (or melancholy), anger, envy, and pride. (Sloth would replace melancholy only in the seventeenth century.) Still, there remained the business of classification. Catholic dogma divides sin into two general categories--commission and omission--and, in each case, the malice and gravity must be determined. As regards malice, sins may partake of ignorance, passion, and infirmity; as regards gravity, they are either mortal or venial (pardonable). "All wrongdoing is sin, but there is sin which is not mortal" (1 John 5:17). A mortal, or cardinal, sin was defined by Augustine as Dictum vel factum vel concupitum contra legem aeternam--something said, done, or desired contrary to the eternal law. Thus, a mortal sin is always voluntary, whereas a venial sin may contain little or no malice or be committed out of ignorance. The Church even makes allowances for those sinners whose ignorance is "invincible." But be very, very careful if you are not one of the invincibly ignorant. ** Of course, if you are, you won't know it--and, well, don't eat any apples.
Conceived by monks for monks, the seven deadly sins took hold in the popular imagination, though probably not in equal measure. One can see how lust and gluttony would be a bother in a monastery, but should the secular poor not dig in if the opportunity presented itself? Still, the unmagnificent seven were a handy compendium, available to priests, parents, poets, and artists. Brueghel, Bosch, Donizetti, Dante, Rabelais, Spenser, and John Bunyan took the seven as a subject, and neither by pen nor brush did they let them off lightly. Sin sent you straight to hell, where freezing water awaited the envious; dismemberment, the angry; snakes, the lazy; boiling oil, the greedy; fire and brimstone, the lustful. The sins, it bears repeating, were real external influences, and they were waiting for you. Aquinas, who did his Aristotelian best to elucidate the finer points of sin, reserves "mortal" for offenses committed against nature (e.g., murder and sodomy); for exploiting the less fortunate; and for defrauding workmen of their wages, which nicely raises the stakes involved in screwing over one's employees. Mortal also splits into the spiritual (blasphemy) and the carnal (adultery), the commission of which puts a stain (macula) on the soul. The sins of the flesh, born of the flesh, however, are less serious than sins of the spirit. In fact, the greater the carnal nature of the sin, Aquinas argues, the less culpability is involved. Oddly enough, Paul might agree, since he's pretty sure that nothing good dwells within the flesh, and although he "can will what is right, [he] cannot do it." So if Paul does what he doesn't want to do, it's sin, not himself, that's at fault. Sin, of course, became even less manageable during the Reformation. In fact, the very words Luther overheard that led to his break from the Church were "I believe in the forgiveness of sins." But what did that mean exactly? Only that forgiveness was not in our power to effect.
In general terms, Protestant doctrine--which, of course, rejected the Church as the intermediary between God and man, thereby rejecting the Church's right to forgive our sins--held that the fulfillment of God's will cannot be affected by our will. Grace, in other words, comes about not through good works but through the goodness of God, about which we cannot presume. Sin exists: live with it, die with it, and hope that God forgives you for it. That doesn't mean you can do as you like, but it does mean that confession--however good for the soul--isn't good enough for absolution. One may surrender the self to God, in which case grace may miraculously descend, but it will not be as a reward for such surrender. Thus a certain helplessness exists not only in having been born in sin but also in being unable to do anything about it. Tellingly, Jesus himself doesn't really harp on sin. Sin is regrettable, to be sure, but also pardonable. There is, however, one sin that is unforgivable: "Every sin and blasphemy will be forgiven men," Jesus says in Matthew 12:31-32, but ... "whoever speaks against the Holy Spirit will not be forgiven, either in this age or in the age to come." The Unforgivable Sin makes the seven deadly ones look piddling by comparison. And, truth to tell, the seven sins are not in and of themselves all that exciting; it's what frenzied or slothful people do with them that's peculiar or outrageous. Angry? Envious? Lustful? Well, who hasn't been? Moreover, who cares? Certainly an excess of any one of the sins, or some nasty combination of them, may not win you friends, but who truly believes that the bad-tempered, the envious, or the lazy are going to hell? Even the all-too-pleasant idea of bastards like Mengele or Stalin eternally roasting together is credited only by scriptural literalists. If the Seven Deadlies don't exactly make us cower in fear, they can at least serve as fodder for the literati.
Ian Fleming certainly thought so when, as a member of the editorial board of the London Times in the late 1950s, he asked, among other luminaries, W. H. Auden, Cyril Connolly, and Angus Wilson to weigh in on a particular sin. The resulting smallish book might be described, if one were given to verbal raffishness, as sinfully entertaining. The essays are urbane, knowing, and casual--one or two almost too casual--and bear out Fleming's own assessment: "How drab and empty life would be without these sins, and what dull dogs we all would be without a healthy trace of many of them in our make up!" Of the seven contributors, only Auden (on anger) and Evelyn Waugh (on sloth) take marked exception to their subjects, perfectly understandable given their religious beliefs. Waugh takes the lazy to task in high style, suggesting that any show of indulgence is unwarranted: "Just as he is a poor soldier whose sole aim is to escape detention, so he is a poor Christian whose sole aim is to escape Hell." Auden, meanwhile, sends anger down some subtle byways: "To speak of the Wrath of God cannot mean that God is Himself angry." Because the laws of the spiritual life are the very laws that define our nature, Auden suggests that we can defy but never break them. Should any souls wind up in Hell, "it is not because they have been sent there, but because Hell is where they insist upon being." Over the years other writers have sallied forth with varying success to confront the Seven Deadlies, usually with our best interests at heart. Unfortunately, the well-intentioned are more concerned with grace than with graceful prose, and their books will appeal only to the converted. The same, happily, cannot be said of the writers engaged in the new Oxford University Press series. Oxford has reprised Ian Fleming's project, commissioning seven slender volumes from seven contemporary scribes.
True, it seems a publishing ploy on the order of forming a boy band, but there is precedent--although, as before, the pairing of writer and sin is not immediately evident. Why would an Englishman be asked to write about lust? ponders Simon Blackburn: "Other nationalities are amazed that we English reproduce at all." In order of publication we now have, or can soon expect, Joseph Epstein on Envy, Francine Prose on Gluttony, Simon Blackburn on Lust, Phyllis A. Tickle on Greed, Robert A. F. Thurman on Anger, Wendy Wasserstein on Sloth, and Michael Eric Dyson on Pride. The books grew out of lectures delivered at the New York Public Library, and the final results, slim as they are, still feel a bit padded because a little sin does not go very far: the Brits were wise to keep them at essay's length. The trouble with sin nowadays is that there's no sting in the tale. Without a firm conviction in the soul's vertical passage, either up or down, sin is neutered, shorn of religious fear and loathing. Sin has to have some bite to it if it's going to make an impression on the page, which is not to say that the Oxford series is anything less than smart and civilized (and hence unpalatable to the sanctimonious who want their sins demonized). The authors, of course, recognize the dilemma of writing about sin for secularists. Gluttony, as Francine Prose observes, has become "an affront to prevailing standards of beauty and health rather than an offense against God." When sin has been co-opted by the helping professions, it should come as no surprise to learn that a congregation of French chefs recently petitioned the Vatican to remove gluttony from the list (though, apparently, it's more of a semantic dispute than a religious one). The Oxford series' charter is to make the sins relevant.
That means turning them into vices and character flaws by presenting a lot of anecdotal evidence concerning contemporary bad behavior. All well and good, but it is the early Christian references to sin and its divisions that constitute the books' main appeal. You can learn things here, mainly the philosophical, political, and economic evolution of the sins as culture and values change. Joseph Epstein's Envy wittily dissects the different forms that envy takes as it morphs into covetousness, Schadenfreude, snobbery, and ressentiment. And Phyllis Tickle, whose Notes constitute nearly a third of her book, is particularly good on greed. How many readers know that avarice was not originally defined solely as material greed but as "thinking about what does not yet exist"? Or that the first known Christian ecclesiastical court "was an adjudication of sorts involving the ownership of land and greed over its proceeds"? Ah, where are the sins of yesteryear? It's probably fair to say that we've become desensitized to the word, if not the Word. In those secular neighborhoods where sin has been replaced by morality and "cultural norms," people don't fail God so much as they fail themselves and one another. And given the influence of early traumatic experience, genetic makeup, and our peripatetic hormones, the condemnation, if not morality, of certain behaviors becomes problematic. It's not sin that besets us, it's poor impulse control, selfishness, and depression. Chemistry is fate, up to a point; and lust and gluttony are joined at the lip. Does that not absolve the obese adulterer of sin if not of wrongdoing? Unless one is a true believer, sin is a conceit rather than something waiting to pounce and drive us straight into the ground. The truth is, the concept of sin is not required to recognize contemptible and malignant behavior. Serious consequences, after all, attend certain acts whether we call them vices or sins. Is murder any less evil for being sinless? Hardly.
God's law aside, there is some behavior whose maliciousness is sufficient to tie the perpetrator to the rack. Hell merely simplifies the question of punishment. Even among the religious, there was and remains disagreement regarding the exact nature of our transgressions. Whom and what are we to believe? Luther, for example, decreed that all the sins of unbelievers are mortal sins, and all the sins of the faithful, with the exception of infidelity, are venial. Yet one can go all of one's life without committing adultery, and grace, according to Luther, is still not guaranteed. It is this unyielding moral absolutism that makes it possible to believe in God without taking the idea of sin too seriously. The momentous, the significant, fact of Creation is God, not man. On the other hand, if one is convinced that He sent his only begotten Son to save us, then the soul rather than Creation becomes the point. Got soul? Then you've got sin. Got soul, then you also have a body that houses it, and most of the cardinal sins, as we know, are associated with the body's unregulated appetites. If we were all purely spiritual entities, sin wouldn't be a problem. Nor, logically speaking, would Christianity. The point has been made before: Christianity is a religion of the body. The devout regard the body of Christ with unabashed fetishistic devotion (Mel Gibson's Passion does well to remind us of this), and although we can imaginatively divorce other religions from their founders--it didn't have to be Abraham per se or Muhammad or Buddha--we cannot accomplish this with Jesus. Christ is God; his arms, legs, tongue, and teeth are God. All the same, the body is pretty loathsome--just ask Paul--and the distinction that is sometimes made between the "sins of the flesh" and the body does not really work. Bodies are flesh; divinity is not. God does not bleed. 
But God did bleed when He took bodily form, and perhaps this helps explain the conflicting strains in Christianity regarding the sins. If God created the body in His image, should we not honor Him by using it to make ourselves happy? Well, that would depend on whom you ask. For most Christian theologians and lawgivers, separating the body from its trespasses and establishing what aspects of the body may be enjoyed without guilt are thorny issues. Just how much pleasure, if any, is allowed during procreation? Sin, however we think of it, is always a struggle with our own bodies. But the body isn't all bad, it isn't just a physical protuberance that we scale in order to reach God; it's the means of providing for consciousness. Because unless one is an implacable idealist, it's obvious that the mind needs the body to teach it to become mind, and mind is the means by which we conceive God. Thank God, then, for the body. That said, our affinity to Him is found in our ability to think. If He made us, he made us to reason. And if He made us, He also made us imperfectly (or the apple took care of that). And because we are imperfect, and because sin is pervasive, it's reasonable to assume that He wants us to become perfect by defeating sin. In effect, sin exists to make us worthy of Him. And the best way of showing that special affinity is to defeat sin through the gift He gave us--free will. We can therefore assume that He would want our devotion (if we can say that God "wants") to be a thoughtful devotion, for how much greater is faith when it comes through reason rather than from terror or insecurity or need of comfort. It's a struggle, of course--but, as Augustine said, He wants us to struggle. It's why free will and evil both exist. Aquinas tends to agree. Evil is part of His plan, and so it must be good. Indeed, it's what we must overcome to attain the supreme good that is God. 
Let us forget for the moment the Church's rules for keeping souls in line or the prospect of incurring God's wrath (which Auden was correct in doubting); sin is a question of wanting. We want wealth, power, and status; we want this man's money and that man's wife; we want to win, we want revenge, we want to rest. And whenever we want too much, we want Him less. Sin is a question of emphasis: the grasping at earthly happiness instead of reaching toward heaven. One might even say that the essence of sin is the attempt to secure happiness instead of being willing to receive it. Since the gift of true happiness comes from God, any undue attempt to attain it on earth casts suspicion on His power to bestow it. Again, if God's essence is mind--rational, perfect, perpetual, and precise--we can realize Him only through mind; and if the mind is clouded, disturbed, or in thrall to earthly delights, we're in trouble. So it's also a question of degree. How much pleasure or distraction is too much? As Blackburn sensibly notes in Lust, "If we build the notion of excess into the definition, the desire is damned simply by its name." In other words, we can enjoy ourselves so long as enjoyment doesn't blot out God--not something most of us want to think about when spooning toward the bottom of a pint of Chunky Monkey or gleefully eyeing the contents of our blue-chip portfolio. Ultimately, sin is a problem only for the sinful, which is another way of saying that the believer and the nonbeliever cannot shake hands across the spiritual divide. The secular not only reject the plausibility of sin; they may well wonder how anyone who really does believe in God could sin. I mean, there's hell to pay: twisting and turning in the fiery pit ... FOREVER! One would have to be an idiot to believe in sin and commit it too. But perhaps that's too simplistic. Perhaps the emphasis should be not on the sin but on the temptation to sin in the full knowledge that God exists. 
It's incorrect, then, to accuse Jimmy Swaggart and Jim Bakker of hypocrisy, or the hundreds of priests who abused young boys; that would mean they didn't believe. The truth is, they believed and they still couldn't help themselves, which is, in effect, the point: Without belief in the soul and the afterlife there is no sin. So who is more admirable: the virtuous who instinctively lead righteous lives, or the weak and easily tempted who put the lid on their lust or envy every waking moment? Isn't the alcoholic who refuses a drink more deserving of our respect than the teetotaler who thinks it's morally wrong to knock back a beer? On the whole, it helps to have sin around; it's like having a set of instructions for building a life that God approves of. We may have free will, but what are our choices when it comes to salvation? We can choose to do good or to do evil. Take away sin, however, and free will has no ballast, no epistemological basis of absolute moral certainty. Even if ethics is a "condition of the world, like logic," as Wittgenstein suggests, how in the world can it be demonstrated? Upon what blackboard would Wittgenstein have us look? What's a free moral agent to do? The obvious answer is: keep looking for answers, keep weighing the effect of behavior against the desire that prompts it and the satisfaction gained when indulging it. It's not a simple equation, and, like schoolchildren, we have to struggle to balance the equation's parts. That's one way of looking at it. Another is to dismiss with prejudice the idea of original sin, discount the prospect of souls becoming muddier the longer their sojourn on earth, and instead concentrate on doing good because goodness makes sense. If everyone did good (like "obscenity," we know "goodness" when we see it), the world probably would make sense. 
But human nature being what it is, we may have to pull up a chair in society's emergency room and settle in for a long wait. Those tired of working on the equation can always turn to Paul. Me, I prefer to look to a blonder, more bosomy expositor of morals and ruefully concede: "To err is human--but it feels divine." * Although there is no exact counterpart of the seven sins in Jewish literature, a rabbinic midrash (an instance of scriptural exegesis) enumerates seven successive steps leading to an individual's downfall, beginning with the refusal to study Torah and concluding with the denial of God himself. ** The Catholic Encyclopedia states: "No mortal sin is committed in a state of invincible ignorance or in a half-conscious state." Arthur Krystal's last review for Harper's Magazine, "Poet in the Machine," appeared in the February 2004 issue. From checker at panix.com Wed May 25 00:47:59 2005 From: checker at panix.com (Premise Checker) Date: Tue, 24 May 2005 20:47:59 -0400 (EDT) Subject: [Paleopsych] Raghavan Iyer: The Seven Deadly Sins: I. The Historical Context Message-ID: Raghavan Iyer: The Seven Deadly Sins: I. The Historical Context http://theosophy.org/tlodocs/SevenDeadlySins-1.htm He that is without sin among you, let him first cast a stone. John 8:7 Throughout Christian history, sin has functioned as the Archimedean lever of orthodox Christian morality. From the patristic period to the close of the Middle Ages, sin and its progeny exercised the imaginations of laymen and theologians alike, so much so that European society and culture are unintelligible to those unacquainted with sin. From the refinements of scholastic philosophy to the exuberance of popular fancy, sin functioned as a common measure of man for all alike and in every arena. Suffice it to say, this is not the case in the twentieth century. 
Indeed, any enquiry today into the seven deadly sins must have a certain quaintness which would itself be entirely unintelligible to an officer of the Inquisition. Even where there were doubts about the right response to sin and even its detailed nature, there was no more doubt of its reality in general than there is today regarding notions like progress. To enquire into sin today can, however, be instructive. Sin is, so to speak, a geologic formation in human history, largely obscured by recent deposits of events, but still there, not far beneath the social surface and obtruding visibly in certain places. To understand it in the past is to understand something of the supports of the present, as well as certain possibilities for the future. Not to understand it is like being haunted by the ghosts of dead ideas. The Christian notion of sin is, naturally, a successor to previous cultural conceptions. In particular, as one can see through the derivations of terms in the Indo-European tongues, sin and the sins reflect a crystallization of moral ideas around certain aspects of human nature and action. Activities and conditions that were morally neutral became charged with the electric force of sin and salvation, while other elements of human life once regarded as central to spirituality and ethics fell into conceptual and practical eclipse. Since the Renaissance and the Reformation, sin has been displaced by other conceptions and modalities, disclosing the pre-Christian era in a light that was not accessible during the period of Christian dominance, and also putting the era of sin in a not entirely favourable perspective. Hence, one can begin to examine the concept of sin not simply as a possession of Christianity and not simply as the precursor of certain contemporary moral and spiritual ideas, but as a specific approach to the articulation of elements in human life which antedate Christianity and also will be a part of the future. 
Viewed in this manner, one may ask what sorts of conceptions and ideas about human nature were assembled into the notion of sin. How were they modified in the process? What is there in the history of the idea of sin that illumines the timeless elements of human nature? And is there some way in which the collective experience of sin, the cultural living out of the idea over centuries, can be assimilated to serve the needs of the present? These and other related questions could be given a sharper focus by attaching a more specific meaning to the idea of contemporary moral and spiritual need. In particular, owing to the massive and pervasive violence of the twentieth century in every sphere, from the political to the social and psychological, it would be helpful to explore the historic development of the idea of sin and then to apply this enquiry to an understanding of violence. Despite the moral anomie of the present century, the idea of violence comes as close as any to arousing a universal moral concern comparable to that evoked by sin in earlier centuries. At least, like sin, violence is scarcely valued for its own sake. This cannot be said, however, for each of the specific modes of action and attitude identified in the past as deadly sins. Pride, for example, is often treated as an integral component of self-respect, a definite contemporary good. Gluttony, though not good for health and perhaps unattractive to spectators, certainly has its unabashed coterie. Such facts underline the necessity of recovering the historical meanings and content of sin and the seven deadly sins before attempting to relate them to contemporary moral realities such as pervasive violence. If one merely engages in perfunctory reflections on pride, avarice and the rest, this will neglect totally the force and substance of their lost status. Thus, one would overlook the longer-term threads of moral meaning once expressed in the notion of sin and now surrounding the notion of violence. 
To begin with the linguistic evidence, 'sin' comes from the Latin sons, 'guilty', (stem sont-, 'existing', 'real'), originally meaning 'real'. It is akin to the Old Norse sannr, 'true', 'guilty', from which come santh and eventually 'sooth' or 'the truth'. In Latin thought, according to Curtius, "Language regards the guilty man as the man who it was." The Old High German sin, 'to be', has the zero-grade form snt-ia, 'that which is', from the Latin root esse, 'to be', the Latin est, 'he is', the Greek esti, 'he is', the Sanskrit asti, 'he is', and perhaps also the Sanskrit satya, 'true' and 'real'. (The twenty-first letter of the Hebrew alphabet is sin, a variant of shen, 'tooth', from the shape of the letter, but is not related to the Indo-European 'sin'. Also, Sin, or zu-en, the Sumerian moon god, often rendered as en-zu, 'lord of wisdom', is unrelated to the term 'sin'. Furthermore, the relation to the Latin sinister, 'left', 'evil' and 'inauspicious', is an etymological speculation of unknown merit.) In Greek thought there is a significant distinction between the early Homeric conception of sinister acts which vitiate the relation between the agent and his or her environment, and the later conception of sinful acts considered in themselves morally wrong and hence offensive to the gods. Whereas the first meaning seems akin to the idea of ritual impurity, the second idea definitely involves the notion of specific moral misconduct. Thus, Theognis said that hubris -- overweening disregard of the rights of others -- arises out of koros -- a satiety such as when too much wealth attends a base man. Sophocles added that hubris results in a moral and prudential blindness, ate, where the evil appears good. Aeschylus explored the relation between such deeds and the rectifying principle of nemesis acting over successive generations, whilst the Orphics and Pythagoreans depicted its activity through successive reincarnations of the soul. 
In Roman thought there is also an older non-moral notion (scelus -- ill luck attendant upon violation of taboos -- and vitium -- a shortcoming in the performance of a ritual), which later gave way to a moral notion attached to misdeeds. Virgil portrayed heaven, hell and purgatory as the exclusive theatres for the experience of the consequences of moral misdeeds. Perhaps, like Plato, he thought misdeeds were equilibrated in both this world and the afterlife, but he was often misunderstood by Christian thinkers who took a one-life view. In the New Testament the Greek term translated as 'sin' is hamartia. It comes from the root hamart and the verb hamartano, originally meaning 'to miss', 'to miss the mark', and by extension 'to fail', 'to go wrong', 'to be deprived of', 'to lose', 'to err', 'to do wrong' or 'to sin'. As a substantive hamartia means generally 'failure', 'fault', 'sin' or 'error' with most Greek authors, but also includes 'bodily defect' and 'malady' as well as 'guilt', 'prone to error', 'erring in mind' and 'distraught'. In the four canonical Gospels, the term hamartia occurs three times in Matthew, all in contexts speaking of the forgiveness of sins. It occurs fourteen times in John, where it is likened to a form of blindness or incapacity and is connected to the ideas of forgiveness and non-condemnation. It occurs not at all in Mark or Luke. In the Acts and various letters there are about eighty occurrences. This distribution suggests that hamartia was perhaps a Gnostic term of reference, so far as the Gospels are concerned, and a point of interest or concern more to the disciples than it was to Jesus. Certainly he never speaks of hamartia in a harsh or violent manner. In subsequent history the Latin term peccatum, from the verb peccare, meaning 'to stumble', 'to commit a fault' and thus 'to sin', became the principal designation for sin in Christian theology. 
It is found, for example, in the formula of confession, "Peccavi," meaning "I have sinned." The Latin verb derives from peccus, 'stumbling', 'having an injured foot', itself from the comparative form pejor, 'worse', of the verbal root ped, meaning 'to fall'. This is the same root as the noun ped, 'foot', and traces to the Greek stem pod, 'foot', and the Sanskrit pada, 'foot', and padyate, 'he goes' or 'he falls'. The same family also produces the English 'pejorative', 'impair' and 'pessimism'. The enumeration of the seven deadly sins as specific categories of active moral transgression took place sporadically through the general development of Christian theology. While a popular notion in the patristic period, it did not gain a precise and permanent delineation, probably because of the open texture of theological disputation. In principle, the deadly sins are the causes of other and lesser forms of sin. They are fatal to spiritual progress. The distinction between mortal and venial sins is not a distinction of content such as separates the seven deadly sins from each other. Rather, as in the writings of St. Augustine, it is a juridical distinction of degree of gravity in any sinful act. Mortal sins are either sins serious in any instance or lesser sins so aggravated in their circumstance or degree of wilfulness as to become grave. Mortal sins involve spiritual death and the loss of divine grace. Venial sins are slight offenses against divine law in less important matters, or offenses in grave matters but done without reflection or without the full consent of the will. Actual sin is traceable to the will of the sinner, whereas original sin (peccatum originale) is an hereditary defect transmitted from generation to generation as a consequence of the choices made by the first members of the human race. The classification of sins was ordinarily, during the Middle Ages, part of a system of classification of virtues and vices. 
Whilst such efforts owed something to classical Greek ideas, they were also varied and distinctly Christian. In the twelfth century monastics like Bernard of Clairvaux and mystics like St. Hildegard of Bingen presented rich visionary descriptions of personified virtues and vices. Hildegard, in her Liber Vitae Meritorum, described "cowardly sloth": Ignavia had a human head, but its left ear was like the ear of a hare, and so large as to cover the head. Its body and bones were worm-like, apparently without bones; and it spoke trembling. She was also witness to the hellish consequences of various sins: I saw a hollow mountain full of fire and vipers, with a little opening; and near it a horrible cold place crawling with scorpions. The souls of those guilty of envy and malice suffer here, passing for relief from the one place to the other. Thus, through an array of boiling pitch, sulphur, swamps, icy rivers, tormenting dragons, fiery pavements, sharp-toothed worms, hails of fire and ice and scourges of sharpened flails, Hildegard traced out a catalogue of the varieties of sin and their consequences. With equal imagination, Alanus Magnus de Insulis, in his complex religious allegory Anticlaudianus, showed man protected by a host of more than a dozen virtues, clothed in the seven arts, and engaged in a complex struggle against a corresponding host of besetting sins and vices. Nature calls upon the celestial council of her sisters to aid in forming a perfect work. Led by Concord, they come forth to help -- Peace, Plenty, Favour, Youth, Laughter (banisher of mental mists), Shame, Modesty, Reason (the measure of the good), Honesty, Dignity, Prudence, Piety, Faith, Virtue and Nobility. Despite all this assistance, Nature can produce only the mortal, albeit perfect, body of man. The soul demands a divine artificer. 
Reason praises their plan to place a new Lucifer upon the earth to be the champion of all the virtues against vice, and he urges the celestial council to send an emissary to Heaven to request divine assistance. Prudence-Phronesis agrees to go, and Wisdom forms for her a chariot out of the seven arts: Grammar, Logic, Rhetoric, Arithmetic, Music, Geometry and Astronomy. Reason attaches the five senses to the chariot and then mounts it as its charioteer. He is able to bring Prudence-Phronesis to the gate of Heaven, but can go no further. There, Theology, the Queen of the Pole, takes Prudence into her care and conveys her, supported by Faith, into the Presence. She cannot bear the vision directly, but must look into a reflecting glass, wherein she adores and worships the eternal and divine All. Then she explains Nature's plight and asks for aid. Mind is summoned and ordered to fashion the new form and type of the human mind. Mind constructs the precious form in the reflecting glass, including in it all the graces of the patriarchs. Then the new form is ensouled and Prudence-Phronesis is entrusted with it. She returns in the chariot with Reason to the celestial council of Nature, where Concord unites the human mind with the mortal, though perfect, vesture formed by Nature. Unfortunately, when news of this new creature reaches Alecto in Tartarus, she is enraged. She summons the masters of every sin -- Injury, Fraud, Perjury, Theft, Rapine, Fury, Anger, Hate, Discord, Strife, Disease, Melancholy, Lust, Wantonness, Need, Fear and Old Age. She exhorts them to destroy this new creature who threatens their dominions. First, Folly -- accompanied by her helpers, Sloth, Gaming, Idle Jesting, Ease and Sleep -- attacks the man, but the virtues with which he is endowed repel the assault. So it goes until the final onslaught by Impiety, Fraud and Avarice, but the man, protected by all the virtues of Nature, by Reason and all its arts, and above all by his divine mind, prevails. 
Love and Virtue banish Vice and Discord, and the earth adorned by man springs forth in flowering abundance. With this, Alanus closes, observing that all good flows from the invisible and unmanifest source of All. The doctrinal structuring of this profusion of mystical and literary variety into a standardized set of seven deadly sins had begun earlier with St. Ambrose and St. Augustine, who spoke of pride, avarice, anger, gluttony and unchastity, as well as envy, vainglory, gloominess (tristitia) and indifference (acedia, from the Greek akedos, 'heedlessness'). It was Aquinas who, in his Summa Theologica, depicted a systematic series of seven specific virtues, coupled with corresponding gifts, and opposed by seven specific vices or sins. In this scheme there are three theological virtues -- fides, spes and caritas -- and four cardinal virtues -- prudentia, iustitia, fortitudo and temperantia. Fides, 'faith', is accompanied by the gifts of intellectus and scientia and opposed by the vices of infidelitas, haeresis, apostasia, blasphemia and caecitas mentis ('spiritual blindness'). Spes, 'hope', has timor as its corresponding gift and desperatio and praesumptio as its opposing vices. Caritas, 'charity', is accompanied by the gifts of dilectio, gaudium, pax, misericordia, beneficentia, eleemosyna and correctio fraterna. It is opposed by the vices of odium, acedia, invidia, discordia, contentio, skhisma, bellum, rixa, seditio and scandalum. Then comes the first of the purely moral cardinal virtues, prudentia, 'prudence', which is accompanied by the gift of consilium and opposed by the vices of imprudentia and neglegentia. Iustitia, 'justice', the second cardinal virtue, has as its general gift pietas and is opposed to iniustitia. It comprehends ten lesser virtues as its parts. 
First comes religio, enacted through devotio, oratio, adoratio, sacrificium, oblatio, decumae, votum and iuramentum, and opposed by superstitio, idolatria, tentatio Dei, periurium, sacrilegium and simonia. Second is pietas, 'piety', along with its opposite, impietas. Third is observantia, enacted through dulia, 'service', and oboedientia and opposed by inoboedientia. Fourth comes gratia and its opposite, ingratitudo. Fifth is vindicatio or 'punishment'. Sixth is veritas, 'truth', opposed by hypocrisis, iactantia, 'boasting', and ironia. Seventh is amicitia, coupled with the vices of adulatio and litigium. The ninth is liberalitas, and its vices are avaritia and prodigalitas. The tenth and last of these virtues subordinate to iustitia is epieikeia or aequitas. Then comes the third of the cardinal virtues, fortitudo, enacted through martyrium and opposed by the vices of intimiditas and audacia. Fortitudo has four subordinate parts -- magnanimitas, magnificentia, patientia and perseverantia -- each with the evident opposing vice. Finally, the fourth cardinal virtue, temperantia, 'temperance', has as its opposite, intemperantia, along with the lesser constituents verecundia, honestas, abstinentia, sobrietas, castitas, clementia, modestia and humilitas, each of these having in turn its own appropriate vice. Despite the complexity of this system, or perhaps because of it, it did not lead to a popular designation of the virtues and vices, although it endorsed the idea that the mystical number seven should be employed in enumerating the sins. When the King James translation of the Greek New Testament was done, the following terms emerged as the English names of the seven deadly sins: pride, covetousness, lust, anger, gluttony, envy and sloth. 1. 
Pride: From the Anglo-Saxon prut, 'proud'; the Old French prod, 'valiant', 'notable', 'loyal', as in prud'homme; the Late Latin prode, 'advantageous'; and the Latin prodesse, 'to be beneficial'; the compound pro + esse, literally 'to be before'. Pro, 'before', is from the Greek pro, 'before', 'ahead', and akin to the Sanskrit pra-, 'before', 'forward'. In Mark 7:22, huperephania, 'haughtiness', is spoken of as one of the things that come out of a man, thus polluting him. There are two other references to pride in the Epistles. 2. Covetousness: From the Old French coveitier, 'to desire'; the Latin cupiditas, 'desirousness', and cupere, 'to desire'; the Greek kapnos, 'smoke' (from which comes the Latin vapor, 'steam'); and the Sanskrit kupyati, 'he swells with rage', 'he is angry', having to do with smoking, boiling, emotional agitation and violent motion. In Mark 7:22, pleonexia, 'taking more than one's share', is included in the list of things that come out of a man, thereby polluting him. In Luke 12:15, the same term is used when Jesus points out that abundance in life does not arise from possessions. This and similar terms for covetousness occur about fifteen times in the non-Gospel portions of the New Testament. (The term 'avarice', which is now often preferred to 'covetousness', is not part of the vocabulary of the King James version. It is a Latin term, avaritia, 'covetousness', from the verb avere, 'to long for', 'to covet', and avidus, 'avid', related to the Greek enees, 'gentle', and the Sanskrit avati, 'he favours'. Similarly, 'greed', from the Gothic gredus, 'to hunger', and the Old English giernan, 'to yearn', and the Old Norse giarn, 'eager' or 'willing', is not a common term in the King James and does not occur at all in the four Gospels. Its Latin roots are horiri and hortari, 'to urge', 'to encourage' and 'to cheer', from the Greek khairein, 'to rejoice', or 'to enjoy', and the Sanskrit haryati, 'he likes' or 'he yearns for'.) 3. 
Lust: From the Anglo-Saxon lust, 'pleasure'; the Old Norse losti, 'sexual desire'; the Medieval Latin lasciviosus, 'wanton', 'lustful'; the Latin lascivus, 'wanton', originally 'playful' as applied to children and animals; the Greek laste, 'a wanton woman', lasthe, 'a mockery', and lilaiesthai, 'to yearn'; and the Sanskrit lasati, 'he plays', and lalasas, 'desirous'. There is no reference to lust in the four Gospels. However, the terms orexis, 'appetite', epithumetas, 'desire of the heart', and hedone, 'pleasure', occur about two dozen times in the Epistles, almost always in a negative context. 4. Anger: From the Old Norse angr, 'sorrow', 'distress', and angra, 'to grieve'; akin to Old English enge, 'narrow', and the Germanic angst and angust, 'anxiety'; the Latin angor, 'strangling', 'tight', 'anguished', and angere, 'to distress', 'to strangle'; the Greek agkhein, 'to squeeze', 'to embrace', 'to strangle'; and the Sanskrit amhas, 'anxiety'. There is one reference, in Mark 3:5, to orges, 'irritation', (on the part of Jesus) in the four Gospels. There are two other references to anger in the Epistles. 5. Gluttony: From the Middle English glotonie, 'gluttony'; the Middle French glotoier, 'to eat greedily'; the Old French gloton, 'a glutton'; the Latin glutto, 'a glutton', derived from gluttire, 'to swallow', from gula, 'the throat' or 'gullet' (see 'gullible'); and the Greek delear, 'a bait', and deleazo, 'to entice' or 'catch by bait'. In Matthew 11:19 and Luke 7:34, Jesus, contrasting the crowd's reactions to himself and John the Baptist, says that they regard him as a phagos, 'a glutton' or 'man given to eating' (unlike John, who neither ate nor drank). There is no other mention of gluttony in the New Testament. 6. 
Envy: From the Old French envie, 'envy'; the Latin invidere, 'to look at askance' or 'to see with malice', from in, a prefix connoting an intensification of the term modified, and videre, 'to look' or 'to see', hence 'to look intensively'; with the Latin root videre arising from the Greek eidos, 'form', and idea, 'appearance' or 'idea', and eventually the Sanskrit veda and vidya, expressing 'knowledge' and 'vision'. Both Matthew 27:18 and Mark 15:10 refer to the phthonon, 'envy' or 'ill-will', towards Jesus of the crowd that chose to have Barabbas freed instead of Jesus. There are a dozen references to envy in the non-Gospel portions. 7. Sloth: From the Middle English slowthe, 'sloth'; the Old English slaw, the Old Saxon sleu and the Old High German sleo, 'slow', 'dull' or 'blunt'; and perhaps allied to the Latin laevus and the Greek laios, 'the left', and the Sanskrit srevayati, 'he causes to fail'. In Matthew 25:26, Jesus uses the term okneros, 'shrinking' or 'hesitating', to refer, in the parable of the talents, to the man who hid his portion under the ground out of fear. There are two other references to sloth in the Epistles. (Among Catholic writers, the Late Latin Aquinan term acedia, 'sloth', is sometimes preferred to the Saxon term. Acedia stems directly from the Greek akedos, 'careless', from a, 'not', and kedos, 'care', 'grief' and 'anxiety', derived from the Avestan sadra, 'sorrow'.) Generally, there is no enumeration or theory of the seven deadly sins in the New Testament. Pride, covetousness, gluttony and sloth are the only ones mentioned directly by Jesus. Even these are passing single references. Of these four deadly sins, pride and sloth are each mentioned only a few times in the non-Gospel portions of the New Testament. Gluttony is totally neglected in the Epistles. Only covetousness seems to be a major concern, receiving mention in approximately twelve places. 
Anger and envy as such are not spoken of by Jesus at all, although they are mentioned in the Gospels. In the Epistles, however, envy is mentioned twelve times. Lust, which is not even mentioned in the Gospels, is referred to more than twenty-four times in the various Epistles. Overall, Jesus pays little direct attention either to sin or to the species of sin, whilst the disciples, particularly in the Epistles, draw a great deal more attention to sin and, in particular, lust, covetousness and envy. Such, at least, is the testimony of the Greek text of the New Testament as rendered in the King James Version.

It is at this point, where the seven deadly sins received their authoritative delineation in the English language, that their significance began to wane. The forces of the Renaissance and the Reformation initiated the fundamental moral mutation in European culture that led to modernity. The England of Queen Elizabeth gave way to the England of King James, and it was not so long from there to the Long Parliament. There and elsewhere people started to take a less sacrosanct view of sin and the seven deadly sins. Most important, the effort to reground morality independent of theological conceptions had taken root. It is not necessary here to go into the post-history of the notion of sin, which includes both the reaction against it as well as the effort to salvage some meaning out of it, and a great deal else. Rather, this is the point at which the structure of the concept should be examined, internally, in relation to what went before, and in relation to the present conception of violence.

[2] Hermes, November 1985

References

2.
http://theosophy.org/tlodocs/hermes.htm

From checker at panix.com Wed May 25 00:48:22 2005
From: checker at panix.com (Premise Checker)
Date: Tue, 24 May 2005 20:48:22 -0400 (EDT)
Subject: [Paleopsych] Richard Newhauser's Course Outline for The Seven Deadly Sins as Cultural Constructions in the Middle Ages
Message-ID:

Richard Newhauser's Course Outline for The Seven Deadly Sins as Cultural Constructions in the Middle Ages
http://www.trinity.edu/rnewhaus/outline.html

NEH Summer Seminar 2004
The Seven Deadly Sins as Cultural Constructions in the Middle Ages

OUTLINE

Introduction

There has been a great deal of attention devoted to the concepts of The Seven Deadly Sins recently in popular forms of discourse, from the new series of cultural studies at Oxford University Press and the outstanding articles on the sins by major novelists and poets published in The New York Times Book Review in 1993 to an hour-long special broadcast the same year on MTV and the feature-length film Seven of 1995. From elite culture to popular culture, in other words, the seven deadly sins have retained their interest as cultural constructions.

Gluttony - from Conflictus 'In Campo Mundi' (Budapest, Kegyesrendi Központi Könyvtár MS CX 2)

It is time to revisit them in research on the period of their greatest dissemination and utility, the Middle Ages. The most recent research on this topic, in fact, has allowed these seven concepts to emerge from a narrowly theological inquiry and to be seen, individually and as a series, in the same light as other historically defined objects of study central to the Humanistic endeavor. By focusing on the major cultural contexts in which the sins were defined, the seminar will seek to deepen the participants' appreciation for the ways in which the conception of morality in the Middle Ages was a response to varying cultural factors, and will make the study of the sins available for inclusion in the participants' regular college instruction.
The seminar has been designed so as to provide a location in which scholars in the Humanities and the Social Sciences can discuss the ways in which the culture of the Middle Ages constructed morality. The practical goals of the summer will be for me to assist the participants in their research project, and to make available to the participants in the seminar an exciting group of guest lecturers. The results of the seminar will, I hope, be disseminated in special sessions at the annual Medieval Congress in Kalamazoo and, possibly, in a volume of essays that I will edit. I am seeking to recruit participants drawn from a wide variety of fields: college teachers who specialize in anthropology, history, and the study of literature, as well as specialists in art history, sociology, and those with a primary focus on theology and ethics. From two years of residence at institutes for advanced study in America and Britain, I am aware that interaction among disciplines is an essential ingredient in achieving a reinvigorated educational environment that will bring a new vision to the relationship between the sins and the niches of culture that were of vital importance in the construction of these moral categories. The participants will be expected to have a basic knowledge and understanding of medieval culture and history in its broad outlines, not limited to their own fields of specialized research. It will be helpful, although not a prerequisite, if participants have a working knowledge of French and German, in particular, or other modern European languages in which the scholarship on the sins is regularly published. It will be especially useful if they can read Medieval Latin. Nevertheless, the majority of the medieval texts used in the seminar will be made available to the seminar in their original languages with an accompanying English translation. 
Intellectual Rationale

NEH support for the study of the seven deadly sins at American universities began in 1978 with an NEH Summer Seminar for College and University Teachers at the University of Pennsylvania, directed by Prof. Siegfried Wenzel. The present seminar seeks to follow this educational innovation and reinvigorate its content, in the current climate of ethical discourse in our culture, by taking maximum advantage of the unique manuscript, research, and human resources available at the University of Cambridge, England, and its institute for advanced study, Clare Hall.

The Study of the Seven Deadly Sins: From Dogma to Cultural Constructs

The seven deadly sins (pride, envy, wrath, avarice, sloth, gluttony, lust in their most frequent order) are sometimes thought of as inflexible categories of medieval dogma or, when they are found in examples of contemporary popular culture (such as the feature-length film Seven), as signifiers for something of an arcane perversion, a vehicle for an evil which is both mysterious and ancient. Such a view, of course, does not address the longevity of the idea of these seven constructs as comprehending the basic categories of evil in western culture. The very fact that even as this list of seven sins was being replaced by psychological, utilitarian, and other models of behavioral analysis it still could be adopted from Catholic to Protestant use during the Reformation, and further adopted for secular utilization both before and after that point, makes the seven sins a worthy object of cultural inquiry in the Humanities. Current research in the intellectual history of moral thought in the Middle Ages has demonstrated, moreover, how nuanced and differentiated the constructs actually were that came to be known as the seven deadly sins, how much their definition depended on a complex interaction with the cultural environments in which they were enumerated.
The most recent research on this topic, in other words, has allowed these seven concepts to emerge from a narrowly theological inquiry and to be seen, individually and as a series, in the same light as other historically defined objects of study central to the Humanistic endeavor. In this way, current research does not define the categories of the sins merely as theological entities, but rather as differentiated articulations of what can be called discrete forms of an interrupted actualization of socially accepted forms of desire. Parallel to this definition, the virtues can be understood as ideals of the socialization of desire. In the 19^th and earlier 20^th century, and primarily in German scholarship, the sins were studied in three main contexts: First, they were seen as part of the history of Catholic dogma on matters of moral theology, something which appears clearly in the sub-title of the major work on the sins and dogma in this period, the monograph by Otto Zöckler. Second, the origins of the sins became part of the historical study of monastic spirituality in Egypt, where established lists of evil thoughts (later reformulated as the sins) first appeared. The focus here was on the debt this aspect of Egyptian monasticism owed to both Hellenism and Early Christian literature.

Envy - from Conflictus 'In Campo Mundi' (Budapest, Kegyesrendi Központi Könyvtár MS CX 2)

Stefan Schiwietz's three-volume Das morgenländische Mönchtum, published between 1904 and 1938, is typical of endeavors in this second context, as is the monograph by Siegfried Wibbing. Third, the iconography of vices and virtues formed the subject of a number of studies of medieval art, in particular in the tradition of Prudentius's Psychomachia, such as one can find in Adolf Katzenellenbogen's classic monograph. The common factor in these studies is a tendency to examine their subject from structural and historical perspectives in which the content of the sins is imagined to be relatively stable.
Much of this earlier research was summarized and extended into the area of literary scholarship in 1952 in the monumental monograph by Morton Bloomfield, which not only was the first major American study of the sins, but also contributed a far more comprehensive view of the place of the sins in medieval culture that was also sensitive to some of the major changes in the composition of the lists of sins in response to varying cultural factors. Bloomfield's work proved highly influential in the educational context of American universities, in particular, but it also served as the starting point for the ongoing interest among subsequent European medievalists in this aspect of medieval moral thought. The publication in 1967 of Siegfried Wenzel's study of sloth and his fundamental article in Speculum the next year detailing problems in the history of the sins not addressed by Bloomfield's work set the agenda for much historiographical work to come. As a result, factors such as the place of the virtues in the comprehension of moral thought in the Middle Ages, the influence of Aristotle, and the genesis of rationales for the sins in Scholastic thought were the focus of some later work, such as the recent studies by Carla Casagrande and Silvana Vecchio. At the same time, the study of individual sins has been, and continues to be, advanced in work by Lester Little, Alexander Murray, or more recently Richard Newhauser on avarice; Mireille Vincent-Cassy on envy and sloth; and Pierre Payer or Ruth Karras on lust. Yet much scholarship of the last twenty years has also moved beyond an agenda in which the seven deadly sins are seen to function almost hegemonically in the environment of pastoral theology. John Bossy's important essay in 1988 articulated ways in which the seven sins were seen by late-medieval culture to be inadequate, a topic which was in some regards anticipated by Bloomfield's work, but not fully realized there. 
Likewise, analyses of other enumerations of morality in the Middle Ages, like Casagrande and Vecchio on the sins of the tongue, or Newhauser on the nine accessory sins, have called attention to the way in which cultural exigencies (such as the oral nature of preaching and confession) elicited a response that gives evidence of the flexibility of medieval moral thought. But recent scholarship has also begun to address topics and use methodologies that open the question of the cultural use of the sins to a more diverse analysis and call into question some of the assumptions of earlier scholarship. Barbara Rosenwein et al. on anger, for example, is deeply invested in the current debate on the use and construction of the emotions in historical research; Michael Theunissen has questioned the supposed historical break between the melancholy articulated in antique texts, sloth in the Middle Ages, and modernity's representation of depression. Other approaches to the delineation of the moral categories of the sins have adopted methods of psychological research (Patrick Boyde), or the findings of anthropology (Newhauser), or a gender studies perspective (Karras) to yield new insight into the ways in which cultures fill the categories of moral analysis with an ever-changing content. With so much recent attention focused on the sins, it seems that the time is right to revisit the content of a past NEH Summer Seminar with a new group of interested and engaged college teachers in order to reinvigorate the educational potential of the study of the seven deadly sins at American universities.

The Cultural Contexts of the Seven Deadly Sins

In order to allow the participants to clearly relate the presentation of the sins to a specific context, it is proposed here that the seminar focus on the locations of medieval moral thought and their interaction with the contents of the presentations themselves. It is in this way that one can speak of the sins as cultural constructions.
The following narrative will lay the foundation for the content and implementation of the project. The longevity and centrality of the seven sins testifies to the authoritativeness and versatility of what began as an element of monastic education. Their origin is found in the list of eight "evil thoughts" (gluttony, lust, avarice, wrath, sadness, sloth, vainglory, pride) that developed in the hermit communities of northern Egypt. These eight logismoi may have been common in the oral teaching of the Egyptian monks, but in written form they are found earliest in the Greek works of Evagrius Ponticus (c. 345-399).

Avarice - from Conflictus 'In Campo Mundi' (Budapest, Kegyesrendi Központi Könyvtár MS CX 2)

In the octad, Evagrius systematized the theory of demonic intrusions on the contemplative work of the anchorite so that the monk would be better armed to defeat the demons who used temptations to hinder his attainment of apatheia ("passionlessness"). John Cassian (c. 360-433/35) learned of the octad from Evagrius and made the order of logismoi in Evagrius's De octo spiritibus malitiae central to his Latin works written for cenobitic monasteries in Marseilles. Here, the "evil thoughts" were now termed vitia, each with a list of sub-sins to which it gives rise. Cassian emphasized the concatenation of the first six sins, a sequential relationship in which an excess of one vice becomes the foundation for the subsequent one. Vainglory and pride become dangerous precisely when the previous six have been extirpated. The ascetic orientation of these early monastic octads, written for communities of holy men, can be seen in the way control of bodily desires lays the foundation for the defeat of more spiritual temptations. Pope Gregory I (c. 540-604) synthesized Cassian's monastic thought with Augustine's view of sin as reflective of the will.
Using most of the octad's components, Gregory reversed the order of sins: what he explicitly called two "carnal sins" come after five spiritual ones, with pride serving as the root of all seven "principal vices": vainglory, envy, wrath, sadness, avarice, gluttony, lust (Moralia in Iob, 31.45.87-90). When pride itself was included in the list, the result could be understood as a variant of a sin octad, but in either case, Gregory considered these sins the origins of all sinfulness. The excesses of the ego depicted in the heptad's spiritual sins emphasize the importance of humility for Gregory as the central virtue of active obedience to authority within the community in moral, monastic, and secular political terms. Pride was most commonly (though not exclusively) considered the foundation of sinfulness in the early Middle Ages. Gregory asserts that the determination of intention in any act necessitates close examination of motives that may reveal a gap between the appearance of virtue and its origin in the impulses of vice. He presupposes, thus, a certain amount of moral ambiguity in any act. The heptad reflects an ideal of hierocratic ideology and social hierarchy. In the early Middle Ages many other presentations of vices and virtues were addressed to the needs of the nobility: Martin of Braga composed the very popular Formula vitae honestae (570-79), a treatise on the cardinal virtues, for the moral instruction of the Suevic King Miro and his court. Typical of the aristocrats directly involved in the ethical renewal of the Carolingian reforms is Wido, Margrave of the Marca Britanniae, to whom Alcuin addressed his influential Liber de virtutibus et vitiis (c. 800). The reformers emphasized ethica, the study of virtue leading to correct living, along with the liberal arts and logic as the disciplines of philosophy: Alcuin frames his treatise with systems of virtue (at the beginning, theological; at the end, cardinal virtues).
His compilation of scriptural and patristic texts as governing authorities served the further end of aiding uniformity in the Carolingian church, which allies his and other Carolingian and post-Carolingian treatises on moral theology with the genre of the florilegium. The internalization of concepts of the individual and spirituality in the late eleventh and twelfth centuries anchored moral theology in psychological processes. Hugh of St. Victor (1096-1141) reinterpreted concatenation as a description of developing sinfulness which began with the common classification of sin according to the subject it is directed against: pride removes the sinner from God, envy from his neighbor, wrath from himself. The last four sins marked stages in the sinner's descent into slavery to sin (De quinque septenis). The intended audience of this view of the sins now includes new classes of an urban population. The renewal of interest in Augustinian theology in the 12^th century marks a tendency to see caritas as the most important virtue instead of Gregory the Great's focus on humility. The Ethics of Peter Abelard (1079-1142) for the first time systematically analyzed the importance of intention and conscience, i.e., the inner disposition of each human being, in the determination of what constitutes vice and virtue. The interconnection between monastic theology and the developing "theology of the schools" produced many other presentations using the symmetry of vices and virtues (or related qualities, especially the gifts of the Holy Spirit). The Liber de fructu carnis et spiritus by Conrad of Hirsau (c. 1070-c. 1150) treats the Gregorian heptad and an opposed list (theological plus cardinal virtues) and was influential in the development of illuminations of matching trees of vices and virtues; Alan of Lille's De virtutibus et de vitiis et de donis spiritus sancti (c.
1170-80) examines the gifts of the Holy Spirit, defines the sins and their progeny, and makes the theological virtues a category of one of the cardinal virtues; the façade of Notre-Dame cathedral in Paris (early 13^th cent.) arranges personifications of virtues with roundels of exemplified sins in a way that summarizes types of representation of both. With the shift to a growing profit economy in the eleventh and twelfth centuries, treatments of avarice and its sub-sins (usury, illicit merchant practices, etc.) began to vie with pride more frequently in discussions of which sin is the root of all others. Early Scholastic literature began to treat sin and virtue within a wider approach to systematic theology, though attempts to adduce a theoretical rationale for a system of sins (generally as aberrations of the human will) produced any number of classifications of the sins: Peter Lombard's Sentences (c. 1150), the standard textbook for Scholastic education in theology, had suggested four: Augustine's distinction of sins by their origin in cupidity or fear; Jerome's classification of sins of thought, word, or deed; the distinction according to the subject sin is directed against; and the Gregorian heptad (Sent., 2.30-44). The seven sins could not easily be justified as the most important or the most serious sins. The phenomenology of sin and virtue became central to theology as a discipline, but this opened up new avenues of classification. The use of Aristotle's Nicomachean Ethics reinforced academic moral theology's move beyond hamartiology (and an interest in only seven chief sins) to instead become a theory of virtue, devoted to questions touching the divisions of the virtues (intellectual, moral, theological), their causes, and their interconnection. The importance of the sacrament of penance influenced frequent Scholastic attempts to distinguish between explicit violations of God's law (deadly sins) and acts that do not directly breach this law (venial sins).
The attempts to define venial sin also included the idea of a diminution of any inherent human sinfulness due in part to the imperfect nature of human intention or human knowledge, as one can see, for example, in "Le profit de savoir quel est péché mortel et véniel" and other works by Jean Gerson (d. 1429). The seven sins no longer suffice as a schematic organization of the multitude of errors that Gerson discusses, which in one treatise amount to 58 different kinds of deception by the devil (i.e., vices disguised as virtues). With an endless choice of feigned virtues that self-examination will expose as sins, Gerson's sinner has arrived at what has been described as a paralysis of the soul typical of a late-medieval guilt culture. An interest in the Jewish scriptures that had begun in the twelfth century, uneasiness with the lack of a biblical foundation for the capital vices, and a concern to bind morality into a juridical system resulted in the emergence of the Ten Commandments, especially among Franciscan theologians beginning with Duns Scotus, as the moral system that would be universally taught after the sixteenth century. In pastoral theology, art, and literature, the capital vice tradition remained dominant through the sixteenth century. The reforming efforts of the Church to control the content of catechesis by reinstructing congregations at all social levels in matters of the faith culminated in canon 21 of the Fourth Lateran Council (1215) that legislated confession for all Christians at least once a year.

Wrath - from Conflictus 'In Campo Mundi' (Budapest, Kegyesrendi Központi Könyvtár MS CX 2)

Many regional councils demanded that clergy preach on vices and virtues, as well. The examination of the conscience envisioned here included material specific to women and all classes of society. The question of how to organize the sins to be confessed and preached was answered very early by drawing on the capital vices, which now became the seven deadly sins.
Robert of Flamborough's early thirteenth-century Liber poenitentialis recommended the heptad precisely because the genetic relationship of the vices (and their progeny) facilitated confession. Eventually, the number of progeny was vastly expanded, but the basic classification of seven chief sins and their chief remedies remained, though often in tandem with other catechetical systems. The outpouring of penitential and homiletic texts treating vice and virtue, initially addressed to the clergy, was the work especially of the Dominicans and Franciscans (in particular in the cities), and it influenced the development of vernacular works on morality, now addressed to the urban laity. William Peraldus's widely transmitted Summa virtutum ac vitiorum (1236-1250), which played a seminal role in the development of the sins of the tongue, influenced important vernacular treatments of the vices and virtues such as Friar Laurent of Bois' Somme le roi (1280) and (indirectly) Chaucer's "Parson's Tale" (late 14^th century). The recognition of the cognitive value of images for educating pious Christians drew in the late Middle Ages on the intersection between pastoral literature, the natural-philosophical understanding of animals, and traditional moral iconography to produce emblematic presentations of the vices and virtues in many media and for many functions, from supporting the Benedictine reform to promoting civic ethics (as in the Regensburg tapestry of the vices and virtues, c. 1400). The confluence of pastoral literature and a high degree of emblematic iconography also characterizes late-medieval and Renaissance literary treatments of the vices and virtues, from Dante's (1265-1321) Divine Comedy to morality plays.

Content and Implementation of the Project

The weekly work of the seminar will generally be conducted in meetings at Clare Hall, University of Cambridge, from 9:30 a.m. to noon, on Monday, Wednesday, and Thursday. Excursions have also been planned to St.
Mary's Church, Hardwick (near Cambridge), and to important manuscript collections in Cambridge itself: St. John's College, the University Library, and the Fitzwilliam Museum. In addition to attending the sessions of the seminar, the participants will be able to use the facilities of the Cambridge University Library to work on their research project and they will be able to consult me about their work during regularly scheduled individual conferences. On part of Wednesday and on Thursday of the last week of the seminar the participants will give a brief presentation of the initial results of their research. During each week, the work of the seminar will focus on a series of readings that grew out of a particularly important cultural location in the history of the sins. Sections of these texts, with English translations wherever possible, will be distributed to the participants at the opening of the seminar. During each week, the work of the seminar will emphasize one or more of the sins, not to the exclusion of others, but so as to concentrate the analysis of the connection between the specific context for that week and the content of the sin or sins. In this way, the flexibility of the category of moral thought will emerge as it adjusts to changes in cultural contexts.

Week One: Desert and Monastery (July 12-15). Visiting faculty: Prof. Ian Goodyer

The opening session of the seminar on Monday will be taken up with individual meetings between me and each participant in order to discuss the independent project s/he will work on for the summer and the anticipated outcomes. Each participant will schedule at least one more individual conference with me during the third week of the seminar. The first formal session on Wednesday will briefly characterize the kinds of interactions between contexts and content that the seminar will deal with in greater detail for the next weeks.
For the early monks, sloth and tristitia (sadness) were particularly characteristic sins, being defined in ways that made them especially responsive to the situation of monks engaged in the intensive work of meditation. These sins, then, will be the focus of the first week, illustrated by treatises by Evagrius Ponticus (De malignis cogitationibus, Praktikos, etc.) and John Cassian (Collationes patrum and De institutis coenobiorum), in particular, with Gregory the Great (Morals on Job) serving as a transition to the work of the next week and Conrad of Hirsau (Liber de fructu carnis et spiritus) providing a view of some of the continuity of monastic thought. We will use Siegfried Wenzel, The Sin of Sloth: Acedia in Medieval Thought and Literature (Chapel Hill, NC, 1967), and more recent work by Rüdiger Augst and Gabriel Bunge to understand sloth as a monastic vice. With the visit of Professor Ian Goodyer during the first week, we will also begin to develop a psychological model for the analysis of these two sins, in particular, based on the psychiatric analysis of adolescent depression, a field in which Professor Goodyer is an internationally renowned expert. His articles on this subject have appeared in such publications as the Journal of Psychosomatic Research, Psychological Medicine, and the Journal of Child Psychology and Psychiatry. [1] (bibliography)

Week Two: Court (July 19-22). Visiting faculty: Prof. David Ganz

The focus of the seminar during the second week will be the medieval court as a factor in defining aristocratic morality. In pride, medieval moral thought identified a sin considered particularly characteristic of noblemen and represented pictorially in outlandish clothing.
Wrath, however, was also frequently attributed to the nobility, though as we will see by reading the works of Prudentius (Psychomachia), Martin of Braga (Formula vitae honestae, De ira, and De superbia), and Alcuin (Liber de virtutibus et vitiis), the potential ambiguity of moral designations can also be seen here. Wrath, in fact, was a defining characteristic of masculinity, as Richard Barton has shown in Anger's Past: The Social Uses of an Emotion in the Middle Ages, ed. Barbara Rosenwein (Ithaca and London, 1998), and as much praised for its God-like qualities as condemned for its irrationality. The dilemma here is obvious, for if overweening ego was to be condemned as pride, God-like wrath could be praised only with difficulty. The seminar will explore the ways in which the tensions here respond to tensions in the court, especially in the Carolingian period. David Ganz, Professor of Latin Paleography, Department of English and Classics, King's College, University of London, will open the discussion of the iconography of the vices by presenting the participants with some important manuscripts at the Fitzwilliam Museum. Professor Ganz's work on early-medieval libraries and the intellectual history of the Carolingian period is well known; his work has appeared in the New Cambridge Medieval History and in a wide series of journals and collections of essays. [2] (bibliography)

Week Three: University (July 26-29): Visiting faculty: Dr. István Bejczy, Dr. Paul Binski

With the growth of the universities one can note a tendency to refashion moral thought as a theology of virtue rather than an analysis of sin, as can be observed fully formed in the sections of Aquinas's Summa theologiae the seminar will read in the third week. Dr. István Bejczy will emphasize this fact in his lecture. Dr.
Bejczy's expertise in 12^th-century moral theology that stems from his current project directing six research fellows in the production of a multi-volume history of the cardinal virtues will benefit the participants in the seminar in the examination of the origins of academic theology and the ways in which academic discourse shaped the connection between vices and virtues. Partially as a pedagogic device, and partially as a presentation of mental acuity, parallel lists of sins and their virtuous opponents were drawn up by authors like Hugh of St. Victor (De quinque septenis) and Alan of Lille (De virtutibus et de vitiis et de donis spiritus sancti). At the same time, the importance of intention central to Peter Abelard's work (especially his Ethics) raises the question that anthropologists ask about the determination of another person's inner state, namely what right is invoked to allow the interpreter to say s/he has seen the truth of that inner state (we will use an English translation of my essay "Zur Zweideutigkeit in der Moraltheologie. Als Tugenden verkleidete Laster," in P. von Moos, ed., Der Fehltritt. Vergehen und Versehen in der Vormoderne [Köln, Weimar, Wien, 2001], pp. 377-402). Finally, the interest in demonology witnessed in the work of Peter Lombard, for example, will be seen to influence the conception of envy. Dr. Paul Binski, Reader in the History of Medieval Art, University of Cambridge, will provide a transition to 13th-century English art and the era of reform by focusing on the iconography of the sculptures of the Salisbury chapter house, the king's chamber at Westminster, the Apocalypses, and such pastoral texts as the Somme le Roi. Dr. Binski's studies of Westminster Abbey and the Plantagenets: Kingship and the Representation of Power 1200-1400 (Yale Univ. Press, 1995) and the forthcoming Art and Authority in the Age of Becket (Yale Univ.
Press, 2004) have established him as a leading scholar of the history of medieval English art. [3] (bibliography) Week Four: Church and Pastoralia (August 2-5): Visiting faculty: Dr. Richard Beadle, Dr. Sylvia Huot Corporeal sins had long been separated from sins considered "spiritual" in moral thought, but the material for the fourth week's work will allow us to examine the class distinctions implicated in some of the ways the sins of the flesh were presented to penitents, especially in the popular work of William Peraldus (Summa de vitiis et virtutibus) and the vernacular treatises in English and French to be read for this week (Geoffrey Chaucer, "The Parson's Tale"; Jean Gerson, "Le profit de savoir quel est péché mortel et véniel"). Both lords and peasants drank to excess and had erotic experiences, but they did not get drunk or have intercourse in the same way. The kinds of wine the aristocracy drank were different from what the commoners could afford; the kind of debauchery considered typical of the nobility was also different from peasant indiscretions. We will be visited this week by Dr. Richard Beadle, Reader in English Literature and an internationally known expert on English manuscripts of the later Middle Ages, whose edition of the York Mystery Plays is an essential tool for teaching early English drama, and Dr. Sylvia Huot, Reader in Medieval French Literature, whose work on the manuscripts and interpretation of the Romance of the Rose will guide us in understanding this important Old French treatment of courtly and moral values. Both guest lecturers will help the seminar focus on the question of whose interests were served by insisting on class distinctions in moral analysis. Both speakers will also continue with the presentation of manuscripts with vernacular pastoral works in Cambridge libraries. During Dr. Beadle's presentation, we will meet in the Old Library at St.
John's College, one of the outstanding examples of college library architecture in Cambridge. During Dr. Huot's presentation, we will meet in the Fitzwilliam Museum. [4] (bibliography) Week Five: City (August 9-12): Visiting faculty: Dr. Nigel Harris The final week will return the seminar to the topic of sloth, but now conceived in connection with, or at a distance from, economic activity. Work, as a profit-making activity in late-medieval cities, will be the focus of this week's seminar study, both in its relationship with leisure and in its problematic connection with avarice. Both John Gower (Confessio Amantis) and Dante, in particular, are conscious of the ways in which the wealth of cities is the result of the acquisitive urge. They can both condemn avarice, but they also evince ways of valorizing the search for profit as necessary for the functioning of the city and essential to their literary well-being. Work by the historian John Bossy and the Dante scholar Patrick Boyde will help elucidate the ways in which new formulations of sins and virtue were beginning to replace the seven deadly sins as schematic presentations of morality. The guest lecture by Dr. Nigel Harris, Senior Lecturer in German Studies, University of Birmingham, will focus the attention of the seminar on the ways in which urban politics in the Austrian/Bavarian area influenced the production of a civic ethics that also empties avarice of some of its onerousness. Dr. Harris is the editor of a major text on the vices produced in late-medieval Austria, the Etymachia, a battle of personified vices and virtues. [5] (bibliography) Project Faculty and Staff Director A word about myself: much of my scholarship has been focused on the history of the virtues and vices from late antiquity to the early modern period.
I am a former Visiting Distinguished Professor in the History Department at the Katholieke Universiteit, Nijmegen, Holland; a Fellow at the National Humanities Center; the holder of two NEH Summer Stipends; the recipient of a Guggenheim Fellowship and a Fellowship from the ACLS; and a Life Member of Clare Hall, University of Cambridge. I regularly teach a seminar for undergraduates at Trinity University entitled "Sins and Sinners in Western Culture" that examines changing understandings of morality in the cultural contexts of the Middle Ages and beyond. My publications have focused in particular on the genre of treatments of the vices (The Treatise on Vices and Virtues in Latin and the Vernacular, Typologie des sources du moyen âge occidental, 68 [1993]), the sin of avarice (The Early History of Greed [Cambridge University Press, 2000]), and curiosity as a vice (articles in Deutsche Vierteljahrsschrift and a number of collections of essays). My complete vita can be found online at [6]http://www.trinity.edu/rnewhaus/vita.html. Visiting Instructors The seminar seeks to encourage interaction between the American participants and some of the finest European scholars working on questions of medieval moral constructs and other experts whose work will help to revitalize the study of the sins in the Middle Ages. A number of these scholars have appointments at the University of Cambridge; others are only a short distance away. These visiting instructors will provide invaluable lectures for the seminar and join the participants for lunch and informal conversation after the seminar. Dr. Ian Goodyer, MD, Fellow, Wolfson College, University of Cambridge, and Professor of Child and Adolescent Psychiatry, Department of Psychiatry, University of Cambridge. An internationally renowned expert on adolescent depression, Prof. Goodyer's work will aid the seminar in constructing a paradigm for the psychological analysis of sloth and tristitia (sadness), in particular.
Dr. David Ganz, Professor of Latin Paleography, Department of English and Classics, and Director, Centre for Late Antique and Medieval Studies, King's College, University of London. Prof. Ganz's knowledge of Carolingian ethics and important early manuscripts in the Fitzwilliam Museum in Cambridge will give the participants direct contact with important primary sources for the study of the sins. Dr. Sylvia Huot, Fellow, Pembroke College, University of Cambridge, and Reader in Medieval French Literature, University of Cambridge. Dr. Huot's expertise in medieval French literature of all types and illuminated manuscripts of French origin in Cambridge libraries will deepen the participants' understanding of pictorial representations of the sins and the breadth of their transmission. Dr. Richard Beadle, Fellow, St. John's College, University of Cambridge, and Lecturer in English, University of Cambridge. Dr. Beadle is an internationally known expert on English manuscripts of the later Middle Ages and will provide the participants with an insider's look at the English texts on the sins in the manuscript and rare book collection at St. John's College. Dr. Paul Binski, Fellow, Gonville and Caius College, and Reader in the History of Medieval Art, University of Cambridge, Department of History of Art. Dr. Binski is one of the leading authorities on English art in the high and later Middle Ages. He will help the seminar place groups of virtues, vices, and remedies in the context of 13th-century English art and the era of reform. Dr. Nigel Harris, Senior Lecturer, Department of German Studies, University of Birmingham. Dr.
Harris's expertise in the interaction between lay and clerical groups in Bavaria and Austria during the later Middle Ages will help make comprehensible the ways in which an urban environment and urban politics influenced the content and presentation of pastoral theology. Dr. István Bejczy, Senior Researcher, Department of History, Katholieke Universiteit, Nijmegen, Holland. Dr. Bejczy's expertise in 12th-century moral theology, which stems from his current project directing six research fellows in the production of a multi-volume history of the cardinal virtues, will benefit the participants in the seminar in the examination of the origins of academic theology and the ways in which academic discourse shaped the connection between vices and virtues. Selection of Participants Selection Process and Criteria Applications will be evaluated by a committee composed of Prof. Richard G. Newhauser and Prof. Siegfried Wenzel, Emeritus, Department of English at the University of Pennsylvania. The committee will, of course, pay close attention to the résumé and the quality of the essay which forms part of the application that each potential participant submits, but at the same time it will attempt to create a diverse group with a wide range of disciplines and interests in order to achieve a cross-fertilization of ideas. The committee will be particularly interested in supporting participants who plan to incorporate the outcomes of the seminar in their teaching or design new courses around the seven deadly sins. Institutional Context: Clare Hall, University of Cambridge The physical setting of the seminar could hardly be more conducive to study and research.
The staff member of the seminar at Clare Hall will help register the participants at the University Library so they will have reading privileges at the college and faculty libraries associated with the University of Cambridge and at the University Library itself, one of the finest open-stacks libraries in the world (open 9:00 a.m. to 7:00 p.m. on weekdays and from 9:00 a.m. to 5:00 p.m. on Saturday). Participants will be especially encouraged to become familiar with the University Library's special collections rooms for manuscripts and early printed books. Both have been newly renovated. Clare Hall, the seminar's home, has been an independent college for graduate education in the University of Cambridge since it received a Royal Charter from Queen Elizabeth II in 1984. It is located on spacious grounds along Herschel Road directly behind the University Library and a brief walk from the Sidgwick Arts site, which houses the faculties of Religion, Modern Languages, English, History, and Anthropology. The college has at any one time over 150 graduate students from 40 countries and a large number of visiting fellows, so that it serves as Britain's most active institute for advanced study. It is, thus, an ideal location for a Summer Seminar because of its ongoing intellectual environment. The college also has modest sports facilities: a swimming pool, a gym, and, my favorite, access to squash courts. If participants are interested in punting on the Cam, the college has a punt which can be rented. More detailed information about the college is available at [7]http://www.clarehall.cam.ac.uk/. Participants will be able to use the college computer rooms for Internet access, printing, word processing, and e-mail (for which they will need an Internet-accessible e-mail account). The computer labs are fully equipped with PCs and Macs and are generally accessible 24 hours per day.
We have also been granted the use of the Robert Honeycombe Building on the grounds of Clare Hall as a dormitory for most of the participants in the seminar. It consists of 13 student rooms: 12 singles and 1 double. The rooms are not en suite but have good bathroom facilities on each floor. There is also a kitchen and common room with a television. The communal areas will be cleaned daily and the bedrooms will be cleaned and linens changed once a week. These accommodations will cost £500 (currently about $850) per room for the five weeks of the seminar. Two participants will be able to share a two-bedroom flat in the Gillian Beer House on the grounds of the college. Here, too, kitchen facilities and a weekly linen service are included in the price of £1400 for the flat for the five weeks of the seminar (£700 per person [currently about $1,190]). Neither of these accommodations is suitable for families. While there is an organization in Cambridge that can locate accommodations for families in the city, these will be considerably more expensive than what Clare Hall has offered us for individual participants. The college provides meals on all weekdays at the following costs: lunch £9.50; dinner £9.50; Wednesday dinner £15.00 (served with wine). I will be making arrangements with scholars who are in residence in Cambridge to have informal lunches with us once or twice per week during the seminar, but these lunches will be held in the house I will rent for the summer at Clare Hall or in the Robert Honeycombe Building. Stipends Participants will receive a stipend of $3,250. Since the seminar will be held overseas, each participant will receive a check for this full amount before the seminar begins. Application Information Application information can be found [8]here.
Completed applications should be postmarked no later than March 1, 2004, and should be addressed as follows: Richard Newhauser Director, NEH Summer Seminar 2004 Department of English Trinity University One Trinity Place San Antonio, TX 78212-7200 Applications can also be sent to me as an e-mail attachment formatted in Microsoft Word addressed to: [9]rnewhaus at trinity.edu. The most important part of the application is the essay that must be submitted as part of the complete application. This essay should include any personal and academic information about you that is relevant to your application; your reasons for applying to this particular project; your interest, both intellectual and personal, in the topic; your qualifications to do the work of the project and make a contribution to it; what you hope to accomplish by participation, including any individual research and writing projects; and the relation of the study to your teaching. Preliminary Bibliography I. Primary Presentations a. Desert and Monastery Cassian, John. Collationes patrum. Edited and translated by E. Pichery. 3 vols. SC 42 [New ed., 1966], 54 [New ed., 1967], 64. Paris: Cerf, 1955-1959. ---. De institutis coenobiorum. Edited and translated by Jean-Claude Guy. SC 109. Paris: Cerf, 1965. Conrad of Hirsau. Liber de fructu carnis et spiritus. PL 176:997-1006. Evagrius Ponticus. De malignis cogitationibus. In Évagre le Pontique: Sur les pensées, edited by Paul Géhin, Claire Guillaumont, and Antoine Guillaumont. SC 438. Paris: Cerf, 1998. ---. Praktikos. In Évagre le Pontique: Traité pratique ou le moine, edited and translated by Antoine Guillaumont and Claire Guillaumont. 2 vols. SC 170-71. Paris: Cerf, 1971. ---. De octo spiritibus malitiae. PG 79:1145A-64D. ---. De vitiis quae opposita sunt virtutibus. PG 79:1139-44. Gregory the Great. Moralia in Iob. Edited by M. Adriaen. 3 vols. CCSL 143-143B. Turnhout: Brepols, 1979-1985. b. Court Alcuin. Liber de virtutibus et vitiis. PL 101:613-38. Martin of Braga.
Formula vitae honestae, De ira, and De superbia. In Martini episcopi Bracarensis opera omnia, edited by C. W. Barlow, 236-50, 150-58, 69-73. Papers and Monographs of the American Academy in Rome, 12. New Haven: Yale Univ. Press, 1950. Prudentius. Psychomachia. In Aurelii Prudentii Clementis carmina. Edited by M. P. Cunningham. CCL 126. Turnhout: Brepols, 1966. c. University Abelard, Peter. Peter Abelard's Ethics: An Edition with Introduction. Edited by D. E. Luscombe. Oxford: Clarendon, 1971. Alan of Lille. Liber poenitentialis. Edited by Jean Longère. 2 vols. Analecta Mediaevalia Namurcensia, 17-18. Louvain: Editions Nauwelaerts, 1965. ---. De virtutibus et de vitiis et de donis spiritus sancti. Edited by Odo Lottin. In Psychologie et morale aux XIIe et XIIIe siècles, 6:27-92. Gembloux: J. Duculot, 1960. Aristotle. Ethica Nicomachea. Translated by Robert Grosseteste. Edited by René-Antoine Gauthier. In Aristoteles Latinus, vol. 26/3. Leiden: Brill, 1972. Hugh of St. Victor. De quinque septenis. In H. de Saint-Victor. Six opuscules spirituels. Edited by R. Baron, 100-18. SC 155. Paris: Cerf, 1969. Peter Lombard. Magistri Petri Lombardi Parisiensis episcopi Sententiae in IV libris distinctae. 2 vols. 3rd ed. Spicilegium Bonaventurianum, 4-5. Grottaferrata: Collegium S. Bonaventurae, 1971-1981. Thomas Aquinas. Summa theologiae. 9 vols. In Sancti Thomae Aquinatis doctoris angelici opera omnia iussu impensaque Leonis XIII P. M. edita, vols. 4-12. Rome: Typographia Polyglotta S. C. de Propaganda Fide, 1888-1906. d. Church and Pastoralia Alexander Carpenter. Destructorium viciorum. Cologne, 1485; Paris, 1521. Chaucer, Geoffrey. "The Parson's Tale." In The Riverside Chaucer. Edited by L. D. Benson et al., 288-328. 3rd ed. New York: Houghton Mifflin, 1987. Gerson, Jean. "Le profit de savoir quel est péché mortel et véniel." In Jean Gerson. Œuvres complètes. Edited by Palémon Glorieux, 7/1:370-89. Paris: Desclée, 1966. Hugh Ripelin of Strasbourg. Compendium theologicae veritatis.
In Albertus Magnus, Opera omnia, edited by S. C. A. Borgnet, 34:1-261. Paris: Vives, 1899. Lavynham, Richard. A Litil Tretys on the Seven Deadly Sins. Edited by J. P. W. M. van Zutphen. Rome: Institutum Carmelitanum, 1956. Mannyng, Robert. Handlyng Synne. Edited by F. J. Furnivall. EETS os, 119. London: Kegan Paul, Trench, Trübner & Co., 1901. Peraldus, William. Summa virtutum ac vitiorum Guilhelmi Paraldi Episcopi Lugdunensis de ordine predicatorum. Paris: Johannes Petit, Johannes Frellon, Franciscus Regnault, 1512. Robert of Flamborough. Liber poenitentialis. Edited by J. J. Francis Firth. Studies and Texts, 18. Toronto: Pontifical Institute of Mediaeval Studies, 1971. Ps.-Vincent of Beauvais. Speculum morale. In Vincentii Burgundi Speculum quadruplex sive speculum maius, vol. 3. Douai: Ex officina Typographica Baltazaris Belleri, 1624. Reprint, Graz: Akademische Druck- und Verlagsanstalt, 1964. e. City Berthold of Regensburg. Berthold von Regensburg: Vollständige Ausgabe seiner Predigten. Edited by Franz Pfeiffer and Joseph Strobl. 2 vols. Vienna: W. Braumueller, 1862-80. Reprint, Berlin: de Gruyter, 1965. Book for a Simple and Devout Woman: A Late Middle English Adaptation of Peraldus's "Summa de vitiis et virtutibus" and Friar Laurent's "Somme le Roi." Edited by F. N. M. Diekstra. Mediaevalia Groningana, 24. Groningen: Egbert Forsten, 1998. Dante Alighieri. The Divine Comedy. Edited and trans. by John D. Sinclair. 3 vols. New York: Oxford Univ. Press, 1939-1946. The Latin and German "Etymachia": Textual History, Edition, Commentary. Edited by Nigel Harris. MTU 102. Munich: Beck, 1994. Gower, John. Confessio Amantis. In The English Works of John Gower, edited by G. C. Macaulay. 2 vols. EETS es, 81-82. London: Kegan Paul, Trench, Trübner & Co., 1900-1. Notre-Dame Cathedral. Paris. Peter the Chanter. Verbum abbreviatum. PL 205:21-528A. Tapestry of the Vices and Virtues. Regensburg, Historisches Museum. II. Secondary Works a.
General Works on the Vices Bloomfield, Morton W. The Seven Deadly Sins: An Introduction to the History of a Religious Concept, with Special Reference to Medieval English Literature. [East Lansing, MI:] Michigan State Univ. Press, 1952. Reprint, 1967. Bossy, John. "Moral Arithmetic: Seven Sins into Ten Commandments." In Conscience and Casuistry in Early Modern Europe. Edited by Edmund Leites, 214-34. Cambridge: Cambridge University Press; Paris: Editions de la maison des sciences de l'homme, 1988. Boyde, Patrick. Human Vices and Human Worth in Dante's "Comedy." Cambridge, Eng.: Cambridge Univ. Press, 2000. Casagrande, Carla, and Silvana Vecchio. "La classificazione dei peccati tra settenario e decalogo (secoli XIII-XV)." Documenti e studi sulla tradizione filosofica medievale 5 (1994): 331-95. ---. I sette vizi capitali: Storia dei peccati nel Medioevo. Saggi, 832. Turin: Giulio Einaudi, 2000. ---. "Péché." In Dictionnaire Raisonné de l'Occident Médiéval. Edited by Jacques Le Goff and Jean-Claude Schmitt, 877-91. Paris: Fayard, 1999. Delumeau, Jean. Sin and Fear: The Emergence of a Western Guilt Culture, 13th-18th Centuries. Trans. Eric Nicholson. New York: St. Martin's Press, 1990. Howard, Donald R. The Three Temptations: Medieval Man in Search of the World. Princeton: Princeton Univ. Press, 1966. Huizinga, Johan. The Autumn of the Middle Ages. Translated by Rodney Payton and Ulrich Mammitzsch. Chicago: Univ. of Chicago Press, 1996. In the Garden of Evil: The Vices and Culture in the Middle Ages. Edited by Richard G. Newhauser. Forthcoming. Jehl, Rainer. "Die Geschichte des Lasterschemas und seiner Funktion." Franziskanische Studien 64 (1982): 261-359. Kent, Bonnie. Virtues of the Will: The Transformation of Ethics in the Late Thirteenth Century. Washington, D. C.: Catholic Univ. of America Press, 1995. Kroll, Jerome, and Bernard Bachrach. "Sin and Mental Illness in the Middle Ages." Psychological Medicine 14 (1984): 507-14. Lottin, Odon.
Psychologie et morale aux XIIe et XIIIe siècles. 6 vols. Gembloux: J. Duculot, 1942-1960. [Vol. 1. 2nd ed. 1957]. MacIntyre, Alasdair C. After Virtue: A Study in Moral Theory. 2nd ed. Notre Dame, IN: Univ. of Notre Dame Press, 1984. Markus, Robert A. The End of Ancient Christianity. Cambridge, Eng. and New York: Cambridge Univ. Press, 1990. Newhauser, Richard G. The Treatise on Vices and Virtues in Latin and the Vernacular. Typologie des sources du moyen âge occidental, 68. Turnhout: Brepols, 1993. ---. "Zur Zweideutigkeit in der Moraltheologie: Als Tugenden verkleidete Laster." In Der Fehltritt: Vergehen und Versehen in der Vormoderne. Edited by Peter von Moos, 377-402. Norm und Struktur, 15. Cologne, Weimar, and Vienna: Böhlau, 2001. Solignac, Aimé. "Péchés capitaux." In Dictionnaire de Spiritualité 12/1:853-62. Tentler, Thomas. Sin and Confession on the Eve of the Reformation. Princeton: Princeton Univ. Press, 1977. Tuve, Rosamond. "Notes on the Virtues and Vices." Journal of the Warburg and Courtauld Institutes 26 (1963): 264-303; 27 (1964): 42-72. Utley, Francis L. "The Seven Deadly Sins: Then and Now." Indiana Social Sciences Quarterly 25 (1975): 31-50. Wenzel, Siegfried. "The Seven Deadly Sins: Some Problems of Research." Speculum 43 (1968): 1-22. Zöckler, Otto. Das Lehrstück von den sieben Hauptsünden: Beiträge zur Dogmen- und zur Sittengeschichte, insbesondere der vorreformatorischen Zeit. In O. Zöckler. Biblische und kirchenhistorische Studien, 3. Munich: Beck, 1893. b. Individual Vices or Systems of Vice Anger's Past: The Social Uses of an Emotion in the Middle Ages. Edited by Barbara Rosenwein. Ithaca and London: Cornell Univ. Press, 1998. Augst, Rüdiger. Lebensverwirklichung und christlicher Glaube: Acedia: Religiöse Gleichgültigkeit als Problem der Spiritualität bei Evagrius Ponticus. Saarbrücker theologische Forschungen, 3. Frankfurt am Main: Peter Lang, 1990. Bunge, Gabriel. Akedia: Die geistliche Lehre des Evagrios Pontikos vom Überdruß. 4th rev. ed.
Würzburg: Der Christliche Osten, 1995. Cadden, Joan. " 'Nothing Natural Is Shameful': Vestiges of a Debate about Sex and Science in a Group of Late-Medieval Manuscripts." Speculum 76 (2001): 66-89. Casagrande, Carla, and Silvana Vecchio. I peccati della lingua: disciplina ed etica della parola nella cultura medievale. Rome: Istituto della Enciclopedia Italiana, 1987. Karras, Ruth Mazo. "Two Models, Two Standards: Moral Teaching and Sexual Mores." In Bodies and Disciplines: Intersections of Literature and History in Fifteenth-Century England, edited by Barbara A. Hanawalt and David Wallace, 123-38. Medieval Cultures, 9. Minneapolis: Univ. of Minnesota Press, 1996. Little, Lester K. "Pride Goes before Avarice: Social Change and the Vices in Latin Christendom." The American Historical Review 76 (1971): 16-49. Markus, Robert A. "De civitate Dei: Pride and the Common Good." Collectanea Augustiniana 1 (1990): 245-59. Murray, Alexander. Reason and Society in the Middle Ages. Oxford: Clarendon, 1978. Newhauser, Richard G. "From Treatise to Sermon: Johannes Herolt on the novem peccata aliena." In De ore domini: Preacher and Word in the Middle Ages. Edited by T. L. Amos et al., 185-209. Studies in Medieval Culture, 27. Kalamazoo, MI: Medieval Institute Press, 1989. ---. The Early History of Greed: The Sin of Avarice in Early Medieval Thought and Literature. Cambridge Studies in Medieval Literature, 41. Cambridge, Eng.: Cambridge Univ. Press, 2000. Payer, Pierre. The Bridling of Desire: Views of Sex in the Later Middle Ages. Toronto: Univ. of Toronto Press, 1993. Nani, T. Suarez. "Du goût et de la gourmandise selon Thomas d'Aquin." In I cinque sensi / The Five Senses. Micrologus, 10. Florence: Edizioni del Galluzzo, SISMEL, 2002. In press. Theunissen, Michael. Vorentwürfe der Moderne: Antike Melancholie und die Acedia des Mittelalters. Berlin and New York: de Gruyter, 1996. Vincent-Cassy, Mireille. "L'Envie en France au Moyen Age." Annales E.S.C. 35 (1980): 253-71. ---.
"Quand les femmes deviennent paresseuses." In Femmes: Mariages-Lignages, XIIe-XIVe siècles: Mélanges offerts à Georges Duby, 431-47. Bibliothèque du Moyen Age, 1. Bruxelles: De Boeck Université, 1992. Wenzel, Siegfried. The Sin of Sloth: Acedia in Medieval Thought and Literature. Chapel Hill, NC: Univ. of North Carolina Press, 1967. ---. "The Three Enemies of Man." Mediaeval Studies 29 (1967): 47-66. c. The Vices in Art Baumann, Priscilla. "The Deadliest Sin: Warnings Against Avarice and Usury on Romanesque Capitals in Auvergne." Church History 59 (1990): 7-18. Blöcker, Susanne. Studien zur Ikonographie der Sieben Todsünden in der niederländischen und deutschen Malerei und Graphik von 1450-1560. Bonner Studien zur Kunstgeschichte, 8. Münster and Hamburg: LIT, 1993. Katzenellenbogen, Adolf. Allegories of the Virtues and Vices in Mediaeval Art from Early Christian Times to the Thirteenth Century. Translated by Alan J. P. Crick. London: The Warburg Institute, 1939. Reprint, Toronto: Univ. of Toronto Press, 1989. Norman, Joanne S. Metamorphoses of an Allegory: The Iconography of the Psychomachia in Medieval Art. American University Studies, series IX: History, 29. New York: Lang, 1988. O'Reilly, Jennifer. Studies in the Iconography of the Virtues and Vices in the Middle Ages. New York, London: Garland Publishing, Inc., 1988. Schweitzer, Franz-Josef. Tugend und Laster in illustrierten didaktischen Dichtungen des späten Mittelalters: Studien zu Hans Vintlers "Blumen der Tugend" und zu "Des Teufels Netz." Germanistische Texte und Studien, 41. Hildesheim: Olms, 1993. Virtue and Vice: The Personifications in the Index of Christian Art. Edited by Colum Hourihane. Princeton: Index of Christian Art, 2000. Voelkle, William M. "Morgan Manuscript M.1001: The Seven Deadly Sins and the Seven Evil Ones." In Monsters and Demons in the Ancient and Medieval Worlds: Papers Presented in Honor of Edith Porada, edited by Ann E. Farkas et al., 101-14. Mainz: P. von Zabern, 1987. d.
Origins and the Vices Brakke, David. "The Making of Monastic Demonology: Three Ascetic Teachers on Withdrawal and Resistance." Church History 70 (2001): 19-48. Goehring, James E. Ascetics, Society, and the Desert: Studies in Early Egyptian Monasticism. Studies in Antiquity and Christianity. The Roots of Egyptian Christianity, [6]. Harrisburg, PA: Trinity Press International, 1999. Nussbaum, Martha C. The Therapy of Desire: Theory and Practice in Hellenistic Ethics. Princeton: Princeton Univ. Press, 1994. O'Laughlin, Michael. "The Anthropology of Evagrius Ponticus and Its Sources." In Origen of Alexandria: His World and His Legacy, edited by Charles Kannengiesser and William L. Petersen, 357-73. Notre Dame, IN: Univ. of Notre Dame Press, 1988. Röhser, Günter. Metaphorik und Personifikation der Sünde: Antike Sündenvorstellungen und paulinische Hamartia. Sorabji, Richard. Emotion and Peace of Mind: From Stoic Agitation to Christian Temptation. Oxford: Oxford Univ. Press, 2000. Stewart, Columba. Cassian the Monk. New York and Oxford: Oxford Univ. Press, 1998. Straw, Carole. Gregory the Great: Perfection in Imperfection. Transformation of the Classical Heritage, 14. Berkeley, Los Angeles, and London: Univ. of California Press, 1988. Wibbing, Siegfried. Die Tugend- und Lasterkataloge im Neuen Testament. Beihefte zur Zeitschrift für die neutestamentliche Wissenschaft und die Kunde der älteren Kirche, 25. Berlin: Töpelmann, 1959. e. Pastoralia and the Vices Handling Sin: Confession in the Middle Ages. Edited by Peter Biller and A. J. Minnis. York: York Medieval Press, 1998. Michaud-Quantin, Pierre. Sommes de casuistique et manuels de confession au Moyen Age (XIIe-XIVe siècles). Analecta mediaevalia Namurcensia, 13. Louvain, Lille, and Montreal: Nauwelaerts, 1962. Owst, Gerald R. Literature and Pulpit in Medieval England. 2nd rev. ed. Oxford: Blackwell, 1961. ---. Preaching in Medieval England: An Introduction to Sermon Manuscripts of the Period c. 1350-1450.
Cambridge, Eng.: Cambridge Univ. Press, 1926. Reprint, New York: Russell & Russell, 1965. Spencer, H. Leith. English Preaching in the Late Middle Ages. Oxford: Clarendon; New York: Oxford Univ. Press, 1993. References 1. http://www.trinity.edu/rnewhaus/outline.html#1 2. http://www.trinity.edu/rnewhaus/outline.html#2 3. http://www.trinity.edu/rnewhaus/outline.html#3 4. http://www.trinity.edu/rnewhaus/outline.html#4 5. http://www.trinity.edu/rnewhaus/outline.html#5 6. http://www.trinity.edu/rnewhaus/vita.html 7. http://www.clarehall.cam.ac.uk/ 8. http://www.trinity.edu/rnewhaus/application.html 9. mailto:rnewhaus at trinity.edu From checker at panix.com Wed May 25 00:48:33 2005 From: checker at panix.com (Premise Checker) Date: Tue, 24 May 2005 20:48:33 -0400 (EDT) Subject: [Paleopsych] CS Monitor: Is sin in? Message-ID: Is sin in? http://www.csmonitor.com/2003/0807/p15s02-lire.htm 2003.8.7 (note date) Centuries after the seven deadly sins became the ultimate measure of moral depravity, a new series of essays asks if they are still relevant. By [2]Kim Campbell | Staff writer of The Christian Science Monitor When Pope Gregory I refined the list of seven deadly sins toward the end of the 6th century, he never guessed that one day they'd become ice cream flavors. But 1,400 years later, people can lick gluttony off a stick while they ponder that particular sin and its infamous brethren - anger, pride, envy, sloth, lust, and greed. Clerics aren't too happy about the sins being trivialized, especially of late in Europe, where the gimmicky ice cream originates. They also have to contend with a group of chefs in France, who earlier this year petitioned Pope John Paul II to take gluttony off the list - or, rather, to change the current word used in French to describe the sin because its meaning has changed. All the hoopla lends itself to the idea that the deadlies don't have the fearsome reputation they once did. 
Even those who are religious often have to be reminded of what's on the list. But at the same time - maybe thanks to all the reality TV - analyzing human nature is more interesting than ever, and these historic vices present a good place to start. Popular culture takes a stab at exploring them every few years, through an MTV special, or a movie (1995's grisly "Seven"), or, more commonly, a book. This month, a more thoughtful approach to the deadly seven will commence when the first in a series of palm-sized volumes on the sins is published by Oxford University Press. The chosen writers - some religious, some not - examine each of the deadly sins and what they say about contemporary society and human nature. The authors are adapting their books from related talks they gave at the New York Public Library, taking intriguing positions on the sins, which they often find are more or less deadly than advertised. Essayist Joseph Epstein, for example, argues that envy should still have a place among the top vices, being destructive to character. But gluttony? Novelist Francine Prose isn't convinced, especially given the way overweight people are treated. "These days, few people seriously consider the idea that eating too much or enjoying one's food is a crime against God," she writes in the second book in the series. Instead, thanks to modern standards of health and beauty, "The wages of sin have changed, and now involve a version of hell on earth: the pity, contempt, and distaste of one's fellow mortals." In her view, the deadly sins are a checklist for the conscience, she explains in an interview. "Frankly, I don't think people are thinking about sin enough." Secular and religious observers alike argue that the deadly sins are relevant today because they have universal application - in areas like psychology and human relations. 
At least one theologian points to the unbridled American consumer culture as an example of how greed, like some of the other sins, has become institutionalized. "[The seven deadly sins] are useful and relevant today not only in terms of charting an individual's spiritual progress, but also in looking at patterns that we've developed socially and culturally, ways we oppress ourselves and one another," says John Grabowski, professor of moral theology at the Catholic University of America in Washington. Checklists of vices have existed since before Christianity, when outlining offensive actions was commonplace. Although the seven deadlies aren't in the Bible as a group, hints of them can be found in Proverbs 6, verses 16 to 19, for example, where "a proud look, a lying tongue, and hands that shed innocent blood," are enumerated as among seven things the "Lord doth hate." The list that Pope Gregory I refined was first arrived at by a 4th-century monk named Evagrius of Pontus. He chose eight: gluttony, lust, avarice (greed), sadness, anger, acedia (spiritual sloth), vainglory, and pride. Gregory I added envy, and merged vainglory with pride and acedia with sadness (which eventually became sloth). Especially in the tradition of the monks, the list of vices became a teaching tool for organizing thought to fight the "evil within," which threatened advances in holiness and virtue, says Professor Grabowski. Over the centuries, artists and authors have perpetuated the myth and meaning of the sins by depicting them in their works, including assigning them animal form. Today, Roman Catholics refer to them as "cardinal" sins because they are the ones that lead to all others. Still, everyone from James Bond's creator, Ian Fleming, to readers of the nondenominational website [3]Beliefnet.com have suggested other vices for the list - including snobbery and narcissism. Ms.
Prose jokes that there should also be an allowance for offenses like "grabbing a parking place you haven't actually been waiting for." That would probably qualify as greed, the subject Phyllis Tickle is tackling in her installment in the series, due out next April. An author and expert on religion in America, Ms. Tickle argues only somewhat facetiously that her sin is the worst, "the mother of the sins." She explains that one idea she addresses in her book sounds at first like heresy: That without sin, there would be no faith. "Ultimately, sin is the thing that drives us back on the path to righteousness.... Without [sin] there is no human progress. Human life is dependent on sin," she argues. What's compelling and perhaps most progressive about discussions of the deadly sins today is that they are not just from the perspective of the overtly religious. Mr. Epstein, for example, writes in "Envy," the first book in the series due out this month, that for those who don't embrace the notion of sin, "I would invite you instead to consider envy less as a sin than as very poor mental hygiene." He recommends fighting it off, as it "clouds thought, clobbers generosity, precludes any hope of serenity, and ends in shriveling the heart." Even those at the far end of the liberal spectrum, like controversial sex columnist and author Dan Savage, see a role for these moral guideposts in determining individual behavior. His approach to researching the sins is perhaps not for everyone - he set out to commit all seven as a way to prove that there are ethical sinners for his 2002 book "Skipping Towards Gomorrah: The Seven Deadly Sins and the Pursuit of Happiness in America." But his view on how to apply them is not uncommon: "What matters is not that you have lust in your heart, it's what you do with that," he says. Perhaps that approach is why the seven deadly sins are increasingly being suggested as a tool for psychologists. 
It's an idea that was explored in depth by Jewish scholar Solomon Schimmel in his recent book "The Seven Deadly Sins: Jewish, Christian and Classical Reflections on Human Psychology." In an interview, he sums up his goal in writing the book as trying to "help people lead happier lives by using the wisdom of religious traditions." In the book, he argues that secular psychology needs to confront the role of values in everyday life if it hopes to ameliorate anxieties: "We need to reclaim the rich insights into human nature of earlier moral reflection if we want to lead more satisfying lives." His modern approach - like that of the Oxford writers - offers up the development of the sins. As Grabowski describes it, it's clear that "maybe this isn't just an esoteric ancient religious concept, that it really has some ongoing contribution to make in terms of behavioral sciences, other sciences, and their investigation of the human person today." References 2. http://www.csmonitor.com/cgi-bin/encryptmail.pl?ID=CBE9EDA0C3E1EDF0E2E5ECEC 3. http://Beliefnet.com/ From checker at panix.com Wed May 25 00:49:50 2005 From: checker at panix.com (Premise Checker) Date: Tue, 24 May 2005 20:49:50 -0400 (EDT) Subject: [Paleopsych] SW: Language and the Origin of Numerical Concepts Message-ID: Cognition: Language and the Origin of Numerical Concepts http://scienceweek.com/2004/sa041210-5.htm The following points are made by R. Gelman and C.R. Gallistel (Science 2004 306:441): 1) Intuitively, our thoughts are inseparable from the words in which we express them. This intuition underlies the strong form of the Whorfian hypothesis (after Benjamin Whorf, 1897-1941), namely, that language determines thought (aka "linguistic determinism"). Many cognitive scientists find the strong hypothesis unintelligible and/or indefensible (1), but weaker versions of it, in which language influences how we think, have many contemporary proponents (2,3).
2) The strong version rules out the possibility of thought in animals and humans who lack language, although there is an abundant experimental literature demonstrating quantitative inference about space, time, and number in preverbal humans (4), in individuals with language impairments (5), and in rats, pigeons, and insects. Another problem is the lack of specific suggestions as to how exposure to language could generate the necessary representational apparatus. It would be wonderful if computers could be made to understand the world the way we do just by talking to them, but no one has been able to program them to do this. This failure highlights what is missing from the strong form of the hypothesis, namely, suggestions as to how words could make concepts take form out of nothing. 3) The antithesis of the strong Whorfian hypothesis is that thought is mediated by language-independent symbolic systems, often called the language(s) of thought. Under this account, when humans learn a language, they learn to express in it concepts already present in their prelinguistic system(s). Their prelinguistic systems for representing the world are language-like only in that they are compositional: Larger, more complex meanings (concepts) are built up by the combination of elementary meanings. 4) Recently reported experimental studies with innumerate Piraha and Munduruku Indian subjects from the Brazilian Amazonia give evidence regarding the role of language in the development of numerical reasoning. Either the subjects in these reports have no true number words or they have consistent unambiguous words for one and two and more loosely used words for three and four. Moreover, they do not overtly count, either with number words or by means of tallies. 
Yet, when tested on a variety of numerical tasks -- naming the number of items in a stimulus set, constructing sets of equivalent number, judging which of two sets is more numerous, and mental addition and subtraction -- these subjects gave results indicative of an imprecise nonverbal representation of number, with a constant level of imprecision, measured by the Weber fraction. The Weber fraction for these subjects is roughly comparable to that of numerate subjects when they do not rely on verbal counting. In one of the reports, the stimulus sets had as many as 80 items, so the approximate representation of number in these subjects extends to large numbers. 5) Among the most important results in these reports are those showing simple arithmetic reasoning -- mental addition, subtraction, and ordering. These findings strengthen the evidence that humans share with nonverbal animals a language-independent representation of number, with limited, scale-invariant precision, which supports simple arithmetic computation and which plays an important role in elementary human numerical reasoning, whether verbalized or not (5). The results do not support the strong Whorfian view that a concept of number is dependent on natural language for its development. Indeed, they are evidence against it. The results are, however, consistent with the hypothesis that learning to represent numbers by some communicable notation (number words, tally marks, numerals) might facilitate the routine recognition of exact numerical equality. References (abridged): 1. L. Gleitman, A. Papafragou, in Handbook of Thinking and Reasoning, K. J. Holyoak, R. Morrison, Eds. (Cambridge Univ. Press, New York, in press) 2. D. Gentner, S. Golden-Meadow, Eds., Language and Mind: Advances in the Study of Language and Thought (MIT Press, Cambridge, MA, 2003) 3. S. C. Levinson, in Language and Space, P. Bloom, M. Peterson, L. Nadel, M. Garrett, Eds. (MIT Press, Cambridge, MA, 1996), Chap. 4 4. R. Gelman, S. A. 
Cordes, in Language, Brain, and Cognitive Development: Essays in Honor of Jacques Mehler, E. Dupoux, Ed. (MIT Press, Cambridge, MA, 2001), pp. 279-301 5. B. Butterworth, The Mathematical Brain (McMillan, London, 1999) Science http://www.sciencemag.org -------------------------------- Related Material: COGNITIVE SCIENCE: NUMBERS AND COUNTING IN A CHIMPANZEE Notes by ScienceWeek: In this context, let us define "animals" as all living multi-cellular creatures other than humans that are not plants. In recent decades it has become apparent that the cognitive skills of many animals, especially non-human primates, are greater than previously suspected. Part of the problem in research on cognition in animals has been the intrinsic difficulty in communicating with or testing animals, a difficulty that makes the outcome of a cognitive experiment heavily dependent on the ingenuity of the experimental approach. Another problem is that when investigating the non-human primates, the animals whose cognitive skills are closest to that of humans, one cannot do experiments on large populations because such populations either do not exist or are prohibitively expensive to maintain. The result is that in the area of primate cognitive research reported experiments are often "anecdotal", i.e., experiments involving only a few or even a single animal subject. But anecdotal evidence can often be of great significance and have startling implications: a report, even in a single animal, of important abstract abilities, numeric or conceptual, is worthy of attention, if only because it may destroy old myths and point to new directions in methodology. In 1985, T. Matsuzawa reported experiments with a female chimpanzee that had learned to use Arabic numerals to represent numbers of items. This animal (which is still alive and whose name is "Ai") can count from 0 to 9 items, which she demonstrates by touching the appropriate number on a touch-sensitive monitor. 
Ai can also order the numbers from 0 to 9 in sequence. The following points are made by N. Kawai and T. Matsuzawa (Nature 2000 403:39): 1) The authors report an investigation of Ai's memory span by testing her skill in numerical tasks. The authors point out that humans can easily memorize strings of codes such as phone numbers and postal codes if they consist of up to 7 items, but above this number of items, humans find memorization more difficult. This "magic number 7" effect, as it is known in human information processing, represents an apparent limit for the number of items that can be handled simultaneously by the human brain. 2) The authors report that the chimpanzee Ai can remember the correct sequence of any 5 numbers selected from the range 0 to 9. 3) The authors relate that in one testing session, after choosing the first correct number in a sequence (all other numbers still masked), "a fight broke out among a group of chimpanzees outside the room, accompanied by loud screaming. Ai abandoned her task and paid attention to the fight for about 20 seconds, after which she returned to the screen and completed the trial without error." 4) The authors conclude: "Ai's performance shows that chimpanzees can remember the sequence of at least 5 numbers, the same as (or even more than) preschool children. Our study and others demonstrate the rudimentary form of numerical competence in non-human primates." Nature http://www.nature.com/nature -------------------------------- Related Material: COGNITIVE SCIENCE: ON THE MENTAL REPRESENTATION OF NUMBER The following points are made by A. Plodowski et al (Current Biology 2003 13:2045): 1) How are numerical operations implemented within the human brain?
It has been suggested that there are at least three different codes for representing number: a verbal code that is used to manipulate number words and perform mental numerical operations (e.g., multiplication); a visual code that is used to decode frequently used visual number forms (e.g., Arabic digits); and an abstract analog code that may be used to represent numerical quantities [1]. Furthermore, each of these codes is associated with a different neural substrate [1-3]. 2) Several features of numbers are of interest to cognitive neuroscientists. First, investigations of animals and infants indicate that the ability to process numerical magnitude can be independent of language. Second, identical numerical quantities can be represented in several different notations. Third, different numerical operations can be performed on the same operands. Dehaene [1] has proposed a triple code model that distinguishes between an auditory verbal code, a visual code for Arabic digits, and an analog magnitude code that represents numerical quantities as variable distributions of brain activation. Dehaene and colleagues [1-3] propose that there are specific relationships between individual numerical operations and different numerical codes. The analog magnitude code is used for magnitude comparison and approximate calculation, the visual Arabic number form for parity judgments and multidigit operations, and the auditory verbal code for arithmetical facts learned by rote (e.g., addition and multiplication tables). 3) Previous studies have used behavioral and neuroimaging techniques (both ERP and fMRI) to explore the effects of notation (i.e., Arabic versus verbal code) on magnitude estimation [2,3]. 
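The signature of the analog magnitude code just described -- comparison accuracy governed by the ratio of the two quantities, with imprecision measured by a Weber fraction, as in the Amazonian studies summarized earlier -- can be sketched in a few lines. The Gaussian noise model and the Weber fraction of 0.15 below are illustrative assumptions, not values taken from these papers:

```python
import random

def noisy_magnitude(n, w, rng):
    """Analog-magnitude representation of n: a Gaussian whose spread
    grows in proportion to n (scalar variability, i.e. Weber's law)."""
    return rng.gauss(n, w * n)

def compare_accuracy(a, b, w=0.15, trials=10_000, seed=0):
    """Fraction of trials on which the noisy representations of
    a and b (with a < b) are ordered correctly."""
    rng = random.Random(seed)
    hits = sum(noisy_magnitude(a, w, rng) < noisy_magnitude(b, w, rng)
               for _ in range(trials))
    return hits / trials

# Accuracy tracks the ratio of the two numbers, not their difference:
# 8 vs 16 (ratio 2.0) is discriminated far more reliably than
# 9 vs 10 (ratio 1.1), even though both pairs involve small numbers.
easy = compare_accuracy(8, 16)
hard = compare_accuracy(9, 10)
```

Raising the assumed Weber fraction w degrades the 9-versus-10 case long before it affects the 8-versus-16 case, which is one way to picture the "constant level of imprecision" these subjects showed across set sizes.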
The authors extend these studies using dense-sensor event-related EEG recording techniques to investigate the temporal pattern of notation-specific effects observed in a parity judgement (odd versus even) task in which single numbers were presented in one of four different numerical notations. Contrasts between different notations demonstrated clear modulations in the visual evoked potentials (VEP) recorded. The authors observed increased amplitudes for the P1 and N1 components of the VEP that were specific to Arabic numerals and to dot configurations but differed for random and recognizable (die-face) dot configurations. The authors suggest these results demonstrate clear, notation-specific differences in the time course of numerical information processing and provide electrophysiological support for the triple-code model of numerical representation.[4,5] References (abridged): 1. Dehaene, S. (1992). Varieties of numerical abilities. Cognition 44, 1-42 2. Dehaene, S. (1996). J. Cogn. Neurosci. 8, 47-68 3 Pinel, P., Dehaene, S., Riviere, D., and LeBihan, D. (2001). Neuroimage 14, 1013-1026 4. Guthrie, D. and Buchwald, J.S. (1991). Significance testing of difference potentials. Psychophysiology 28, 240-244 5. Nunez, P.L., Silberstein, R.B., Cadusch, P.J., Wijesinghe, R.S., Westdorp, A.F., and Srinivasan, R. (1994). A theoretical and experimental study of high resolution EEG based on surface Laplacians and cortical imaging. Electroencephalogr. Clin. Neurophysiol. 90, 40-57 Current Biology http://www.current-biology.com From checker at panix.com Wed May 25 00:50:04 2005 From: checker at panix.com (Premise Checker) Date: Tue, 24 May 2005 20:50:04 -0400 (EDT) Subject: [Paleopsych] SW: On the Origins of Human Language Message-ID: Anthropology: On the Origins of Human Language http://scienceweek.com/2004/sa041203-3.htm The following points are made by Gary F. 
Marcus (Nature 2004 431:745): 1) If, as Francois Jacob argued, evolution is like a tinkerer who builds something new by using whatever is close at hand, then from what is the human capacity for language made? Most accounts of the evolution of language have focused on characterizing changes that are internal to the language system. Were the earliest forms of language spoken or (like sign language) gestured? Did language arise suddenly? Or did it emerge gradually, progressing step by step from a simple one-word "protolanguage" (limited to brief comments about the "here and now") into a more complex system that combined individual words into structured meaningful sentences encompassing the future, the past and the possible -- as well as the concrete present? Regardless of how these questions are resolved, if we seek the ultimate origins of language, we also need to look further back, beyond the first protolinguistic systems, to whatever prelinguistic systems may have preceded any form of language. 2) Possible prelinguistic precursors might include systems for planning or sequencing complex events, categorization, automating repetitive actions, and representing space and time. In each case, there are parallels between candidate prelinguistic cognitive (or motor) precursors and systems found in language. For example, many animals are able to construct mental maps for navigation, and all known languages draw heavily on spatial metaphors. Thus, it is tempting to conclude that machinery for the mental representation of space plays some role in -- or is at the very least available to -- the machinery for language. 3) But parallels alone are not enough to establish shared lineage between two systems -- they could instead represent convergent (independent) evolution. For example, a language system could have evolved its own machinery for automating repeated tasks, independent of pre-existing machinery for automatizing other cognitive functions. 
A more telling way of establishing prelinguistic ancestry could come from evolutionary contrivances, properties of language that existed not because of some selective advantage, but simply because they have descended from ancestral systems evolved for other purposes. Just as the panda's thumb is not a true digit, but a modified sesamoid bone pressed into service for gripping bamboo, some properties of our capacity for language may be better understood not as optimal solutions to a system for communication, but as cobbled-together remnants of ancestral cognitive systems. 4) In language, one good candidate comes from the study of memory. According to an optimal design, if the capacity for understanding language were evolved from scratch, it would be possible to reliably retrieve individual bits of syntactic structure on the basis of their location in a hierarchical structure, independently of their content -- as in most digital computers. Instead, human language systems seem to rely on "content-addressable" memory, a form of memory -- widespread in the vertebrate world and with an apparently ancient evolutionary source -- that retrieves information directly on the basis of its content, rather than through location. Unlike a computer's binary-tree structure, content-dependent memory in mammalian brains is subject to degradation over time and to interference between similar or intervening items. 5) Human speakers are thus less likely to resolve the relation between "admired" and "the newspaper" in a sentence such as: "It was the newspaper that was published by the undergraduates that the editor admired," than in the briefer sentence "It was the newspaper that the editor admired." In languages such as English that lack rich case-marking, in most cases listeners can correctly interpret only two levels of embedding, not because of a strict limit on the size of representable binary trees, but because similar items become confused in memory.(1-5) References (abridged): 1. 
Christiansen, M. H. & Kirby, S. Language Evolution (Oxford University Press, 2003) 2. Gould, S. J. The Panda's Thumb (Norton, 1980) 3. Jackendoff, R. Foundations of Language: Brain, Meaning, Grammar, Evolution (Oxford University Press, 2002) 4. Marcus, G. F. The Birth of the Mind (Basic Books, 2004) 5. McElree, B. et al. J. Mem. Language 48, 67-91 (2003) Nature http://www.nature.com/nature -------------------------------- Related Material: LANGUAGE GRAMMAR PROPOSED TO HAVE EVOLVED BY NATURAL SELECTION Notes by ScienceWeek: Language is considered a quintessentially human trait, and attempts to shed light on the evolution of human language have come from diverse areas including studies of primate social behavior, the diversity of existing human languages, the development of language in children, the genetic and anatomical correlates of language competence, and theoretical studies of cultural evolution, learning, and lexicon formation. One major question is whether human language is a product of evolution or a side-effect of a large and complex brain evolved for non-linguistic purposes. The following points are made by M.A. Nowak and D.C. Krakauer (Proc. Nat. Acad. Sci. 1999 96:8028): 1) The authors provide an approach to language evolution based on evolutionary game theory, exploring the ways in which protolanguage can evolve in a nonlinguistic society and how specific signals can become associated with specific objects. 2) The authors argue that grammar originated as a simplified rule system that evolved by natural selection to reduce mistakes in communication, and they suggest their theory provides a systematic approach for thinking about the origin and evolution of human language. Proc. Nat. Acad. Sci.
http://www.pnas.org -------------------------------- Related Material: ON THE GESTURAL ORIGINS OF HUMAN LANGUAGE Notes by ScienceWeek: A view currently held by many anthropologists and linguistics researchers is that the remarkable flexibility of human language is achieved at least in part through the human invention of grammar, a recursive set of rules that allows the generation of sentences of any desired complexity. The linguist Noam Chomsky has attributed this to a unique human endowment termed "universal grammar", with Chomsky suggesting that all human languages are variants of this fundamental endowment. The following points are made by Michael C. Corballis (American Scientist Mar-Apr 1999 87:138): 1) There is little doubt that the great apes (orangutan, gorilla, chimpanzee) (and perhaps other species such as dolphins) can use symbols to represent actions and objects in the real world, but these animals lack nearly all the other ingredients of true language. 2) Since the common ancestor of human beings and chimpanzees lived approximately 5 million years ago, it is a reasonable inference that grammatical language must have evolved in the hominid line (i.e., the line of human primates) at some point following the split from the line that led to the modern chimpanzee. There has been much disagreement as to when this might have happened. 3) One major view holds that it is impossible to conceive of grammar as having been formed incrementally; grammar therefore must have evolved as a single catastrophic event, probably late in hominid evolution. But many researchers hold a contrary view, that language evolved gradually, shaped by natural selection, and that the cognitive prerequisites of language are already present in the great apes and antedated the split of our hominid ancestors from the chimpanzee line, probably by several million years. 
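Marcus's memory argument in the piece above turns on the contrast between location-addressed retrieval (a computer fetches an item by its position, regardless of content) and content-addressed retrieval (a cue is matched against the content of every stored trace, so similar traces interfere). A minimal sketch of that contrast, using word-overlap as a crude stand-in for cue matching; the scoring scheme and the example traces are illustrative assumptions, not from the paper:

```python
# Location-addressed memory (computer-style): an item is fetched by
# position alone, unaffected by what it is or what else is stored.
slots = ["published", "admired", "edited"]
by_location = slots[1]            # always "admired"

# Content-addressed memory: a retrieval cue is matched against the
# content of every stored trace, so traces sharing content compete.
def retrieve(cue, memory):
    """Return all traces matching the cue best (a tie = interference)."""
    score = lambda trace: len(cue & trace)
    best = max(score(trace) for trace in memory)
    return [trace for trace in memory if score(trace) == best]

memory = [
    {"newspaper", "published", "undergraduates"},
    {"newspaper", "admired", "editor"},
]
distinct = retrieve({"editor"}, memory)     # one trace matches uniquely
confused = retrieve({"newspaper"}, memory)  # both traces tie: interference
```

Adding further traces that contain "newspaper" only widens the tie, which gives a rough picture of why the doubly embedded sentence Marcus quotes is harder to resolve than the short one: similar or intervening items blur the cue.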
4) The author suggests that at least a partial reconciliation of these alternative perspectives may be that language emerged not from vocalization, but from manual gestures, and switched to a vocal mode relatively recently in hominid evolution, perhaps with the emergence of Homo sapiens. This is an old idea, apparently first suggested by Condillac in the 18th century, but argument in its favor has continued to grow. 5) The author points out that there are countless different sign languages invented by deaf people all over the world, and there is little doubt that these are genuine languages with fully developed grammars. The spontaneous emergence of sign languages among deaf communities everywhere confirms that gestural communication is as natural to the human condition as is spoken language. Indeed, children exposed from an early age only to sign language go through the same basic stages of acquisition as children learning to speak, including a stage when they "babble" silently in sign. 6) The author proposes the following speculative scenario concerning the historical development of human language: a) 6 or 7 million years ago: Simple gestures first anticipated more complex forms of communication, shortly after the human line diverged from the great apes. At this stage vocalizations served only as emotional cries and alarm calls. b) Approximately 5 million years ago: With the advent of bipedalism, a more sophisticated form of gesturing involving hand signals may have evolved among the early hominids now labelled as "Australopithecus". c) Approximately 2 million years ago: In association with the increasing brain size of the genus Homo, hand gestures became fully syntactic (i.e., with syntax; with ordered arrangements), but vocalizations also became prominent. d) 100,000 years ago: Homo sapiens switched to speech as its primary means of communication, with gestures now playing a secondary role.
e) Modern times: The development of telecommunication now permits the routine use of spoken language in the complete absence of hand gestures, but even so, many people find themselves gesturing when they speak on the telephone. 7) Concerning the question of what it was that enabled our species to prevail over other large-brained hominids, the author concludes: "Perhaps the most plausible answer is that they prevailed because of superior technology. But that technology might have resulted, not from an increase in brain size or intelligence, but from a switch from manual to vocal language that allowed them to use their hands for the manufacture of tools and weapons and their voices for instruction." American Scientist http://www.americanscientist.org From checker at panix.com Wed May 25 00:50:17 2005 From: checker at panix.com (Premise Checker) Date: Tue, 24 May 2005 20:50:17 -0400 (EDT) Subject: [Paleopsych] SW: On Cognitive Memory Message-ID: Neuroscience: On Cognitive Memory http://scienceweek.com/2004/sb041126-3.htm The following points are made by Yasushi Miyashita (Science 2004 306:435): 1) Since the pioneering observations on patient H.M., who developed a severe and selective deficit in the formation of explicit (or declarative) memory after a bilateral resection of the medial temporal lobe (i.e., the hippocampus and nearby regions), subsequent studies of patients have located the source of various types of impairment in explicit memory in many brain areas (1). Notably, although patients with localized frontal lobe lesions do not have an amnesia typically observed in patients with medial temporal lobe lesions, they do exhibit impairments in memory of temporal context or temporal order, memory of the source of facts or events, or metamemory (i.e., knowledge about one's memory capabilities and about strategies that can aid memory) (2-4). 
2) The identified brain-wide distributed network, called here the "cognitive memory system", is composed of three major subsystems, namely, the medial temporal lobe, the temporal cortex, and the frontal cortex. Although the ultimate storage sites for explicit memories appear to be in the cortex [but see (5) for another strong position], the medial temporal lobe plays a critical enabling role necessary for storage to take place. Domain-specific cortical regions in the temporal lobes are reactivated during remembering and contribute to the contents of a memory. The reactivation process is mediated by various signals, such as the top-down signal from the prefrontal cortex or the backward signal from the limbic cortex. Frontal regions mediate the strategic attempts for retrieval and encoding and also monitor its outcome, with the dissociated frontal regions making functionally separate contributions. 3) This large-scale cognitive network was initially identified in humans by using neuropsychology and functional imaging. However, molecular, cellular, and network components of this cognitive system have been systematically dissected by recent technical advancements, particularly in animal studies. These include cell type-restricted gene manipulations in mice, a combination of molecular biology and single-unit recording in monkeys, and a sophisticated scan design of event-related functional magnetic resonance imaging (fMRI) in humans. 4) In summary: a) A brain-wide distributed network orchestrates cognitive memorizing and remembering of explicit memory (i.e., memory of facts and events). The network was initially identified in humans and is being systematically investigated in molecular/genetic, single-unit, lesion, and imaging studies in animals. b) The types of memory identified in humans are extended into animals as episodic-like (event) memory or semantic-like (fact) memory. 
The unique configurational association between environmental stimuli and behavioral context, which is likely the basis of episodic-like memory, depends on neural circuits in the medial temporal lobe, whereas memory traces representing repeated associations, which is likely the basis of semantic-like memory, are consolidated in the domain-specific regions in the temporal cortex. These regions are reactivated during remembering and contribute to the contents of a memory. c) Two types of retrieval signal reach the cortical representations. One runs from the frontal cortex for active (or effortful) retrieval (top-down signal), and the other spreads backward from the medial temporal lobe for automatic retrieval. By sending the top-down signal to the temporal cortex, frontal regions manipulate and organize to-be-remembered information, devise strategies for retrieval, and also monitor the outcome, with dissociated frontal regions making functionally separate contributions. d) The challenge is to understand the hierarchical interactions between these multiple cortical areas, not only with a correlational analysis but also with an interventional study demonstrating the causal necessity and the direction of the causality. References (abridged): 1. L. R. Squire, D. L. Schacter, Neuropsychology of Memory (Guilford, New York, ed. 3, 2002) 2. D. T. Stuss, D. F. Benson, The Frontal Lobes (Raven, New York, 1986) 3. J. M. Fuster, The Prefrontal Cortex: Anatomy, Physiology, and Neuropsychology of the Frontal Lobe (Lippincott-Raven, Philadelphia, 1997). 4. M. Petrides, in Handbook of Neuropsychology, F. Boller, J. Grafman, Eds. (Elsevier, Amsterdam, 2000). 5. J. O'Keefe, L. Nadel, The Hippocampus as a Cognitive Map (Oxford Univ. Press, Oxford, 1978). Science http://www.sciencemag.org -------------------------------- Related Material: ON THE NEUROBIOLOGY OF LEARNING AND MEMORY The following points are made by H. Okano et al (Proc. Nat. Acad. Sci. 
2000 97:12403): 1) The authors define memory as a behavioral change caused by an experience, and they define learning as a process for acquiring memory. According to these definitions, there are different kinds of memory. Some memories, such as those concerning events and facts, are available to our consciousness; this type of memory is called "declarative memory". However, another type of memory, called "procedural memory", is not available to consciousness. This is the memory that is needed, for example, to use a previously learned skill. We can improve our skills through practice: with training, the ability to play tennis, for example, will improve. Declarative memory and procedural memory are independent: there are patients with impaired declarative memory whose procedural memory is completely normal. Because of this fact, current researchers believe there must be separate mechanisms for each type of memory, and that these separate mechanisms probably also require separate brain areas. 2) The *cerebrum and *hippocampus are considered important for declarative memory, and the *cerebellum is considered important for procedural memory. The current belief is that memory requires alterations in the brain. The most popular candidate site for memory storage is the *synapse, where nerve cells communicate with each other. A change in the transmission efficacy at the synapse (called "synaptic plasticity") has been considered to be the cause of memory, and a particular pattern of synaptic usage or stimulation (conditioning stimulation) is believed to induce synaptic plasticity. Many questions remain to be answered, such as how synaptic plasticity is induced and how synaptic plasticity is implicated in learning and memory. 3) One current frontier in the study of synaptic plasticity is the attempt to clarify the role of plasticity in learning and memory. 
The strategy has been to examine the correlation between synaptic plasticity and learning by inhibiting the plasticity in a living animal. To do this, investigators have used inhibitors for certain molecules that are apparently required for synaptic plasticity. Another set of useful tools involves genetically engineered mutant mice, such as "knockout" and transgenic mice. A "knockout" mouse is a mutant mouse that is deficient in a specific native molecule. By using mutant mice, the relationship between synaptic plasticity and learning ability has been examined in detail. Proc. Nat. Acad. Sci. http://www.pnas.org -------------------------------- Notes by ScienceWeek: cerebrum: What is called the "cerebrum" is the bulk of brain as seen by the naked eye, the "great ravelled knot" that sits on top of the phylogenetically older parts (brainstem and midbrain) of the whole brain. The surface of the cerebrum, an enormously extended surface because of the many deep folds of the cerebrum, is a thin sheet called the "cerebral cortex" (cortex = rind or bark). hippocampus: A region of the cerebral cortex in the *medial part of the temporal lobe. In humans, among other functions, the hippocampus is apparently involved in short-term memory, and analysis of the neurological correlates of learning behavior in animals indicates that the hippocampus is also involved in memory in other species. cerebellum: The human cerebellum is about the size of a large apple, is placed at the lower back of the head under the optic lobes of the cerebrum, and is apparently involved in the input-output control of automatic sensorimotor functions. 
If you are sitting at your breakfast table, holding a newspaper in one hand, and using the other hand to routinely and repetitively dip a spoon into cold cereal and bring the cold cereal to your mouth while you read the newspaper, it is the cerebellum which is governing the automatic feeding movements while your cerebral cortex processes the information that you read. synapse: In general, nerve cells have a single long extension (the "axon") that propagates the electrical output (the action potential) of the cell. The term "synapse" refers to the junction between the terminal of a neuron's axon and another neuron. When studying the synapse, the first neuron is called the "presynaptic" neuron, and the second neuron is called the "postsynaptic" neuron. -------------------------------- Related Material: NEUROBIOLOGY: ON THE BIOLOGICAL BASIS OF MEMORY Notes by ScienceWeek Exactly 100 years ago, two psychologists, G.E. Mueller and A. Pilzecker, proposed what came to be called the perseveration-consolidation hypothesis of memory. In studies with human subjects, Mueller and Pilzecker found that memory of newly learned information was disrupted by the learning of other information shortly after the original learning, and they suggested that processes underlying new memories initially persist in a fragile state and then consolidate over time. This consolidation hypothesis still guides research, particularly research in neurobiology on the time-dependent involvement of neural systems and cellular processes enabling lasting memory. At the present time, the concept of "synaptic plasticity" underlies nearly all theories of memories, the term referring to changes in the behavior of the junction (synapse) between two nerve cells resulting from past history. Two prominent aspects of synaptic plasticity considered to be related to memory are "facilitation" and "potentiation". 
The term "facilitation" refers to a progressive increase in the amount of *neurotransmitter substance released at a synapse by successive nerve impulses (action potentials), the increase occurring during an input barrage consisting of repetitive stimulation (stimulus train). The term "potentiation" refers to an increase in neurotransmitter substance released by an action potential following repetitive stimulation of a synapse. Both facilitation and potentiation can be long-lasting, and "long-term potentiation" has been a focus of much research on the cellular basis of memory, particularly in the hippocampus, a brain cortex structure in the medial part of the temporal lobe. In humans, among other functions, the hippocampus is apparently involved in short-term memory, and analysis of the neurological correlates of learning behavior in the rat indicates that the hippocampus of the rat is also involved in memory. The following points are made by James L. McGaugh (Science 2000 287:248): 1) The author points out that the idea that synaptic mechanisms of long-term potentiation and long-term facilitation underlie memory remains a hypothesis. 2) The author points out that although studies of long-term potentiation and memory have focused on the involvement of the hippocampus, much evidence indicates that the hippocampus has only a time-limited role in the consolidation and/or stabilization of lasting memory. 3) The author points out that there are forms of memory that apparently do not involve the hippocampus and that may not use any known mechanisms of synaptic plasticity. 4) The author points out that despite theoretical conjectures, little is known about system and cellular processes mediating consolidation that continues for several hours or longer after learning, consolidation that creates lifelong memories. Concerning the above caveats, the author concludes: "These issues remain to be addressed in this new century of research on memory consolidation." 
Science http://www.sciencemag.org -------------------------------- Notes by ScienceWeek: neurotransmitter substance: Neurotransmitters are chemical substances released at the terminals of nerve axons in response to the propagation of an impulse to the end of that axon. The neurotransmitter substance diffuses into the synapse, the junction between the presynaptic nerve ending and the postsynaptic neuron, and at the membrane of the postsynaptic neuron the transmitter substance interacts with a receptor. Depending on the type of receptor, the result may be an excitatory or an inhibitory effect on the postsynaptic nerve cell. From checker at panix.com Wed May 25 00:50:29 2005 From: checker at panix.com (Premise Checker) Date: Tue, 24 May 2005 20:50:29 -0400 (EDT) Subject: [Paleopsych] SW: On the Fundamental Constants over Time Message-ID: Theoretical Physics: On the Fundamental Constants over Time http://scienceweek.com/2004/sc041119-5.htm The following points are made by K.A. Olive and Y-Z. Qian (Physics Today 2004 October): 1) Are any of nature's fundamental parameters truly constant? If not, are they fixed by the vacuum state of the Universe, or do they vary slowly in time even today? To fully answer those questions requires either an unambiguous experimental detection of a change in a fundamental quantity or a significantly deeper understanding of the underlying physics represented by those parameters. 2) At first glance, a long list of quantities usually assumed to be constant could potentially vary: Newton's constant G(subN), Boltzmann's constant k(subB), the charge of the electron e, the electric permittivity epsilon-0, and magnetic permeability mu-0, the speed of light c, Planck's constant h-bar, Fermi's constant G(subF), the fine-structure constant alpha = e^(2)/(h-bar)c and other gauge coupling constants, Yukawa coupling constants that fix the masses of quarks and leptons, and so on. 
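As a concrete illustration, the fine-structure constant alpha can be assembled numerically from standard published SI/CODATA values: every unit cancels, leaving the familiar pure number near 1/137. A minimal Python sketch (the constants below are standard reference figures, not taken from the article; note that in SI units the Gaussian form alpha = e^(2)/(h-bar)c becomes e^(2)/(4 pi epsilon-0 (h-bar) c)):

```python
import math

# Standard SI / CODATA reference values (e, h-bar, and c are exact
# under the 2019 SI redefinition; eps0 is a measured value).
e = 1.602176634e-19      # elementary charge, C
hbar = 1.054571817e-34   # reduced Planck constant, J*s
c = 299792458.0          # speed of light, m/s
eps0 = 8.8541878128e-12  # vacuum permittivity, F/m

# alpha = e^2 / (4*pi*eps0*hbar*c): all SI units cancel
alpha = e**2 / (4 * math.pi * eps0 * hbar * c)
print(alpha, 1 / alpha)  # ~0.0072974, ~137.036
```

Only this dimensionless combination is unambiguously measurable; rescaling e, h-bar, and c together in a way that leaves alpha fixed would produce no observable change.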
One must, however, distinguish what may be called a fundamental dimensionless parameter of the theory from a fundamental unit. Dimensionless parameters include gauge couplings and quantities that, like the ratio of the proton to electron mass, are combinations of dimensioned quantities whose units cancel. Their variations represent fundamental and observable effects. 3) In contrast, variations in dimensioned quantities are not unambiguously observable.(1) To point out the ambiguity is not to imply that a universe with, say, a variable speed of light is equivalent to one in which the speed of light is fixed. But no observable difference between those two universes can be uniquely ascribed to the variation in c. It thus becomes operationally meaningless to talk about measuring the variation in the speed of light or whether a variation in alpha is due to a variation in c or h-bar. It is simply a variation in alpha. 4) Lev Okun(2) provides a nice example, based on the hydrogen atom, that illustrates the inability to detect the variation in c despite the physical changes such a variation would cause. Lowering the value of c lowers the rest-mass energy of an electron, E(sube) = m(sube)c^(2). When 2E(sube) becomes smaller than the binding energy of the electron to the proton in a hydrogen atom, E(subb) = m(sube)e^(4)/2(h-bar)^(2), it becomes energetically favorable for the proton to decay to a hydrogen atom and a positron. Clearly, that's an observable effect providing evidence that some constant of nature has changed. However, the quantity that determines whether the above decay occurs is the ratio E(subb)/2E(sube) = e^(4)/4(h-bar)^(2)c^(2) = alpha^(2)/4. Therefore, one cannot say which constant among e, h-bar, and c is changing, only that the dimensionless alpha is.(3-5) References (abridged): 1. M. J. Duff, L. B. Okun, G. Veneziano, J. High Energy Phys. 2002(03), 023 (2002). 2. L. B. Okun, Sov. Phys. Usp. 34, 818 (1991) 3. B. Bertotti, L. Iess, P. 
Tortora, Nature 425, 374 (2003) 4. J. K. Webb et al., Phys. Rev. Lett. 82, 884 (1999); M. T. Murphy et al., Mon. Not. R. Astron. Soc. 327, 1208 (2001) ; J. K. Webb et al., Phys. Rev. Lett. 87, 091301 (2001); M. T. Murphy et al., Mon. Not. R. Astron. Soc. 327, 1223 (2001) 5. M. T. Murphy, J. K. Webb, V. V. Flambaum, Mon. Not. R. Astron. Soc. 345, 609 (2003) Physics Today http://www.physicstoday.org -------------------------------- Related Material: ASTROPHYSICS: ON THE FINE STRUCTURE CONSTANT The following points are made by L.L. Cowie and A. Songaila (Nature 2004 428:132): 1) The physical constants might not be so constant. If any variation in their values were measured, it could give us profound constraints on a long-sought quantized theory of gravity. So it is no surprise then that a claim(1,2) to have measured a variation over time in the value of the fine-structure constant, alpha, has led to a spate of papers incorporating this result into a wide range of theories. Chand et al(3) have used the same technique of measuring radiation from distant quasars but reach the opposite conclusion -- there is no significant variation in alpha . What should we believe? 2) The fine-structure constant was originally uncovered in studies of the closely spaced spectral lines of atoms such as hydrogen and helium. This "fine structure" reflects the quantization of electron energies within the atom. The constant is defined as the product 2(pi)e^(2)/hc (where e is the charge of the electron, h is Planck's constant and c is the speed of light) but is perhaps more familiar in its numerical form: alpha = 1/137. 3) Curiously enough, the most stringent limits on the variation of alpha come not from laboratory or astronomical measurements but from a natural nuclear reactor in Africa. About 1.8 billion years ago, in what is now the Oklo mine in Gabon, a spontaneous chain reaction began, involving the fission of the uranium isotope U-235. 
The capture of thermal neutrons from the fission process by other elements can be related to alpha . The best recent analysis of data from Oklo shows that any fractional change in the fine-structure constant (delta-alpha/alpha) has been less than 10^(-7) over nearly two billion years(4). 4) Although less sensitive, cosmological measurements can nevertheless probe much longer periods, encompassing much of the 14-billion-year life of the Universe. Gas in and between galaxies that lie along the line of sight from the Earth to quasars scatters the light from these enormously distant sources. Atoms and molecules in the gas absorb certain wavelengths, imprinting absorption lines in the radiation spectra of these objects. Often the lines of many elements, in many ionization states, might be seen from a particular patch of gas. The wavelengths at which the lines occur depend on the distance of the absorbing gas from the observer, because the radiation becomes "redshifted" to longer wavelengths, owing to the expansion of the Universe, as it travels from its source. 5) If there had been any variation in the fine-structure constant over the billions of years of the light's journey, that would have affected the energy levels in the atoms and would therefore have shifted the wavelengths of the absorption lines. We cannot measure absolute shifts in these wavelengths, because we have no way of independently knowing the distance to the source and hence the redshift that the radiation has undergone. But we can measure relative shifts of the wavelengths from all the absorption lines seen for a particular system. Absorption lines have been detected for quasars so distant that the radiation we see from them corresponds to a time when the Universe was only 6% of its present age(5). References (abridged): 1. Webb, J. K. et al. Phys. Rev. Lett. 87, 091301 (2001) 2. Murphy, M. T., Webb, J. K. & Flambaum, V. V. Mon. Not. R. Astron. Soc. 345, 609-638 (2003) 3. 
Chand, H., Srianand, R., Petitjean, P. & Aracil, B. Astron. Astrophys. (in the press); preprint at http://arxiv.org/abs/astro-ph/0402177 (2004) 4. Damour, T. & Dyson, F. Nucl. Phys. B 480, 37-54 (1996) 5. White, R. L., Becker, R. H., Fan, X. & Strauss, M. A. Astron. J. 126, 1-14 (2003) Nature http://www.nature.com/nature -------------------------------- Related Material: GENERAL PHYSICS: ON THE VALUES OF THE FUNDAMENTAL CONSTANTS Notes by ScienceWeek: In physics, the term "fundamental constants" (universal constants) refers in general to those constants that do not change throughout the Universe. For example, the charge on an electron, the speed of light in a vacuum, the Planck constant, the gravitational constant, are some of the constants considered as "fundamental constants". In 1931, the physicist F.K. Richtmyer (d. 1939), author of a textbook well-known to an entire generation of physics students, remarked: "Why should one wish to make measurements with ever increasing precision? Because the whole history of physics proves that a new discovery is quite likely to be found lurking in the next decimal place." The essential basis for this view is that accurate values of the fundamental constants are required for the critical comparison of theory with experiment, and it is only such comparisons that enable our understanding of the physical world to advance. A closely related idea is that by comparing the numerical values of the same fundamental constants obtained from experiments in the different fields of physics, the self-consistency of the basic theories of physics can be tested. The following points are made by P.J. Mohr and B.N. Taylor (Physics Today March 2001): 1) The authors point out that the values of the fundamental constants are determined by a broad range of experimental measurements and theoretical calculations involving many fields of physics and measurement science (metrology). 
The best value of even a single constant is likely to be determined by an indirect chain of information based on seemingly unrelated phenomena. For example, the value of the mass of the electron in kilograms is based mainly on the combined information from experiments that involve classical mechanical and electromagnetic measurements, the highest precision optical laser spectroscopy, experiments involving trapped electrons, and condensed matter quantum phenomena, together with condensed matter theory and extensive calculations in quantum electrodynamics. 2) Two additional features of the values of the fundamental constants are not evident from a table of numbers: a) The numbers form a tightly linked set -- very few of the values are independent of the others. In general, a change in a single item of the data on which the constants are based will change many of the values. b) The numbers are based only on the information available at a particular time. Therefore, the recommended values change over time, and the type of information from which the values are obtained changes as well. For example, in the distant past, the charge of the electron was determined by the classic oil-drop experiment, but that method is no longer competitive. Now the electron charge is determined indirectly from other constants. 3) The author points out that the basic approach to finding a self-consistent set of values for the fundamental constants is to identify the critical experiments, determine the theoretical expressions as functions of the fundamental constants that make predictions for the measured quantities, and adjust the value of the constants to achieve the best match between theory and experiment. The idea of making systematic study of potentially relevant experimental and theoretical information in order to produce a set of self-consistent values of the constants dates back to Raymond T. 
Birge, who published such a study in 1929 as the very first article in what is now the _Reviews of Modern Physics_. The Task Group on Fundamental Constants, established by the Committee on Data for Science and Technology in 1969, has published three sets of recommended values of the fundamental constants, one set in 1973, one set in 1986-1987, and the latest in 1999-2000. The most recent set is termed the "1998 recommended values", because it is based on the information available as of 31 December 1998. Physics Today http://www.physicstoday.org From checker at panix.com Wed May 25 00:51:09 2005 From: checker at panix.com (Premise Checker) Date: Tue, 24 May 2005 20:51:09 -0400 (EDT) Subject: [Paleopsych] SW: On Timescales Message-ID: Theoretical Physics: On Timescales http://scienceweek.com/2004/sb041119-5.htm The following points are made by Alexander E. Kaplan (Nature 2004 431:633): 1) Our Universe, according to the "Big Bang" theory, is approximately 14 billion years, or 5 x 10^(17) seconds (s) old. The ultimate timescale ("Planck time") of quantum cosmology --approximately 10^(-43) s, the Big Bang's birth-flash -- is an elementary grain or pixel of time, within which our normal physics of four-dimensional space and time breaks down into a much greater number of dimensions, as hypothesized by the "superstring" theory. 2) In logarithmic terms, we, with a lifetime of approximately 70 years (roughly 2 x 10^(9) s), exist on a scale that has more in common with the age of the Universe than with Planck time. We have learned how to keep track of time -- we could even regard ourselves as "Homo temporal" -- but how much of it is controlled and used by us? Although the "long" end of this scale is still only of academic interest, the "short" end is becoming a hot and bustling frontier of science and technology. The most familiar examples would be communication and computers. 
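The logarithmic claim in point 2 can be checked directly against the figures quoted above; a small Python sketch using the article's rounded values:

```python
import math

planck = 1e-43     # Planck time, s
lifetime = 2e9     # human lifetime (~70 years), s
universe = 5e17    # age of the Universe, s

# Separation in decades (factors of ten) on a logarithmic axis
decades_to_universe = math.log10(universe / lifetime)  # ~8.4
decades_to_planck = math.log10(lifetime / planck)      # ~52.3
print(decades_to_universe, decades_to_planck)
```

On a logarithmic axis, a human lifetime sits only about 8 decades from the age of the Universe but more than 52 decades from the Planck time, which is the sense in which we live far closer to cosmological than to Planckian timescales.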
In the quest for higher computer performance, one of the major parameters is the clock frequency or, inversely, the clock cycle. An old 1989 UNIX computer had a clock frequency of approximately 17 MHz; today's off-the-shelf computers have a clock frequency of almost 3 GHz, or a clock cycle of roughly 0.3 nanoseconds. 3) Lasers have been moving even faster into shorter time domains. Soon after the invention of the laser in 1960, the length (duration) of a pulse of light passed the nanosecond (ns, 10^(-9) s) and then picosecond (ps, 10^(-12) s) thresholds, and the race was on to get to even shorter pulses. The sub-picosecond and femtosecond (fs, 10^(-15) s) domain became a field of rich research, with topics such as the registration of super-fast processes, time-resolved spectroscopy, characterization of semiconductors with sub-ps relaxation times, and the control of chemical reactions with fs time resolution by powerful laser pulses. This domain also hosts the so-called Terahertz technology, which uses these pulses as, for example, a diagnostic tool to "see through" opaque materials and structures.(1-5) References: 1. Paul, P.M. et al. Science 292, 1689 (2001) 2. Hentschel, M. et al. Nature 414, 509 (2001) 3. Zewail, A. Nature 412, 279 (2001) 4. Kaplan, A.E. & Shkolnikov, P.L. Phys. Rev. Lett. 88, 74801 (2002) 5. Greene, B. The Elegant Universe (Random House, New York, 2003) Science http://www.sciencemag.org -------------------------------- Related Material: ON THE NEUROPSYCHOLOGY OF TIME The following points are made by P.A. Lewis and V. Walsh (Current Biology 2002 12:R9): 1) Immanuel Kant (1724-1804) attempted to explain the special status of time and space in perception by arguing that our understanding of the Universe is limited by the way our brains process information. Specifically, Kant noted that we perceive all events as occurring in time and space, but it is not clear whether these dimensions exist in reality or are byproducts of our mental organization. 
For the neuroscientist, the question is slightly different: Allowing that our perceptions are mental constructs and therefore often differ from, or ignore, physical reality (e.g., illusions), the question becomes, How do brain structures and processes shape these perceptions? 2) Within most sensory modalities, there is a clear starting point, because the dimensions being examined -- size, color, pitch, pressure, etc. -- can be measured using known receptor systems. For time, however, it is less clear how to approach the issue, since we do not appear to have a set of peripheral time sensors or a primary time area in the brain. So how do we come to be aware of time, and what mechanisms do we use to measure it? 3) Psychologists and physiologists have been investigating time measurement since the early 17th century, and approaches they have used fall into two main categories: a) examination of the psychophysical properties of temporal estimation data, and b) investigations aiming to isolate the necessary brain regions using focal lesions or, more recently, neuroimaging. An important fundamental concept that has emerged from this work is that of multiple neural clocks. Measurement of intervals with different durations, or for different behavioral purposes, appears to draw upon quite discrete and different mechanisms in many cases. Current Biology http://www.current-biology.com -------------------------------- Related Material: HISTORY OF PHYSICS: ON THE MEASUREMENT OF TIME Notes by ScienceWeek: Although a verbal definition of time (other than a purely operational definition) presents philosophical difficulties, from the standpoint of physics, time is the most accurately measured physical quantity. 
In general, there are two independent and fundamental time scales: a) the dynamical time scale, which is based on the regularities of the motions of the celestial bodies fixed in their orbits by gravitation; b) the atomic time scale, which is based on the characteristic frequency of electromagnetic radiation emitted or absorbed in quantum transitions between internal energy states of atoms or molecules. The first known device for indicating the time of day was the "gnomon", which appeared in approximately 3500 BC. This instrument consisted of a vertical stick or pillar, the length of the shadow cast by the stick or pillar providing an indication of the time of day. By the 8th century BC, more precise devices were in use. The earliest known sundial still preserved is an Egyptian shadow clock dating at least from the 8th century BC, which consists of a straight base with a raised crosspiece at one end. On the base is inscribed a scale of 6 time divisions. The base is placed in an east-west direction with the crosspiece at the east end in the morning and at the west end in the afternoon. The shadow of the crosspiece on the base indicates the time. The Babylonian hemispherical sundial (hemicycle), apparently invented by the astronomer Berosus in approximately 300 BC, consisted of a cubical block into which was cut a hemispherical opening. To the opening was fixed a pointer whose end lay at the center of the hemispherical space. The path traveled by the shadow of the pointer was approximately a circular arc whose length and position varied according to the seasons. An appropriate number of arcs were inscribed on the internal surface of the hemisphere, each arc divided into 12 subdivisions. Each day, reckoned from sunrise to sunset, had 12 equal intervals or "hours". Since the length of the day varied according to the season, these hours were known as "temporary hours". 
The Greeks developed and constructed sundials of considerable complexity in the 3rd and 2nd centuries BC, including instruments with either vertical, horizontal, or inclined dials, indicating time in temporary hours. The Romans also used sundials with temporary hours, and some of these Roman sundials were portable. The Arabs increased the variety of sundial designs, and at the beginning of the 13th century AD the Arabs wrote on the construction of sundials with cylindrical, conical, and other surfaces. In general, a "clock" is a device that performs regular movements in equal intervals of time, the device linked to a counting mechanism that records the number of movements. The first public clock that struck the hours was made and erected in Milan (IT) in 1335. The oldest surviving clock is that at Salisbury Cathedral, which dates from 1386. In approximately 1500, small portable clocks driven by a spring appeared, the dials with an hour hand only. The pendulum was applied as a time controller in clocks beginning in 1656, although Galileo had already suggested this in 1582. The familiar subdivision of the day into 24 hours, the hour into 60 minutes, and the minute into 60 seconds is of ancient origin, but these subdivisions came into general use in approximately 1600 AD. When the increasing accuracy of clocks led to the adoption of the "mean solar day", which contained 86,400 seconds, the "mean solar second" became the basic unit of time. The adoption of the International System (SI) second, defined on the basis of atomic phenomena, as the fundamental time unit, occurred provisionally in 1964 and finally in 1967. A second is now defined as 9,192,631,770 cycles of radiation associated with the transition between the two hyperfine levels of the ground state of the cesium-133 atom. 
The number of cycles of radiation was chosen to make the length of the defined second correspond as closely as possible to that of the previous standard, the astronomically determined second of "Ephemeris Time" (defined as 1/31,556,925.9747 of the tropical year 1900). The following points are made by J.C. Bergquist et al (Physics Today March 2001): 1) The authors point out that although a unit of time can be constructed from other physical constants, time is usually viewed as an arbitrary parameter to describe dynamics. The frequency of any periodic event, such as the mechanical oscillation of a pendulum, or the quantum oscillation of an atomic dipole, can be adopted to define the unit of time, the second. 2) For centuries, the mean solar day served as the unit of time, but Earth's period of rotation is irregular and slowly increasing. In 1956, the International Astronomical Union and the International Committee on Weights and Measures recommended adopting Ephemeris Time, based on Earth's orbital motion around the Sun, as a more accurate and stable basis for the definition of time. This recommendation was formally ratified in 1960 by the General Conference on Weights and Measures. 3) Until the definition of the second in terms of atomic time in 1967, most work in standards laboratories was devoted to developing secondary standards, such as lumped-element circuits and quartz crystals, whose resonant frequencies could be calibrated relative to Ephemeris Time. But frequencies derived from resonant transitions in atoms or molecules offer important advantages over macroscopic oscillators. Any unperturbed atomic transition is identical from atom to atom, so two clocks based on such a transition should generate the same time. Also, unlike macroscopic devices, atoms do not wear out, and as far as we know they do not change their properties over time. 
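The cesium definition quoted earlier reduces timekeeping to counting cycles; a trivial Python sketch of the arithmetic (the frequency is the SI-defined value from the text):

```python
cs_hyperfine_hz = 9_192_631_770   # SI definition of the second, cycles per second

period_s = 1.0 / cs_hyperfine_hz           # one cycle lasts ~1.09e-10 s
cycles_per_day = cs_hyperfine_hz * 86_400  # cycles in one 86,400-second day

print(period_s)         # ~1.0878e-10
print(cycles_per_day)   # 794243384928000
```

Counting that many cycles of the hyperfine radiation, with no cycle miscounted, is what an atomic clock must accomplish to generate one standard day.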
4) The basic idea of most atomic clocks is straightforward: a) First, identify a transition between two non-degenerate energy states of an atom. b) Then, create an ensemble of these atoms (e.g., in an atomic beam or storage device). c) Next, illuminate the atom with radiation from a tunable source that operates near the transition frequency. d) Sense and control the frequency where the atoms absorb maximally. e) When maximal absorption is achieved, count the cycles of the oscillator: a certain number of elapsed cycles generates a standard interval of time. But although the general idea of an atomic clock is straightforward, in practice there are a number of experimental difficulties that limit accuracy. The latest atomic clocks use a single ion to measure time with an anticipated precision of one part in 10^(18). Physics Today http://www.physicstoday.org From checker at panix.com Wed May 25 00:51:19 2005 From: checker at panix.com (Premise Checker) Date: Tue, 24 May 2005 20:51:19 -0400 (EDT) Subject: [Paleopsych] SW: On Obsessive-Compulsive Disorder Message-ID: Medical Biology: On Obsessive-Compulsive Disorder http://scienceweek.com/2004/sb041119-3.htm The following points are made by Michael A. Jenike (New Engl. J. Med. 2004 350:259): 1) Consider the following case: A 33-year-old woman has a seven-year history of hand washing for two to six hours a day, as well as urges to check doors and stoves extensively before leaving her home. Her life is restricted, and her family members are upset about her behavior. 2) The above description is that of a typical patient with an anxiety disorder called "obsessive-compulsive disorder" (OCD), which affects 2 to 3 percent of the world's population.(1) The patient has a general sense that something terrible may occur if a particular ritual is not performed, and the failure to perform a ritual may lead immediately to severe anxiety or a very uncomfortable, nagging feeling of incompleteness. 
In addition to checking and washing rituals, patients with OCD often present with persistent intrusive thoughts, extreme slowness or thoroughness, or doubts that lead to reassurance-seeking rituals. Patients with OCD commonly seek care from physicians other than psychiatrists. For example, in one study, 20 percent of patients who visited a dermatology clinic had OCD, which had been previously diagnosed in only 3 percent.(2) 3) The mean age at the onset of OCD ranges from 22 to 36 years, with the disorder developing in only 15 percent of patients older than 35 years.(3) Men tend to have an earlier age at onset than women, but women eventually catch up, and roughly 50 percent of adults with OCD are women.(3) OCD is typically a chronic disorder with a waxing and waning course.(3) With effective treatment, the severity of symptoms can be reduced, but typically some symptoms remain.(3) On average, people with OCD see three to four doctors and spend more than nine years seeking treatment before they receive a correct diagnosis. It takes an average of 17 years from the onset of OCD to obtain appropriate treatment. 4) OCD tends to be underdiagnosed and undertreated. Patients may be secretive or lack insight about their illness. Many health care providers are not familiar with the symptoms or are not trained in providing treatment. Some people may not have access to treatment, and sometimes insurance plans do not cover behavioral therapy, although the situation is improving. This lack of access or coverage is unfortunate, since earlier diagnosis and proper treatment can help patients to avoid the suffering associated with OCD and lessen the risks of related problems, such as depression, marital difficulties, and problems related to employment.(4) 5) OCD may have a genetic basis.(5) Concordance for OCD is greater among pairs of monozygotic twins (80 to 87 percent) than among pairs of dizygotic twins (47 to 50 percent). 
The prevalence of OCD is increased among the first-degree relatives of patients with OCD, as compared with the relatives of control subjects, and the age at onset in the proband (the patient, the index case) is inversely related to the risk of OCD among the relatives.(5) There is evidence of a dominant or codominant mode of transmission of OCD. 6) In rare cases, a brain insult such as encephalitis, a streptococcal infection (in children), striatal lesions (congenital or acquired), or head injury directly precedes the development of OCD. There is some evidence of a neurologic basis for OCD. For example, patients with OCD have significantly more gray matter and less white matter than normal controls, suggesting a possible developmental abnormality. Neuroimaging studies have documented consistent differences in regional brain activity between patients with OCD and control subjects, and the abnormal activity in patients with OCD shifts toward normal after either successful treatment with serotonin-reuptake inhibitors or effective behavioral therapy. References (abridged): 1. Diagnostic and statistical manual of mental disorders, 4th ed.: DSM-IV. Washington, D.C.: American Psychiatric Association, 1994 2. Fineberg NA, O'Doherty C, Rajagopal S, Reddy K, Banks A, Gale TM. How common is obsessive-compulsive disorder in a dermatology outpatient clinic? J Clin Psychiatry 2003;64:152-155 3. Maj M, Sartorius N, Okasha A, Zohar J, eds. Obsessive-compulsive disorder. 2nd ed. Chichester, England: John Wiley, 2002 4. The Expert Consensus Panel for Obsessive-Compulsive Disorder. Treatment of obsessive-compulsive disorder. J Clin Psychiatry 1997;58:Suppl 4:2-72 5. Pauls DL, Alsobrook JP II, Goodman W, Rasmussen S, Leckman JF. A family study of obsessive-compulsive disorder. Am J Psychiatry 1995;152:76-84 New Engl. J. Med. 
http://www.nejm.org From checker at panix.com Wed May 25 00:51:31 2005 From: checker at panix.com (Premise Checker) Date: Tue, 24 May 2005 20:51:31 -0400 (EDT) Subject: [Paleopsych] SW: On the Concept of Force Message-ID: Theoretical Physics: On the Concept of Force http://scienceweek.com/2004/sa041119-6.htm The following points are made by Frank Wilczek (Physics Today 2004 October): 1) Newton's second law of motion, F = ma, is the soul of classical mechanics. Like other souls, it is insubstantial. The right-hand side is the product of two terms with profound meanings. Acceleration is a purely kinematical concept, defined in terms of space and time. Mass quite directly reflects basic measurable properties of bodies (weights, recoil velocities). The left-hand side, on the other hand, has no independent meaning. Yet clearly Newton's second law is full of meaning, by the highest standard: It proves itself useful in demanding situations. Splendid, unlikely looking bridges, like the Erasmus Bridge (known as the Swan of Rotterdam), do bear their loads; spacecraft do reach Saturn. 2) The paradox deepens when we consider force from the perspective of modern physics. In fact, the concept of force is conspicuously absent from our most advanced formulations of the basic laws. It doesn't appear in Schrödinger's equation, or in any reasonable formulation of quantum field theory, or in the foundations of general relativity. Astute observers commented on this trend to eliminate force even before the emergence of relativity and quantum mechanics. 3) In his 1895 Dynamics, the prominent physicist Peter G. Tait, who was a close friend and collaborator of Lord Kelvin (1824-1907) and James Clerk Maxwell (1831-1879), wrote "In all methods and systems which involve the idea of force there is a leaven of artificiality.... 
there is no necessity for the introduction of the word "force" nor of the sense-suggested ideas on which it was originally based."(1) 4) Particularly striking, since it is so characteristic and so over-the-top, is what Bertrand Russell (1872-1970) had to say in his 1925 popularization of relativity for serious intellectuals, /The ABC of Relativity/: "If people were to learn to conceive the world in the new way, without the old notion of "force," it would alter not only their physical imagination, but probably also their morals and politics.... In the Newtonian theory of the solar system, the sun seems like a monarch whose behests the planets have to obey. In the Einsteinian world there is more individualism and less government than in the Newtonian."(2) The 14th chapter of Russell's book is entitled "The Abolition of Force." (3,4) References (abridged): 1. P. G. Tait, Dynamics, Adam & Charles Black, London (1895) 2. B. Russell, The ABC of Relativity, 5th rev. ed., Routledge, London (1997) 3. I. Newton, The Principia, I. B. Cohen, A. Whitman, trans., U. of Calif. Press, Berkeley (1999) 4. S. Vogel, Prime Mover: A Natural History of Muscle, Norton, New York (2001), p. 79 Physics Today http://www.physicstoday.org -------------------------------- Related Material: THEORETICAL PHYSICS: ON THE STRONG FORCE The following points are made by Ian Shipsey (Nature 2004 427:591): 1) The fundamental particles called quarks exist in atom-like bound states, such as those constituting protons and neutrons, the quarks held together by the strong force. The heavier varieties of quark, such as the bottom quark, can disintegrate to produce other, lighter particles, and the pattern of the decay rates is constrained, but not determined, in the theory of fundamental particles, the so-called "standard model". That pattern, especially the part involving the bottom quark, is sensitive to new physical phenomena. 
But although accurate measurements of the rates have been made, the window on new physics has been obscured. This is because the binding effect of the strong force between quarks modifies the decay rates: unless correction factors can be accurately worked out, the data cannot be fully interpreted for signs of any physics that is as yet unknown. This has been the case for almost 40 years. 2) The standard model describes all observed particles and their interactions. Particles interact by exchanging other particles that convey force. For example, in an atom, electrons bind to protons by swapping photons. This is the electromagnetic force, described by the theory of quantum electrodynamics (QED). In a proton, two types of quark, called "up" and "down", are bound together so tightly, by exchanging particles called gluons, that this is known as the "strong force". Its associated theory is quantum chromodynamics, or QCD. In the standard model there is a third force, the "weak force", which is the mediator of radioactive beta-decay. Another example of the weak force in action is the decay of a heavy bottom quark into an up quark, through the emission of a W particle (which then itself decays to an electron and an anti-neutrino). 3) Despite its success, the standard model leaves many questions unanswered. For example, although the observable Universe is made of matter and there is no evidence for significant quantities of antimatter, equal amounts of both should have been created in the Big Bang. When matter and antimatter meet, they annihilate each other: if a small asymmetry between matter and antimatter did not exist at the time of the Big Bang, there would be no matter in the Universe today. So how did that asymmetry arise? 4) If heavy particles that existed in the early Universe decayed preferentially into matter over antimatter, that could have created the matter excess. In the standard model, two types of quark, bottom and strange, do decay asymmetrically. 
But this effect alone is far too small to account for the asymmetry. However, there are many theories that predict the existence of other, massive particles that could readily produce the asymmetry. And because of the connection between asymmetry and mass, these theories also address other puzzles, such as why electrons are almost 10,000 times lighter than bottom quarks. Nature http://www.nature.com/nature -------------------------------- Related Material: ON THE CASIMIR FORCE The following points are made by E. Buks and M.L. Roukes (Nature 2002 419:119): 1) In 1948, Hendrik Casimir (1909-2000) calculated that the quantum fluctuations of an electromagnetic field, so-called zero-point fluctuations, give rise to an attractive force between objects(1). This force is a particularly striking consequence of the quantum theory of electrodynamics(2). Casimir's calculations were idealized -- he considered two perfectly conducting parallel plates at absolute-zero temperature -- but there are implications for more realistic objects. For example, Kenneth et al.(3) have extended these considerations to real-world materials. 2) Their work follows that of Boyer in 1974, who also studied the case of parallel plates but with one plate perfectly conducting and the other having infinite magnetic permeability (permeability is a measure of the material's response to an applied magnetic field). For this special case Boyer found that quantum fluctuations induce a force with the opposite sign, causing the plates to repel each other(4). Kenneth et al (3) extend understanding of the Casimir force phenomenon to the more general situation of realistic "dielectric" materials that are characterized by both their electrical permittivity (a measure of the material's response to an applied electric field) and their magnetic permeability. Their numerical results show that repulsive forces can arise in the general class of materials with high magnetic permeability. 
3) Although the Casimir effect is deeply rooted in the quantum theory of electrodynamics, there are analogous effects in classical physics. A striking example was discussed in 1836, in P. C. Caussee's L'Album du Marin (The Album of the Mariner)(5). Caussee reported a mysteriously strong attractive force that can arise between two ships floating side by side -- a force that can lead to disastrous consequences. A physical explanation for this force was offered only recently by Boersma (1996), who suggested that it originates in the radiation pressure of water waves acting differently on the opposite sides of the ships. His argument goes as follows: the spectrum of possible wave modes around the two ships forms a continuum (any arbitrary wave-vector is allowed); but between the vessels their opposing sides impose boundary conditions on the wave modes, restricting the allowed values of the component of the wave-vector that is normal to the ships' surfaces. This discreteness created in the spectrum of wave modes results in a local redistribution of modes in the region between the ships, with the consequence that there is a smaller radiation pressure between the ships than outside them. References (abridged): 1. Casimir, H. B. G. Proc. Kon. Ned. Akad. 51, 793-795 (1948). 2. Bordag, M., Mohideen, U. & Mostepanenko, V. M. Phys. Rep. 353, 1-205 (2001). 3. Kenneth, O., Klich, I., Mann, A. & Revzen, M. Phys. Rev. Lett. 89, 033001 (2002). 4. Boyer, T. H. Phys. Rev. A 9, 2078-2084 (1974). 5. Caussee, P. C. L'Album du Marin (Mantes, Charpentier, 1836). 
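Casimir's idealized parallel-plate result is described above only qualitatively. For concreteness, the standard textbook expression for the attractive pressure, P = pi^2 * hbar * c / (240 * d^4), can be evaluated directly. This is a sketch assuming Casimir's idealization (perfectly conducting plates at absolute zero); the formula itself is well established but is not quoted in the summary above.

```python
import math

HBAR = 1.054_571_817e-34  # reduced Planck constant (J s)
C = 299_792_458.0         # speed of light (m/s)

def casimir_pressure(d):
    """Magnitude of the attractive Casimir pressure (Pa) between two ideal,
    perfectly conducting parallel plates a distance d (metres) apart at T = 0."""
    return math.pi ** 2 * HBAR * C / (240.0 * d ** 4)

p_1um = casimir_pressure(1e-6)  # roughly 1.3 mPa for plates 1 micrometre apart
```

The steep 1/d^4 dependence is why the force is negligible at everyday separations yet becomes significant for micromechanical devices: halving the gap multiplies the pressure sixteenfold.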
Nature http://www.nature.com/nature From checker at panix.com Wed May 25 00:51:49 2005 From: checker at panix.com (Premise Checker) Date: Tue, 24 May 2005 20:51:49 -0400 (EDT) Subject: [Paleopsych] On Chromosomes and Sex Determination Message-ID: Evolutionary Biology: On Chromosomes and Sex Determination http://scienceweek.com/2004/sc041112-3.htm The following points are made by Brian Charlesworth (Current Biology 2004 14:R745): 1) Determination of sexual identity by genes associated with highly differentiated sex chromosomes is often assumed to be the norm, given our familiarity with the X and Y chromosomes of mammals and model organisms such as Drosophila. Even within tetrapod vertebrates, however, there is a wide diversity of sex determination mechanisms, with many examples of species with genetic sex determination but microscopically similar X and Y chromosomes, and numerous cases of environmental sex determination [1]. There is an even wider range of sexual systems in teleost fishes, with examples of self-fertilizing hermaphrodites [2], sequential hermaphrodites [3], and environmental sex determination [1]. 2) Even where sex is genetically determined, the mechanisms vary enormously, with clearly distinguishable sex chromosomes being very rare [1,4]. It is easiest to see the footprints of the evolutionary forces that drive the evolution of sex chromosomes in cases where the divergence of X and Y chromosomes has not reached its limit, with no genetic recombination between X and Y chromosomes over most of their length and a lack of functional genes on the Y chromosome [5]. The comparative genetics of sex determination systems in fish species may thus yield important insights into the evolution of sex chromosomes. 3) Despite pioneering classical genetic studies of sex determination in fish such as the medaka, the guppy, and the platyfish [1], it has been difficult to obtain detailed genetic information on sex chromosome organisation in these species. 
With modern genomic methods, however, it is now feasible, but laborious, to characterize the sex determining regions of fish genomes. Studies of chromosomal regions that determine male development in two unrelated groups of fish species show the promise of this approach. References (abridged): 1. Bull, J.J. (1983). Evolution of Sex Determining Mechanisms. (Menlo Park, CA: Benjamin Cummings) 2. Weibel, A.C., Dowling, T.E. and Turner, B.J. (1999). Evidence that an outcrossing population is a derived lineage in a hermaphroditic fish (Rivulus marmoratus). Evolution 53, 1217-1225 3. Charnov, E.L. (1982). The Theory of Sex Allocation. (Princeton, NJ: Princeton University Press) 4. Volff, J.-N. and Schartl, M. (2001). Variability of sex determination in poeciliid fishes. Genetica 111, 101-110 5. Charlesworth, B. and Charlesworth, D. (2000). The degeneration of Y chromosomes. Phil. Trans. Roy. Soc. Lond. B. 355, 1563-1572 Current Biology http://www.current-biology.com -------------------------------- Related Material: EVOLUTIONARY BIOLOGY: EVOLUTION OF PLANT SEX CHROMOSOMES The following points are made by Deborah Charlesworth (Current Biology 2004 14:R271): 1) Plant sex chromosomes are particularly interesting because they evolved much more recently than those of mammals or Drosophila -- most plants with separate sexes seem to have evolved recently from ancestors with both sex functions [1,2]. Plant sex chromosomes may thus tell us about the initial stages of the evolutionary process that has led to the massive gene loss that has occurred in Y chromosomes. 2) The sex determination system of papaya (Carica papaya) has been studied genetically since 1938, when it was established that an apparently single locus determines the male, female or hermaphrodite state. As in many familiar animal systems, including Drosophila and mammals, female papaya are the homozygous sex, while males and hermaphrodites are heterozygotes. 
In most dioecious plants -- those with separate sexes, rather than hermaphroditism -- males are also the heterozygous sex [1]. 3) Many animals and most dioecious plant species, such as Silene latifolia, have a visibly distinctive X/Y sex chromosome pair. The mammalian Y is smaller than the X, whereas the S. latifolia Y chromosome is larger than its X. Many dioecious plants, however, including papaya and kiwi fruit [3], have no such chromosome heteromorphism; in these species, the sex-determining genes seem to map to small regions of one normal-looking chromosome [3,4]. 4) To understand the papaya sex determining region, a detailed map has now been made of the papaya chromosome (chromosome LG1) carrying the sex-determining genes [5]. At present, most of the markers used are "anonymous" DNA sequence variants, not in coding sequences, and detected by the "amplified fragment length polymorphism" (AFLP) approach. As expected for a chromosome carrying the sex-determining genes, LG1 includes markers that co-segregate perfectly with sex. The finding of many such markers --225 out of 342 LG1 markers -- indicates that the sex-determining genes are spread over an extensive region that could include many genes. Physical mapping of the non-recombining genome region (obtained by sequencing bacterial artificial chromosome (BAC) clones carrying sequences corresponding to some of the markers) allowed Liu et al.[5] to estimate that the region involved in sex determination in papaya extends over roughly 4.4 Mb, only about 10% of chromosome LG1. References (abridged): 1. Westergaard, M. (1958). The mechanism of sex determination in dioecious plants. Adv. Genet. 9, 217-281 2. Charlesworth, B. and Charlesworth, D. (1978). A model for the evolution of dioecy and gynodioecy. Am. Nat. 112, 975-997 3. Harvey, C.F., Gill, C.P., Fraser, L.G., and McNeilage, M.A. (1997). Sex determination in Actinidia. 1. Sex-linked markers and progeny sex ratio in diploid A. chinensis. Sex. Plant Repro. 
10, 149-154 4. Semerikov, V., Lagercrantz, U., Tsarouhas, V., Ronnberg-Wastljung, A., Alstrom-Rapaport, C., and Lascoux, M. (2002). Genetic mapping of sex-linked markers in Salix viminalis L. Heredity 91, 293-299 5. Liu, Z., Moore, P.H., Ma, H., Ackerman, C.M., Ragiba, M., Pearl, H.M., Kim, M.S., Charlton, J.W., Yu, Q., and Stiles, J.I. et al. (2004). A primitive Y chromosome in Papaya marks the beginning of sex chromosome evolution. Nature 427, 348-352 Current Biology http://www.current-biology.com -------------------------------- Related Material: ON EVOLUTION AND SEXUAL REPRODUCTION The following points are made by Richard E. Lenski (Science 2001 294:533): 1) Why have some organisms evolved the capacity for sexual reproduction, whereas others make do with reproducing asexually? Since the time of August F. Weismann (1834-1914), most biologists have been taught that sex produces variation and thereby promotes evolutionary adaptation. But how does sex achieve this effect, and under what circumstances is it worthwhile? 2) The traditional explanation for sex is that it accelerates adaptation by allowing two or more beneficial mutations that have appeared in different individuals to recombine within the same individual. Without sexual recombination, individual clones that possess different beneficial mutations compete with one another, slowing adaptation by clonal interference. Sex, according to the traditional explanation, allows simultaneous improvements at several genetic loci, whereas multiple adaptations must occur sequentially in clonal organisms. 3) The above explanation, however, has recently come into question. First, sex imposes a 50 percent reduction in reproductive output: If a female can produce viable offspring on her own, why dilute her genetic contribution to subsequent generations by mating with a male? 
Second, the circumstances under which this kind of model provides sufficient advantage to offset the cost of sex are restrictive, requiring certain forms of selection and environmental fluctuations. Third, alternative models propose that the advantage of sex lies in eliminating deleterious mutations rather than in combining beneficial mutations. Still another hypothesis, involves an interplay between deleterious and beneficial mutations. Finally, empirical tests of these hypotheses have so far failed to produce a clear winner, so the field is ripe for significant experiments. Science http://www.sciencemag.org From checker at panix.com Wed May 25 00:52:07 2005 From: checker at panix.com (Premise Checker) Date: Tue, 24 May 2005 20:52:07 -0400 (EDT) Subject: [Paleopsych] SW: On School-Associated Student Suicides Message-ID: Public Health: On School-Associated Student Suicides http://scienceweek.com/2004/sc041105-4.htm The following points are made by J. Kaufman et al (Morb. Mort. Wkly. Rep. 2004;53:476): 1) During 1994-1999, at least 126 students carried out a homicide or suicide that was associated with a private or public school in the United States.(1) Although previous research has described students who commit school-associated homicides, little is known about student victims of suicide. To describe the psychosocial and behavioral characteristics of school-associated suicide victims, the Centers for Disease Control and Prevention (CDC) analyzed data from school and police officials. The results of that analysis indicated that among the 126 students who carried out school-associated homicides or suicides, 28 (22%) died by suicide, including eight who intentionally injured someone else immediately before killing themselves. Two (7%) of the suicide victims were reported for fighting and four (14%) for disobedient behavior in the year preceding their deaths; none were associated with a gang. 
However, potential indicators of suicide risk such as expressions of suicidal thoughts, recent social stressors, and substance abuse were common among the victims. The authors suggest these findings underscore the need for school staff to learn to recognize and respond to chronic and situational risk factors for suicide. 2) The need for safe schools has prompted considerable interest in understanding and preventing all types of lethal school-associated violence. The finding that 22% of students who carried out such violence took their own lives indicates that a sizeable proportion of lethal school-associated violence was self-directed. In addition, the finding that approximately one in four suicide victims injured or killed someone else immediately before their suicide suggests an overlap between risk for committing school-associated homicide and risk for suicide. Efforts to prevent incidents of lethal school-associated violence should address youth suicidal ideation and behavior. 3) Suicide-prevention efforts are needed not only to address the risk for school-associated violence, but also to reduce the much larger problem of self-directed violence among adolescents overall. In 2001, suicide was the third leading cause of death in the United States among youths aged 13-18 years, accounting for 11% of deaths in this age group.(2) In 2003, approximately one in 12 high school students in the US reported attempting suicide during the preceding 12 months.(3) Data from Oregon indicate that approximately 5% of adolescents treated in hospitals for injuries from a suicide attempt made that attempt at school.(4) 4) The finding that the majority of students who were school-associated suicide victims were involved in extracurricular activities suggests that these students could be familiar to school staff who might recognize warning signs. 
Although these students were unlikely to stand out (e.g., by fighting or involvement in gangs) in the manner of those who commit school-associated homicides,(1) other established risk factors for suicidal behavior were common (e.g., expression of suicidal thoughts, recent household move, and romantic breakup). These findings support the need for school-based efforts to identify and assist students who describe suicidal thoughts or have difficulty coping with social stressors. School-based prevention efforts are likely to benefit from school officials working closely with community mental health professionals to enhance the abilities of school counselors, teachers, nurses, and administrators to recognize and respond to risk factors for suicide. 5) The findings that one in four of the school-associated suicides were preceded by a recent romantic breakup and nearly one in five suicide victims were under the influence of drugs or alcohol at the time of their deaths underscore the potential importance of situational risk factors. Youth suicidal behavior often is an impulsive response to circumstances rather than a wish to die. Efforts to help students cope with stressors and avoid substance abuse are important elements of suicide-prevention strategies.(5) References (abridged): 1. Anderson M, Kaufman J, Simon TR, et al. School-associated violent deaths in the United States, 1994-1999. JAMA. 2001;286:2695-702 2. CDC. Web-based Injury Statistics Query and Reporting System (WISQARSTM). Atlanta, Georgia: U.S. Department of Health and Human Services, CDC, National Center for Injury Prevention and Control, 2004. 3. CDC. Youth Risk Behavior Surveillance--United States, 2003. In: CDC Surveillance Summaries (May 21). MMWR. 2004;53(No. SS-2) 4. CDC. Fatal and nonfatal suicide attempts among adolescents--Oregon, 1988-1993. MMWR Morb Mortal Wkly Rep. 1995;44:312-315, 321-323 5. Centers for Disease Control and Prevention. 
School health guidelines to prevent unintentional injury and violence. MMWR Recomm Rep. 2001;50(RR-22):1-73 Centers for Disease Control and Prevention http://www.cdc.gov -------------------------------- Related Material: PUBLIC HEALTH: METHODS OF SUICIDE AMONG ADOLESCENTS The following points are made by Centers for Disease Control (MMWR 2004 53:471): 1) In 2001, suicide was the third leading cause of death among persons aged 10-19 years.(1) The most common method of suicide in this age group was by firearm (49%), followed by suffocation (mostly hanging) (38%) and poisoning (7%).(1) During 1992-2001, although the overall suicide rate among persons aged 10-19 years declined from 6.2 to 4.6 per 100,000 population,(1) methods of suicide changed substantially. To characterize trends in suicide methods among persons in this age group, CDC analyzed data for persons living in the US during 1992-2001. 2) The results of that analysis indicated a substantial decline in suicides by firearm and an increase in suicides by suffocation in persons aged 10-14 and 15-19 years. Beginning in 1997, among persons aged 10-14 years, suffocation surpassed firearms as the most common suicide method. The decline in firearm suicides combined with the increase in suicides by suffocation suggests that changes have occurred in suicidal behavior among youths during the preceding decade. Public health officials should develop intervention strategies that address the challenges posed by these changes, including programs that integrate monitoring systems, etiologic research, and comprehensive prevention activities. 3) Among persons aged 10-14 years, the rate of firearm suicide decreased from 0.9 per 100,000 population in 1992 to 0.4 in 2001, whereas the rate of suffocation suicide increased from 0.5 in 1992 to 0.8 in 2001. 
Rate regression analyses indicated that, during the study period, firearm suicide rates decreased an average of approximately 8.8% annually, and suffocation suicide rates increased approximately 5.1% annually. Among persons aged 15-19 years, the firearm suicide rate declined from 7.3 in 1992 to 4.1 in 2001; the suffocation suicide rate increased from 1.9 to 2.7. Rate regression analyses indicated that, during the study period, the average annual decrease in firearm suicide rates for this age group was approximately 6.8%, and the average annual increase in suffocation suicide rates was approximately 3.7%. Poisoning suicide rates also decreased in both age groups, at an average annual rate of 13.4% among persons aged 10-14 years and 8.0% among persons aged 15-19 years. Because of the small number of suicides by poisoning, these decreases have had minimal impact on changes in the overall profile of suicide methods of youths. 4) Among persons aged 10-14 years, suffocation suicides began occurring with increasing frequency relative to firearm suicides in the early- to mid-1990s, eclipsing firearm suicides by the late 1990s. In 2001, a total of 1.8 suffocation suicides occurred for every firearm suicide among youths aged 10-14 years. Among youths aged 15-19 years, an increase in the frequency of suffocation suicides relative to firearm suicides began in the mid-1990s; however, in 2001, firearms remained the most common method of suicide in this age group, with a ratio of 0.7 suffocation suicides for every firearm suicide. 5) The findings in this report indicate that the overall suicide rate for persons aged 10-19 years in the US declined during 1992-2001 and that substantial changes occurred in the types of suicide methods used among those persons aged 10-14 and 15-19 years. Rates of suicide using firearms and poisoning decreased, whereas suicides by suffocation increased. 
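As a rough check on the figures quoted above, an endpoint-based average annual percent change can be computed from the 1992 and 2001 rates. Note that this is a simplification: CDC's reported values (8.8% and 6.8%) come from rate regression over all annual data points, so the endpoint estimate below only approximates them.

```python
def avg_annual_change(rate_start, rate_end, years):
    """Endpoint-based average annual percent change in a rate, i.e. the constant
    yearly percentage that carries rate_start to rate_end over `years` intervals."""
    return ((rate_end / rate_start) ** (1.0 / years) - 1.0) * 100.0

# Firearm suicide rates per 100,000, 1992 -> 2001 (9 one-year intervals):
ages_10_14 = avg_annual_change(0.9, 0.4, 9)  # about -8.6% per year (CDC regression: -8.8%)
ages_15_19 = avg_annual_change(7.3, 4.1, 9)  # about -6.2% per year (CDC regression: -6.8%)
```

The gap between the endpoint estimate and the regression figure is expected whenever the intermediate years do not fall exactly on a constant-percentage-decline curve.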
By the end of the period, suffocation had surpassed firearms to become the most common method of suicide death among persons aged 10-14 years. 6) The reasons for the changes in suicide methods are not fully understood. Increases in suffocation suicides and concomitant decreases in firearm suicides suggest that persons aged 10-19 years are choosing different kinds of suicide methods than in the past. Data regarding how persons choose among various methods of suicide suggest that some persons without ready access to highly lethal methods might choose not to engage in a suicidal act or, if they do engage in suicidal behavior, are more likely to survive their injuries.(4) However, certain subsets of suicidal persons might substitute other methods.(5) Substitution of methods depends on both the availability of alternatives and their acceptability. Because the means for suffocation (e.g., hanging) are widely available, the escalating use of suffocation as a method of suicide among persons aged 10-19 years implies that the acceptability of suicide by suffocation has increased substantially in this age group. References (abridged): 1. CDC. Web-based Injury Statistics Query and Reporting System (WISQARSTM). Atlanta, Georgia: U.S. Department of Health and Human Services, CDC, National Center for Injury Prevention and Control, 2004. 2. National Center for Health Statistics. Multiple cause-of-death public-use data files, 1992 through 2001. Hyattsville, Maryland: U.S. Department of Health and Human Services, CDC, 2003 3. Anderson RN, Minino AM, Fingerhut LA, Warner M, Heinen MA. Deaths: injuries, 2001. Natl Vital Stat Rep. 2004;52:1-5 4. Cook PJ. The technology of personal violence. In: Tonry M, ed. Crime and Justice: An Annual Review of Research, vol. 14. Chicago, Illinois: University of Chicago Press, 1991:1-71 5. Gunnell D, Nowers M. Suicide by jumping. Acta Psychiatrica Scandinavica. 
1997;96:1-6 Centers for Disease Control and Prevention http://www.cdc.gov -------------------------------- Related Material: ON THE RISK OF ATTEMPTED SUICIDE THROUGHOUT THE LIFESPAN The following points are made by S.R. Dube et al (J. Am. Med. Assoc. 2001 286:3089): 1) Suicide was the 8th leading cause of death in the US in 1998, and particularly high rates have been reported among young persons and older adults. Each year, more than 30,000 people in the US commit suicide, but recognition of persons who are at high risk for suicide is difficult, making efforts to prevent its occurrence problematic. In 1999, the US surgeon general brought attention to this complex public health issue by recommending that the investigation and prevention of suicide be a national priority. 2) An expanding body of research suggests that childhood trauma and adverse experiences can lead to a variety of negative health outcomes, including substance abuse, depressive disorders, and attempted suicide among adolescents and adults. Childhood sexual and physical abuse have been strongly associated with suicide attempts. A recent study of Norwegian drug addicts demonstrated that a high proportion of them attempted suicide and that an even higher proportion of drug addicts who had experienced childhood adversity had attempted suicide. In another study, low-income women with a history of alcohol problems and experience of childhood abuse and neglect were at increased risk for suicide attempts. 3) The authors conducted a study to examine the relationship between the risk of suicide attempts and adverse childhood experiences and the number of such experiences. 17,337 adult health maintenance organization members (54 percent female) were surveyed. The authors report that a strong graded relationship exists between adverse childhood experiences and risk of attempted suicide throughout the life span. 
Alcoholism, depressed affect, and illicit drug use, which are strongly associated with such experiences, appear to partially mediate this relationship. J. Am. Med. Assoc. http://www.jama.com From checker at panix.com Wed May 25 00:52:18 2005 From: checker at panix.com (Premise Checker) Date: Tue, 24 May 2005 20:52:18 -0400 (EDT) Subject: [Paleopsych] SW: On the Beginning of the Last Ice Age Message-ID: ---------- Forwarded message ---------- Date: Tue, 24 May 2005 13:53:30 -0400 (EDT) From: Premise Checker To: Premise Checker: ; Subject: SW: On the Beginning of the Last Ice Age Paleoclimate: On the Beginning of the Last Ice Age http://scienceweek.com/2004/sb041105-6.htm The following points are made by Kurt M. Cuffey (Nature 2004 431:133): 1) The relatively warm and stable climate that humanity has enjoyed for the past 10,000 years will inevitably give way to a new ice age -- a tremendous environmental transformation that is destined to bury the sites of Boston, Edinburgh and Stockholm under glacial ice. In The Day After Tomorrow, the Hollywood movie most notable for its public abuse of thermodynamics, a new ice age starts in only one week. What does such a transition look like in reality? 2) A new ice core(1) that samples the entire 3-km thickness of the north-central Greenland ice sheet provides us with an unprecedentedly rich and precise view of the onset of the most recent ice age, some 120,000 years ago. And it is from the location of greatest interest -- the North Atlantic region, where rapid climate changes have been most dramatic in the past. This achievement is the result of the efforts of the multinational North Greenland Ice Core Project (NGRIP). Individual years of snow deposition are distinguishable for events as far back as 123,000 years in the past. A few years ago, such a resolution was thought to be unattainable. 
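A rough sense of why annual-layer resolution at 123,000 years was thought unattainable comes from the classic Nye approximation, in which annual layers thin linearly toward the bed of the ice sheet. The thickness and accumulation figures below are round assumed values, not NGRIP's published parameters, and the real borehole departs from this simple model (basal melting keeps the deepest layers thicker than it predicts):

```python
import math

H = 3000.0  # assumed ice-sheet thickness, metres (the core is described as ~3 km)
a = 0.19    # assumed annual accumulation, metres of ice equivalent per year

def layer_thickness(depth_m):
    """Annual layer thickness under the Nye approximation: lambda(z) = a*(H - z)/H."""
    return a * (H - depth_m) / H

def age_at_depth(depth_m):
    """Age in years at a given depth, from integrating 1/lambda over depth."""
    return (H / a) * math.log(H / (H - depth_m))

# At 2,900 m depth this toy model predicts annual layers only ~6 mm thick at an
# age of ~54,000 years -- counting individual years that deep demands
# millimetre-scale sampling, which is why the resolution seemed out of reach.
```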
3) The story of how the impossible became possible would itself make a fine movie, complete with dramatic scenes of enlightenment in the mass spectrometry lab as isotope analyses reveal past climatic changes. In the early 1990s, two deep ice cores recovered from central Greenland yielded a detailed environmental history extending 100,000 years back in time, through the entirety of the last glacial climate(2). These showed that climate was extremely unstable. Within the space of a few decades, the North Atlantic region could evidently warm by 10 °C, while smaller changes of temperature and moisture occurred over wide areas of the planet(3). 4) Unfortunately, structural disturbance of the deepest ice(4) prevented these cores from revealing either the onset of the glacial period or events during the preceding warm interglacial period, known as the "Eemian", about 130,000 to 120,000 years ago. Initial reports to the contrary(5), highlighting apparent climate instabilities within the Eemian, were clearly mistaken, as the new report demonstrates(1). Yet the degree of climate stability during the Eemian is of intense interest: the Eemian was slightly warmer than the world is now, providing an analogue for a possible future climate warmed by atmospheric pollution. References (abridged): 1. North Greenland Ice Core Project members Nature 431, 147-151 (2004) 2. Hammer, C., Mayewski, P. A., Peel, D. & Stuiver, M. (eds) J. Geophys. Res. 102 (C12), 26317-26886 (1997) 3. Severinghaus, J. P. & Brook, E. J. Science 286, 930-934 (1999) 4. Chappellaz, J., Brook, E., Blunier, T. & Malaize, B. J. Geophys. Res. 102, 26547-26557 (1997) 5. Greenland Ice-Core Project members Nature 364, 203-208 (1993) Nature http://www.nature.com/nature -------------------------------- Related Material: OCEAN SCIENCE: GLOBAL WARMING AND THE NEXT ICE AGE The following points are made by A.J. Weaver and C. 
Hillaire-Marcel (Science 2004 304:400): 1) A popular idea in the media is that human-induced global warming will cause another ice age. But where did this idea come from? Several recent magazine articles (1-3) report that abrupt climate change was prevalent in the recent geological history of Earth and that there was some early (albeit controversial) evidence from the last interglacial -- thought to be slightly warmer than preindustrial times (4) -- that abrupt climate change was the norm (5). Consequently, the articles postulate a sequence of events that goes something like this: If global warming were to boost the hydrological cycle, enhanced freshwater discharge into the North Atlantic would shut down the AMO (Atlantic Meridional Overturning), the North Atlantic component of global ocean overturning circulation. This would result in downstream cooling over Europe, leading to the slow growth of glaciers and the onset of the next ice age. 2) This view prevails in the popular press despite a relatively solid understanding of glacial inception and growth. What glacier formation and growth require is, of course, a change in seasonal incoming solar radiation (warmer winters and colder summers) associated with changes in Earth's axial tilt, its longitude of perihelion, and the precession of its elliptical orbit around the Sun. These small changes must then be amplified by feedback from reflected light associated with enhanced snow/ice cover, vegetation associated with the expansion of tundra, and greenhouse gases associated with the uptake (not release) of carbon dioxide and methane. 3) Several modeling studies provide outputs to support this progression. These studies show that with elevated levels of carbon dioxide, such as those that exist today, no permanent snow can exist over land in August (as temperatures are too warm), a necessary prerequisite for the growth of glaciers in the Northern Hemisphere. 
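The three orbital ingredients named above have well-known approximate periods: about 41,000 years for axial tilt, about 23,000 years for precession, and about 100,000 years for eccentricity. The sketch below superposes them as toy sinusoids purely to show how slow, quasi-periodic forcing emerges; the amplitudes and phases are invented, not taken from a real orbital solution:

```python
import math

# Approximate Milankovitch periods, in thousands of years (kyr).
OBLIQUITY_KYR = 41.0
PRECESSION_KYR = 23.0
ECCENTRICITY_KYR = 100.0

def toy_orbital_forcing(t_kyr):
    """Unit-less toy 'summer insolation anomaly'; relative amplitudes are invented."""
    return (1.0 * math.sin(2 * math.pi * t_kyr / OBLIQUITY_KYR)
            + 0.7 * math.sin(2 * math.pi * t_kyr / PRECESSION_KYR)
            + 0.4 * math.sin(2 * math.pi * t_kyr / ECCENTRICITY_KYR))

# The cycles drift in and out of phase, producing intervals of weak summer
# insolation -- the windows in which snow can survive August and glaciers grow.
curve = [toy_orbital_forcing(t) for t in range(0, 201, 5)]  # last 200 kyr, 5-kyr steps
```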
These same models show that if the AMO were to be artificially shut down, there would be regions of substantial cooling in and around the North Atlantic. Berger and Loutre (2002) specifically noted that "most CO2 scenarios led to an exceptionally long interglacial from 5000 years before the present to 50,000 years from now ... with the next glacial maximum in 100,000 years. Only for CO2 concentrations less than 220 ppmv was an early entrance into glaciation simulated." They further argued that the next glaciation would be unlikely to occur for another 50,000 years. 4) Although most paleoclimatologists would agree that the past is unlikely to provide true analogs of the future, past climate synopses are valuable for confronting the results of modeling experiments or for illustrating global warming. A reduction of the AMO due to a global warming-induced increase in freshwater supplies to the North Atlantic is often discussed in relation to a short event that occurred some 8200 years ago (8.2 ka). During this event, one of the largest glacial lakes of the Laurentide Ice Sheet, Lake Ojibway, drained into the North Atlantic through Hudson Strait, quickly releasing enormous quantities of fresh water. However, unequivocal evidence that this event resulted in a substantial reduction of the AMO has apparently not yet been obtained. References (abridged): 1. S. Rahmstorf, New Scientist 153, 26 (8 February 1997) 2. W. H. Calvin, Atlantic Monthly 281, 47 (January 1998) 3. B. Lemley, Discover 23, 35 (September 2002) 4. IPCC, Climate Change 2001, The Scientific Basis. Contribution of Working Group I to the Third Scientific Assessment Report of the Intergovernmental Panel on Climate Change, J. T. Houghton et al., Eds. (Cambridge Univ. Press, Cambridge, 2001) 5. 
GRIP Project Members, Nature 364, 203 (1993) Science http://www.sciencemag.org -------------------------------- Related Material: ON THE QUATERNARY ICE AGE The following points are made by Karl Sabbagh (citation below): 1) Over millennia, the world has experienced a series of ice ages, periods when large areas of the surface were covered with a sheet of ice thousands of meters deep. The most widely accepted theory for the cause of the ice ages is based on the fact that there have been changes over millions of years in the tilt of the Earth's axis and in the circularity of its orbit. Those changes have been cyclical and led to long periods when some parts of the Earth's surface received less light and heat from the Sun. 2) Scientists recognize five major ice ages, the first about 2 billion years ago, then three more about 600 million, 400 million, and 300 million years ago, and the last beginning 1.7 million years ago and finishing only about 10,000 years ago. The last, the Quaternary Ice Age, was a period when the diversity of life-forms on land not covered by ice was greater than it had been during the previous ice ages and during which there were major effects on plants and animals as the temperature dropped and ice sheets and glaciers formed from water that had evaporated from the sea. As the cold took its grip, not overnight but over thousands of years, the relationship between land and sea changed, caused partly by climate, partly by the weight of ice pressing down on the land. 3) The process that froze water vapor into ice and deposited it on the land effectively depleted the sea of water that would otherwise condense into rain, run into rivers, and flow on to the sea. As more water froze and water continued to evaporate from the ocean, the sea level dropped, revealing areas that had been covered by water and expanding the total land area. 
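The scale of this sea-level effect can be checked with back-of-envelope arithmetic. The ice volume and ocean area below are round figures commonly quoted for the last glacial maximum, not values from Sabbagh's text:

```python
# Rough sea-level drop from water locked up in expanded ice sheets.
EXTRA_ICE_VOLUME_KM3 = 52e6  # assumed extra land-ice volume at the glacial maximum
ICE_DENSITY_RATIO = 0.917    # density of ice relative to fresh water
OCEAN_AREA_KM2 = 3.61e8      # approximate modern ocean surface area

water_equivalent_km3 = EXTRA_ICE_VOLUME_KM3 * ICE_DENSITY_RATIO
sea_level_drop_m = water_equivalent_km3 / OCEAN_AREA_KM2 * 1000.0  # km -> m
# Yields a drop on the order of 130 m -- easily the right magnitude for the
# exposed land and enlarged islands described in the text.
```

This ignores changes in ocean area and isostatic adjustment; the point is only that the mechanism is comfortably large enough to join island groups into one land mass.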
For an area like the Hebrides, a series of separate islands before the Ice Age, this process would have turned them into much larger areas of land, possibly even into one interconnecting land mass. Adapted from: Karl Sabbagh: A Rum Affair: A True Story of Botanical Fraud. Da Capo Press, 2001, p.23. More information at: http://www.amazon.com/exec/obidos/ASIN/0306810603/scienceweek From ljohnson at solution-consulting.com Wed May 25 04:26:51 2005 From: ljohnson at solution-consulting.com (Lynn D. Johnson, Ph.D.) Date: Tue, 24 May 2005 22:26:51 -0600 Subject: [Paleopsych] Adaptiveness of depression Message-ID: <4293FE8B.7020609@solution-consulting.com> Apropos of our recent discussion on the survival value of PTSD, here is an interesting expert interview from medscape psychiatry on depression. FYI, the 1925 birth cohort had a lifetime prevalence of 4% for depression; today it appears to be 17%; these guys say 25% but I think that is high. In any case, it is an epidemic. LJ http://www.medscape.com/viewarticle/503013_print (registration required) Expert Interview Mood Disorders at the Turn of the Century: An Expert Interview With Peter C. Whybrow, MD Medscape Psychiatry & Mental Health. 2005; 10 (1): ©2005 Medscape Editor's Note: On behalf of Medscape, Randall F. White, MD, interviewed Peter C. Whybrow, MD, Director of the Semel Institute for Neuroscience & Human Behavior and Judson Braun Distinguished Professor and Executive Chair, Department of Psychiatry and Biobehavioral Sciences, David Geffen School of Medicine, University of California, Los Angeles. Medscape: The prevalence of mood disorders has risen in every generation since the early 20th century. In your opinion, what is behind this? Peter C. Whybrow, MD: I think that's a very interesting statistic. 
You see that in the latest cohort, the one that was studied with the birth date of 1966, depression has grown quite dramatically compared with those who were born in cohorts before then. So anxiety now starts somewhere in the 20s or 30s, and depression is also rising, so the prevalence now for most people in America is somewhere around 25%. Medscape: Lifetime prevalence? Dr. Whybrow: Yes, lifetime prevalence. I think it's a socially driven phenomenon; obviously there's not a change in the genome. I think we've been diagnosing depression fairly accurately for a fair length of time now, since the 1960s, and the people who were born in the 1960s are now being diagnosed with depression at a higher rate than those who were born earlier and who were diagnosed in the 1960s, 1970s, and 1980s. Medscape: And is this true of both unipolar and bipolar mood disorders? Dr. Whybrow: It's particularly true of unipolar disorder. There has been a growth in interest in bipolar disorder, partly I think because of the zeal of certain authors who have seen cyclothymia and other oscillating mood states as part of a larger spectrum of manic-depressive illness, much as Kraepelin did. And I think that has expanded the prevalence of the bipolar spectrum to probably 5% or 6%, but the major increase in prevalence, I think, would be diagnosed as unipolar depression. Medscape: Do you think that unipolar and bipolar mood disorders are distinct, or do they lie on a continuum that includes all the mood disorders in our nosology? Dr. Whybrow: The way I see it is they are both phenotypes, but they have considerable overlap. If you think about them from the standpoint of the psychobiology of the illnesses, I think they are distinct. Medscape: Why are women more vulnerable than men to depression? Dr. Whybrow: My own take on that is that it is driven by the change in hormones that you see in women. 
Estrogen and progesterone, plus thyroid and steroids, are the most potent modulators of central nervous system activity. If you tie the onset of symptoms to menarche or the sexual differentiation in boys and girls, you find that prior to that age, which is now around 11 to 13, boys and girls have essentially the same depressive symptoms. As adolescence appears, you find this extraordinary increase in young women who complain of depressive symptoms of one sort or another. Boys tend to have other things, of course, particularly what some consider socially deviant behavior. The other interesting thing one sees quite starkly in bipolar illness is that, after the age of 50 or so, when menopause occurs, severe bipolar illness can actually improve. I've seen that on many occasions. Also interesting and relevant to the hormonal thesis is the way in which thyroid hormone and estrogen compete for each other at some of the promoter regions of various genes. In the young woman who has bipolar disease -- this is pertinent to the work I have done over the years with thyroid hormone -- and who becomes hypothyroid, estrogen becomes much more available in the central nervous system, and you then see the malignant forms of bipolar illness. Almost all the individuals who have severe rapid cycling between the ages of about 20 and 40 are women -- high proportions, something like 85% to 90%. So this all suggests that there is an interesting modulation of whatever it is that permits severe affective illnesses in women by the fluxes of estrogen and progesterone. There is, of course, a whole other component of this, which is a social concern in regard to the way in which women are treated in our society compared with men. 
It's far different from when I was first a psychiatrist back in the 1960s and 1970s; women are much more independent now, but there is still some element of depression being driven in part by the social context of their lives, both in family and in the workplace, where they still do not enjoy absolute equality. Medscape: Why would the genotype for mood disorders persist in the human genome? What aspect of the phenotype is adaptive? Dr. Whybrow: I think you have to divide that question into 2. If we talk about bipolar disease and unipolar disease separately, it makes more sense. If we take bipolar disease first, I think there is much in the energy and excitement of what one considers hypomania that codes for excellence, or at least engagement, in day-to-day activities. One of the things that I've learned over the years is that if you find an individual who has severe manic depressive disease, and you look at the family, the family is very often of a higher socioeconomic level than one might anticipate. And again, if you look at a family that is socially successful, you very often find within it persons who have bipolar disease. So I think that there is a group of genes that codes for the way in which we are able to engage emotionally in life. I talk about this in one of my books called A Mood Apart [1] -- how emotion as the vehicle of expression and understanding of other people's expression is what goes wrong in depression and in mania. I think that those particular aspects of our expression are rooted in the same set of genes that codes for what we consider to be pathology in manic-depressive disease. But the interesting part is that if you have, let's say for sake of easy discussion, 5 or 6 genes that code for extra energy (in the dopamine pathway and receptors, and maybe in fundamental cellular activity), you turn out to be a person who sleeps rather little, who has a positive temperament, and so on. 
If you get another 1 or 2 of them, you end up in the insane asylum. So I think there is an extraordinary value to those particular genetic pools. So you might say that if you took the bipolar genes out of the human behavioral spectrum, then you would find that probably we would still be -- this is somewhat hyperbolic -- wandering around munching roots and so on. Medscape: What about unipolar disorder? Dr. Whybrow: Unipolar is different, I think. This was described in some detail in A Mood Apart .[1] I think that the way in which depression comes about is very much like the way in which vision fails, as an analogy. We can lose vision in all sorts of ways. We can lose it because of distortions of the cornea or the lens; the retina can be damaged; we can have a stroke in the back of our heads; or there can be a pituitary tumor. I think it's analogous in the way depression strikes: from white tract disease in old age to the difficulties you might have following a bout of influenza, plus the sensitivity we have to social rank and all other social interactions. Those things can precipitate a dysregulation of the emotional apparatus, much as you disturb the visual apparatus, and you end up with a person who has this depressive phenomenon. In some individuals, it repeats itself because of a particular biological predisposition. In 30% or 40% of individuals, it's a one-time event, which is tied to the circumstances under which they find themselves. So I think that's a very distinct phenomenon compared with bipolar illness. In its early forms, depression is a valuable adaptive mechanism because it does accurately focus on the fact that the world is not progressing positively, so the person is driven to do something about it. Sometimes the person is incapable of doing something about it, or the adaptive mechanisms are not sufficient, and then you get this phenomenon of depression. 
I know that there have been speculations about the fact that this then leads to the person going to the edge of the herd and dying because he or she doesn't eat, et cetera, and it relieves the others of the burden of caring for him or her. And that might have been true years ago, when we lived in small hunter-gatherer groups. But of course today we profess, not always with much success, to have a humanitarian slant, and we take care of people who have these phenomena, bringing them back into the herd as they get better. So I think that it's a bit of a stretch to say that this has evolutionary advantage because it allows people to go off and die, but I think that in the bipolar spectrum there are probably genes that code for extra activity, which we consider to have social value. Medscape: Let's go back to bipolar disorder. The current approach to finding new treatments for bipolar disorder is to try medications that were developed for other conditions, especially epilepsy. Do we know enough yet about this disease to attempt to develop specific treatments de novo? Dr. Whybrow: Well, we're getting there, but we're not really yet in that position. You're quite right, most of the treatments have come from either empirical observations, such as lithium, or because there is this peculiar association between especially temporal lobe epilepsy and bipolar disease, both in terms of phenomena and also conceptually. But we do know more and more about the inositol cycle, we do know something about some of the genes that code for bipolar illness, so I think we will eventually be able to untangle the pathophysiology of some of the common forms. I think the problem is that there are multiple genes that contribute to the way in which the cells dysregulate, so it's probably not that we'll find one cause of bipolar illness and therefore be able to find one medication as we've found for diabetes, for example. 
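Whybrow's earlier remark, that five or six "extra energy" genes are advantageous while one or two more tip a person into illness, is in effect the classical liability-threshold picture from quantitative genetics. A toy simulation of that picture follows; the locus count, allele frequency, and thresholds are all invented for illustration:

```python
import random

random.seed(0)

N_LOCI = 8          # hypothetical biallelic "energy" loci (2 alleles each)
ALLELE_FREQ = 0.3   # invented population frequency of the energy-raising allele
ENERGETIC = 5       # 5-6 risk alleles: advantageous high-energy temperament
ILL = 7             # 7 or more: crosses the illness threshold

def risk_alleles():
    """Number of energy-raising alleles one simulated individual carries."""
    return sum(random.random() < ALLELE_FREQ for _ in range(2 * N_LOCI))

population = [risk_alleles() for _ in range(100_000)]
share_energetic = sum(ENERGETIC <= k < ILL for k in population) / len(population)
share_ill = sum(k >= ILL for k in population) / len(population)
# More people land in the advantageous band than past the illness threshold --
# the asymmetry that could keep such alleles circulating in the gene pool.
```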
Medscape: Let's talk about your new book American Mania: When More Is Not Enough , in which you use mania as a metaphor to describe aspects of American culture.[2] Dr. Whybrow: The metaphor came because of the work I've done over the years with bipolar illness. In the late 1990s, when I first moved to California, I was struck by the extraordinary stock-market bubble and the excitement that went on. You may remember those days: people were convinced that this would go on forever, that we'd continue to wake up to the sweet smell of money and happiness for the rest of our days. This seemed to me to have much in common with the delusional systems one sees in mania. So the whole thing in my mind began to be an interesting metaphor for what was happening in the country, as one might see it through the eyes of a psychiatrist watching an individual patient. I began to investigate this, and what particularly appealed to me was that the activity that you see in mania eventually breaks, and of course this is exactly what happened with the bubble. Then all sorts of recriminations begin, and you enter into a whole new phase. The book takes off from there, but it has also within it a series of discussions about the way in which the economic model that we have adopted, which is, of course, Adam Smith's economic model, is based upon essentially a psychological theory. If you know anything about Adam Smith, you'll know that he was a professor of moral philosophy, which you can now translate into being a psychologist. And his theory was really quite simple. On one hand, he saw self-interest, which these days we might call survival, curiosity, and social ambition as the 3 engines of wealth creation. But at the same time, he recognized that without social constraints, without the wish we have, all of us, to be loved by other people (therefore we're mindful of not doing anything too outrageous), the self-interest would run away to greed. 
But he convinced himself and a lot of other people that if individuals were free to do what they wished and do it best, then the social context in which they lived would keep them from running away to greed. If you look at that model, which is what the book American Mania: When More Is Not Enough does, you can see that we now live in a much different environment from Smith's, and the natural forces to which he gave the interesting name "the invisible hand," and which made all this come out for the benefit of society as a whole, have changed dramatically. It's losing its grip, in fact, because we now live in a society that is extremely demand-driven, and we are constantly rewarded for individual endeavor or self-interest through our commercial success, but very little for the social investment that enables us to have strong unions with other people. This is particularly so in the United States. So you can see that things have shifted dramatically and have gone into, if you go back to the metaphor, what I believe is sort of a chronic frenzy, a manic-like state, in which most people are now working extremely hard. Many of them are driven by debt; other people are driven by social ambition, but to the destruction very often of their own personal lives and certainly to the fabric of the community in which they live. References 1. Whybrow PC. A Mood Apart: The Thinker's Guide to Emotion and Its Disorders. New York, NY: HarperCollins; 1997. 2. Whybrow PC. American Mania: When More Is Not Enough. New York, NY: WW Norton; 2005. 
URL: From shovland at mindspring.com Wed May 25 14:31:59 2005 From: shovland at mindspring.com (Steve Hovland) Date: Wed, 25 May 2005 07:31:59 -0700 Subject: [Paleopsych] Adaptiveness of depression Message-ID: <01C560FB.D72C7540.shovland@mindspring.com> Since this guy is an MD, one can assign a high probability to the possibility that he knows almost nothing about nutrition, including the importance of healthy fats in the diet. He mentions hormones without any consideration of what the body uses to build hormones. Steve Hovland www.stevehovland.net -----Original Message----- From: Lynn D. Johnson, Ph.D. [SMTP:ljohnson at solution-consulting.com] Sent: Tuesday, May 24, 2005 9:27 PM To: The new improved paleopsych list Subject: [Paleopsych] Adaptiveness of depression Apropos of our recent discussion on the survival value of PTSD, here is an interesting expert interview from medscape psychiatry on depression. FYI, the 1925 birth cohort had a lifetime prevalance of 4% for depression; today it appears to be 17%; these guys say 25% but I think that is high. In any case, it is an epidemic. LJ http://www.medscape.com/viewarticle/503013_print (registration required) Expert Interview Mood Disorders at the Turn of the Century: An Expert Interview With Peter C. Whybrow, MD Medscape Psychiatry & Mental Health. 2005; 10 (1): ?2005 Medscape Editor's Note: On behalf of Medscape, Randall F. White, MD, interviewed Peter C. Whybrow, MD, Director of the Semel Institute for Neuroscience & Human Behavior and Judson Braun Distinguished Professor and Executive Chair, Department of Psychiatry and Biobehavioral Sciences, David Geffen School of Medicine, University of California, Los Angeles. Medscape: The prevalence of mood disorders has risen in every generation since the early 20th century. In your opinion, what is behind this? Peter C. Whybrow, MD: I think that's a very interesting statistic. 
My own sense is that, especially in recent years, it can be explained by changes in the environment. The demand-driven way in which we live these days is tied to the increasing levels of anxiety and depression. You see that in the latest cohort, the one that was studied with the birth date of 1966, depression has grown quite dramatically compared with those who were born in cohorts before then. So anxiety now starts somewhere in the 20s or 30s, and depression is also rising, so the prevalence now for most people in America is somewhere around 25%. Medscape: Lifetime prevalence? Dr. Whybrow: Yes, lifetime prevalence. I think it's a socially driven phenomenon; obviously there's not a change in the genome. I think we've been diagnosing depression fairly accurately for a fair length of time now, since the 1960s, and the people who were born in the 1960s are now being diagnosed with depression at a higher rate than those who were born earlier and who were diagnosed in the 1960s, 1970s, and 1980s. Medscape: And is this true of both unipolar and bipolar mood disorders? Dr. Whybrow: It's particularly true of unipolar disorder. There has been a growth in interest in bipolar disorder, partly I think because of the zeal of certain authors who have seen cyclothymia and other oscillating mood states as part of a larger spectrum of manic-depressive illness, much as Kraepelin did. And I think that has expanded the prevalence of the bipolar spectrum to probably 5% or 6%, but the major increase in prevalence, I think, would be diagnosed as unipolar depression. Medscape: Do you think that unipolar and bipolar mood disorders are distinct, or do they lie on a continuum that includes all the mood disorders in our nosology? Dr. Whybrow: The way I see it is they are both phenotypes, but they have considerable overlap. If you think about them from the standpoint of the psychobiology of the illnesses, I think they are distinct. 
Medscape: Why are women more vulnerable than men to depression? Dr. Whybrow: My own take on that is that it is driven by the change in hormones that you see in women. Estrogen and progesterone, plus thyroid and steroids, are the most potent modulators of central nervous system activity. If you tie the onset of symptoms to menarche or the sexual differentiation in boys and girls, you find that prior to that age, which is now around 11 to 13, boys and girls have essentially the same depressive symptoms. As adolescence appears, you find this extraordinary increase in young women who complain of depressive symptoms of one sort or another. Boys tend to have other things, of course, particularly what some consider socially deviant behavior. The other interesting thing one sees quite starkly in bipolar illness is that, after the age of 50 or so, when menopause occurs, severe bipolar illness can actually improve. I've seen that on many occasions. Also interesting and relevant to the hormonal thesis is the way in which thyroid hormone and estrogen compete for each other at some of the promoter regions of various genes. In the young woman who has bipolar disease -- this is pertinent to the work I have done over the years with thyroid hormone -- and who becomes hypothyroid, estrogen becomes much more available in the central nervous system, and you then see the malignant forms of bipolar illness. Almost all the individuals who have severe rapid cycling between the ages of about 20 and 40 are women -- high proportions, something like 85% to 90%. So this all suggests that there is an interesting modulation of whatever it is that permits severe affective illnesses in women by the fluxes of estrogen and progesterone. There is, of course, a whole other component of this, which is a social concern in regard to the way in which women are treated in our society compared with men. 
It's far different from when I was first a psychiatrist back in the 1960s and 1970s; women are much more independent now, but there is still some element of depression being driven in part by the social context of their lives, both in family and in the workplace, where they still do not enjoy absolute equality. Medscape: Why would the genotype for mood disorders persist in the human genome? What aspect of the phenotype is adaptive? Dr. Whybrow: I think you have to divide that question into 2. If we talk about bipolar disease and unipolar disease separately, it makes more sense. If we take bipolar disease first, I think there is much in the energy and excitement of what one considers hypomania that codes for excellence, or at least engagement, in day-to-day activities. One of the things that I've learned over the years is that if you find an individual who has severe manic depressive disease, and you look at the family, the family is very often of a higher socioeconomic level than one might anticipate. And again, if you look at a family that is socially successful, you very often find within it persons who have bipolar disease. So I think that there is a group of genes that codes for the way in which we are able to engage emotionally in life. I talk about this in one of my books called A Mood Apart [1] -- how emotion as the vehicle of expression and understanding of other people's expression is what goes wrong in depression and in mania. I think that those particular aspects of our expression are rooted in the same set of genes that codes for what we consider to be pathology in manic-depressive disease. But the interesting part is that if you have, let's say for sake of easy discussion, 5 or 6 genes that code for extra energy (in the dopamine pathway and receptors, and maybe in fundamental cellular activity), you turn out to be a person who sleeps rather little, who has a positive temperament, and so on. 
If you get another 1 or 2 of them, you end up in the insane asylum.

So I think there is an extraordinary value to those particular genetic pools. You might say that if you took the bipolar genes out of the human behavioral spectrum, then you would find that probably we would still be -- this is somewhat hyperbolic -- wandering around munching roots and so on.

Medscape: What about unipolar disorder?

Dr. Whybrow: Unipolar is different, I think. This was described in some detail in A Mood Apart.[1] I think that the way in which depression comes about is very much like the way in which vision fails, as an analogy. We can lose vision in all sorts of ways. We can lose it because of distortions of the cornea or the lens; the retina can be damaged; we can have a stroke in the back of our heads; or there can be a pituitary tumor.

I think it's analogous in the way depression strikes: from white-matter disease in old age to the difficulties you might have following a bout of influenza, plus the sensitivity we have to social rank and all other social interactions. Those things can precipitate a dysregulation of the emotional apparatus, much as you disturb the visual apparatus, and you end up with a person who has this depressive phenomenon. In some individuals, it repeats itself because of a particular biological predisposition. In 30% or 40% of individuals, it's a one-time event, which is tied to the circumstances under which they find themselves. So I think that's a very distinct phenomenon compared with bipolar illness.

In its early forms, depression is a valuable adaptive mechanism because it does accurately focus on the fact that the world is not progressing positively, so the person is driven to do something about it. Sometimes the person is incapable of doing something about it, or the adaptive mechanisms are not sufficient, and then you get this phenomenon of depression.
I know that there have been speculations about the fact that this then leads to the person going to the edge of the herd and dying because he or she doesn't eat, et cetera, and it relieves the others of the burden of caring for him or her. And that might have been true years ago, when we lived in small hunter-gatherer groups. But of course today we profess, not always with much success, to have a humanitarian slant, and we take care of people who have these phenomena, bringing them back into the herd as they get better.

So I think that it's a bit of a stretch to say that this has evolutionary advantage because it allows people to go off and die, but I think that in the bipolar spectrum there are probably genes that code for extra activity, which we consider to have social value.

Medscape: Let's go back to bipolar disorder. The current approach to finding new treatments for bipolar disorder is to try medications that were developed for other conditions, especially epilepsy. Do we know enough yet about this disease to attempt to develop specific treatments de novo?

Dr. Whybrow: Well, we're getting there, but we're not really yet in that position. You're quite right, most of the treatments have come from either empirical observations, such as lithium, or because there is this peculiar association between especially temporal lobe epilepsy and bipolar disease, both in terms of phenomena and also conceptually. But we do know more and more about the inositol cycle, and we do know something about some of the genes that code for bipolar illness, so I think we will eventually be able to untangle the pathophysiology of some of the common forms.

I think the problem is that there are multiple genes that contribute to the way in which the cells dysregulate, so it's probably not that we'll find one cause of bipolar illness and therefore be able to find one medication, as we've found for diabetes, for example.
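Whybrow's genetic argument above is, in effect, a liability-threshold model: a moderate dose of "energy" variants is advantageous, while one or two more tip the phenotype into pathology, which would let the variants persist under selection. A purely illustrative sketch of that idea (the gene count, variant frequency, and thresholds below are invented for the toy model, not taken from the interview):

```python
import random

def simulate(n_people=100_000, n_genes=8, p_variant=0.3,
             benefit_range=(4, 6), seed=1):
    """Toy liability-threshold model of the '5 or 6 genes' idea.

    Each person independently carries each of `n_genes` hypothetical
    'energy' variants with probability `p_variant`. A variant count
    inside `benefit_range` is scored as an advantageous temperament;
    a count above its upper bound is scored as pathology.
    """
    rng = random.Random(seed)
    lo, hi = benefit_range
    advantaged = pathological = 0
    for _ in range(n_people):
        count = sum(rng.random() < p_variant for _ in range(n_genes))
        if count > hi:
            pathological += 1
        elif count >= lo:
            advantaged += 1
    return advantaged / n_people, pathological / n_people

adv, path = simulate()
# Many people land in the advantageous band; comparatively few cross
# the threshold into pathology.
print(f"advantaged: {adv:.3f}, pathological: {path:.4f}")
```

The only point of the toy model is the shape of the result: variants that are common and individually mildly beneficial occasionally stack up past a threshold in a small minority, which is one way a "disease genotype" can remain in the gene pool.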
Medscape: Let's talk about your new book American Mania: When More Is Not Enough, in which you use mania as a metaphor to describe aspects of American culture.[2]

Dr. Whybrow: The metaphor came because of the work I've done over the years with bipolar illness. In the late 1990s, when I first moved to California, I was struck by the extraordinary stock-market bubble and the excitement that went on. You may remember those days: people were convinced that this would go on forever, that we'd continue to wake up to the sweet smell of money and happiness for the rest of our days. This seemed to me to have much in common with the delusional systems one sees in mania.

So the whole thing in my mind began to be an interesting metaphor for what was happening in the country, as one might see it through the eyes of a psychiatrist watching an individual patient. I began to investigate this, and what particularly appealed to me was that the activity that you see in mania eventually breaks, and of course this is exactly what happened with the bubble. Then all sorts of recriminations begin, and you enter into a whole new phase.

The book takes off from there, but it also has within it a series of discussions about the way in which the economic model that we have adopted, which is, of course, Adam Smith's economic model, is based upon essentially a psychological theory. If you know anything about Adam Smith, you'll know that he was a professor of moral philosophy, which you can now translate into being a psychologist. And his theory was really quite simple. On one hand, he saw self-interest (which these days we might call survival), curiosity, and social ambition as the 3 engines of wealth creation. But at the same time, he recognized that without social constraints, without the wish we have, all of us, to be loved by other people (therefore we're mindful of not doing anything too outrageous), the self-interest would run away to greed.
But he convinced himself and a lot of other people that if individuals were free to do what they wished and do it best, then the social context in which they lived would keep them from running away to greed.

If you look at that model, which is what the book American Mania: When More Is Not Enough does, you can see that we now live in a much different environment from Smith's, and the natural forces to which he gave the interesting name "the invisible hand," and which made all this come out for the benefit of society as a whole, have changed dramatically. It's losing its grip, in fact, because we now live in a society that is extremely demand-driven, and we are constantly rewarded for individual endeavor or self-interest through our commercial success, but very little for the social investment that enables us to have strong unions with other people. This is particularly so in the United States.

So you can see that things have shifted dramatically and have gone into, if you go back to the metaphor, what I believe is sort of a chronic frenzy, a manic-like state, in which most people are now working extremely hard. Many of them are driven by debt; other people are driven by social ambition, but very often to the destruction of their own personal lives and certainly of the fabric of the community in which they live.

References

1. Whybrow PC. A Mood Apart: The Thinker's Guide to Emotion and Its Disorders. New York, NY: HarperCollins; 1997.
2. Whybrow PC. American Mania: When More Is Not Enough. New York, NY: WW Norton; 2005.
From shovland at mindspring.com Wed May 25 14:45:07 2005 From: shovland at mindspring.com (Steve Hovland) Date: Wed, 25 May 2005 07:45:07 -0700 Subject: [Paleopsych] Lipids, depression and suicide Message-ID: <01C560FD.AC38C080.shovland@mindspring.com> http://www.biopsychiatry.com/lipidsmood.htm Epidemiological data - The prevalence of depression seems to have increased continuously since the beginning of the century. Though different factors most probably contribute to this evolution, it has been suggested that it could be related to a change in alimentary patterns in the Western world, in which the polyunsaturated omega-3 fatty acids contained in fish, game and vegetables have been largely replaced by the polyunsaturated omega-6 fatty acids of cereal oils. From ljohnson at solution-consulting.com Wed May 25 15:48:42 2005 From: ljohnson at solution-consulting.com (Lynn D. Johnson, Ph.D.) Date: Wed, 25 May 2005 09:48:42 -0600 Subject: [Paleopsych] Adaptiveness of depression In-Reply-To: <01C560FB.D72C7540.shovland@mindspring.com> References: <01C560FB.D72C7540.shovland@mindspring.com> Message-ID: <42949E5A.7040402@solution-consulting.com> I agree with Steve here; the issue of dietary change is ignored. He also downplays the social factors some, continuing to emphasize the medical approach to treatment. If diet and/or social change are implicated, then more Prozac is merely a finger in the dike. Lynn Steve Hovland wrote: >Since this guy is an MD, one can assign a high >probability to the possibility that he knows almost >nothing about nutrition, including the importance >of healthy fats in the diet. > >He mentions hormones without any consideration >of what the body uses to build hormones. > >Steve Hovland >www.stevehovland.net > > >-----Original Message----- >From: Lynn D. Johnson, Ph.D. 
[SMTP:ljohnson at solution-consulting.com] >Sent: Tuesday, May 24, 2005 9:27 PM >To: The new improved paleopsych list >Subject: [Paleopsych] Adaptiveness of depression > >Apropos of our recent discussion on the survival value of PTSD, here is >an interesting expert interview from Medscape Psychiatry on depression. >FYI, the 1925 birth cohort had a lifetime prevalence of 4% for >depression; today it appears to be 17%; these guys say 25% but I think >that is high. In any case, it is an epidemic. >LJ > >http://www.medscape.com/viewarticle/503013_print >(registration required) > >Expert Interview > >Mood Disorders at the Turn of the Century: An Expert Interview With >Peter C. Whybrow, MD > >Medscape Psychiatry & Mental Health. 2005; 10 (1): ©2005 Medscape > >Editor's Note: >On behalf of Medscape, Randall F. White, MD, interviewed Peter C. >Whybrow, MD, Director of the Semel Institute for Neuroscience & Human >Behavior and Judson Braun Distinguished Professor and Executive Chair, >Department of Psychiatry and Biobehavioral Sciences, David Geffen School >of Medicine, University of California, Los Angeles. > >Medscape: The prevalence of mood disorders has risen in every generation >since the early 20th century. In your opinion, what is behind this? > >Peter C. Whybrow, MD: I think that's a very interesting statistic. My >own sense is that, especially in recent years, it can be explained by >changes in the environment. The demand-driven way in which we live these >days is tied to the increasing levels of anxiety and depression. You see >that in the latest cohort, the one that was studied with the birth date >of 1966, depression has grown quite dramatically compared with those who >were born in cohorts before then. So anxiety now starts somewhere in the >20s or 30s, and depression is also rising, so the prevalence now for >most people in America is somewhere around 25%. > >Medscape: Lifetime prevalence? > >Dr. Whybrow: Yes, lifetime prevalence. 
> >I think it's a socially driven phenomenon; obviously there's not a >change in the genome. I think we've been diagnosing depression fairly >accurately for a fair length of time now, since the 1960s, and the >people who were born in the 1960s are now being diagnosed with >depression at a higher rate than those who were born earlier and who >were diagnosed in the 1960s, 1970s, and 1980s. > >Medscape: And is this true of both unipolar and bipolar mood disorders? > >Dr. Whybrow: It's particularly true of unipolar disorder. There has been >a growth in interest in bipolar disorder, partly I think because of the >zeal of certain authors who have seen cyclothymia and other oscillating >mood states as part of a larger spectrum of manic-depressive illness, >much as Kraepelin did. And I think that has expanded the prevalence of >the bipolar spectrum to probably 5% or 6%, but the major increase in >prevalence, I think, would be diagnosed as unipolar depression. > >Medscape: Do you think that unipolar and bipolar mood disorders are >distinct, or do they lie on a continuum that includes all the mood >disorders in our nosology? > >Dr. Whybrow: The way I see it is they are both phenotypes, but they have >considerable overlap. If you think about them from the standpoint of the >psychobiology of the illnesses, I think they are distinct. >_______________________________________________ >paleopsych mailing list >paleopsych at paleopsych.org >http://lists.paleopsych.org/mailman/listinfo/paleopsych > > > > From shovland at mindspring.com Wed May 25 16:03:11 2005 From: shovland at mindspring.com (Steve Hovland) Date: Wed, 25 May 2005 09:03:11 -0700 Subject: [Paleopsych] Adaptiveness of depression Message-ID: <01C56108.94552D90.shovland@mindspring.com> This is not to say that MD's are stupid, it's just that most are miseducated. 
Years ago I worked in Medical Records in a hospital and typed hundreds of History and Physical reports. If a patient did not present with the symptoms of a classical nutritional disease such as beriberi, she was considered to be well nourished. The craziest example I ever saw was when an endocrinologist gave the "well-nourished" label to a woman who stated that her diet consisted of 20 candy bars per day. A lack of training in nutrition is a problem, but the whole emphasis of their training is a bigger problem: the focus on treating symptoms. Nicholas Perricone, MD, read Adelle Davis' "Let's Eat Right to Keep Fit" before he went to medical school because he was fresh out of the Army, suffering from fatigue, and seeking solutions. My library includes a number of books by MDs who have somehow gotten turned on to treating causes by using nutrition etc. Steve Hovland www.stevehovland.net -----Original Message----- From: Lynn D. Johnson, Ph.D. 
[SMTP:ljohnson at solution-consulting.com] >Sent: Tuesday, May 24, 2005 9:27 PM >To: The new improved paleopsych list >Subject: [Paleopsych] Adaptiveness of depression > >Apropos of our recent discussion on the survival value of PTSD, here is >an interesting expert interview from medscape psychiatry on depression. >FYI, the 1925 birth cohort had a lifetime prevalance of 4% for >depression; today it appears to be 17%; these guys say 25% but I think >that is high. In any case, it is an epidemic. >LJ > >http://www.medscape.com/viewarticle/503013_print >(registration required) > >Expert Interview > >Mood Disorders at the Turn of the Century: An Expert Interview With >Peter C. Whybrow, MD > >Medscape Psychiatry & Mental Health. 2005; 10 (1): ?2005 Medscape > >Editor's Note: >On behalf of Medscape, Randall F. White, MD, interviewed Peter C. >Whybrow, MD, Director of the Semel Institute for Neuroscience & Human >Behavior and Judson Braun Distinguished Professor and Executive Chair, >Department of Psychiatry and Biobehavioral Sciences, David Geffen School >of Medicine, University of California, Los Angeles. > >Medscape: The prevalence of mood disorders has risen in every generation >since the early 20th century. In your opinion, what is behind this? > >Peter C. Whybrow, MD: I think that's a very interesting statistic. My >own sense is that, especially in recent years, it can be explained by >changes in the environment. The demand-driven way in which we live these >days is tied to the increasing levels of anxiety and depression. You see >that in the latest cohort, the one that was studied with the birth date >of 1966, depression has grown quite dramatically compared with those who >were born in cohorts before then. So anxiety now starts somewhere in the >20s or 30s, and depression is also rising, so the prevalence now for >most people in America is somewhere around 25%. > >Medscape: Lifetime prevalence? > >Dr. Whybrow: Yes, lifetime prevalence. 
> >I think it's a socially driven phenomenon; obviously there's not a >change in the genome. I think we've been diagnosing depression fairly >accurately for a fair length of time now, since the 1960s, and the >people who were born in the 1960s are now being diagnosed with >depression at a higher rate than those who were born earlier and who >were diagnosed in the 1960s, 1970s, and 1980s. > >Medscape: And is this true of both unipolar and bipolar mood disorders? > >Dr. Whybrow: It's particularly true of unipolar disorder. There has been >a growth in interest in bipolar disorder, partly I think because of the >zeal of certain authors who have seen cyclothymia and other oscillating >mood states as part of a larger spectrum of manic-depressive illness, >much as Kraepelin did. And I think that has expanded the prevalence of >the bipolar spectrum to probably 5% or 6%, but the major increase in >prevalence, I think, would be diagnosed as unipolar depression. > >Medscape: Do you think that unipolar and bipolar mood disorders are >distinct, or do they lie on a continuum that includes all the mood >disorders in our nosology? > >Dr. Whybrow: The way I see it is they are both phenotypes, but they have >considerable overlap. If you think about them from the standpoint of the >psychobiology of the illnesses, I think they are distinct. > >Medscape: Why are women more vulnerable than men to depression? > >Dr. Whybrow: My own take on that is that it is driven by the change in >hormones that you see in women. Estrogen and progesterone, plus thyroid >and steroids, are the most potent modulators of central nervous system >activity. If you tie the onset of symptoms to menarche or the sexual >differentiation in boys and girls, you find that prior to that age, >which is now around 11 to 13, boys and girls have essentially the same >depressive symptoms. As adolescence appears, you find this extraordinary >increase in young women who complain of depressive symptoms of one sort >or another. 
Boys tend to have other things, of course, particularly what >some consider socially deviant behavior. > >The other interesting thing one sees quite starkly in bipolar illness is >that, after the age of 50 or so, when menopause occurs, severe bipolar >illness can actually improve. I've seen that on many occasions. > >Also interesting and relevant to the hormonal thesis is the way in which >thyroid hormone and estrogen compete for each other at some of the >promoter regions of various genes. In the young woman who has bipolar >disease -- this is pertinent to the work I have done over the years with >thyroid hormone -- and who becomes hypothyroid, estrogen becomes much >more available in the central nervous system, and you then see the >malignant forms of bipolar illness. Almost all the individuals who have >severe rapid cycling between the ages of about 20 and 40 are women -- >high proportions, something like 85% to 90%. So this all suggests that >there is an interesting modulation of whatever it is that permits severe >affective illnesses in women by the fluxes of estrogen and progesterone. > >There is, of course, a whole other component of this, which is a social >concern in regard to the way in which women are treated in our society >compared with men. It's far different from when I was first a >psychiatrist back in the 1960s and 1970s; women are much more >independent now, but there is still some element of depression being >driven in part by the social context of their lives, both in family and >in the workplace, where they still do not enjoy absolute equality. > >Medscape: Why would the genotype for mood disorders persist in the human >genome? What aspect of the phenotype is adaptive? > >Dr. Whybrow: I think you have to divide that question into 2. If we talk >about bipolar disease and unipolar disease separately, it makes more sense. 
> >If we take bipolar disease first, I think there is much in the energy >and excitement of what one considers hypomania that codes for >excellence, or at least engagement, in day-to-day activities. One of the >things that I've learned over the years is that if you find an >individual who has severe manic depressive disease, and you look at the >family, the family is very often of a higher socioeconomic level than >one might anticipate. And again, if you look at a family that is >socially successful, you very often find within it persons who have >bipolar disease. > >So I think that there is a group of genes that codes for the way in >which we are able to engage emotionally in life. I talk about this in >one of my books called A Mood Apart [1] -- how emotion as the vehicle of >expression and understanding of other people's expression is what goes >wrong in depression and in mania. I think that those particular aspects >of our expression are rooted in the same set of genes that codes for >what we consider to be pathology in manic-depressive disease. But the >interesting part is that if you have, let's say for sake of easy >discussion, 5 or 6 genes that code for extra energy (in the dopamine >pathway and receptors, and maybe in fundamental cellular activity), you >turn out to be a person who sleeps rather little, who has a positive >temperament, and so on. If you get another 1 or 2 of them, you end up in >the insane asylum. > >So I think there is an extraordinary value to those particular genetic >pools. So you might say that if you took the bipolar genes out of the >human behavioral spectrum, then you would find that probably we would >still be -- this is somewhat hyperbolic -- wandering around munching >roots and so on. > >Medscape: What about unipolar disorder? > >Dr. Whybrow: Unipolar is different, I think. 
This was described in some >detail in A Mood Apart.[1] I think that the way in which depression >comes about is very much like the way in which vision fails, as an >analogy. We can lose vision in all sorts of ways. We can lose it because >of distortions of the cornea or the lens; the retina can be damaged; we >can have a stroke in the back of our heads; or there can be a pituitary >tumor. > >I think it's analogous in the way depression strikes: from white tract >disease in old age to the difficulties you might have following a bout >of influenza, plus the sensitivity we have to social rank and all other >social interactions. Those things can precipitate a dysregulation of the >emotional apparatus, much as you disturb the visual apparatus, and you >end up with a person who has this depressive phenomenon. In some >individuals, it repeats itself because of a particular biological >predisposition. In 30% or 40% of individuals, it's a one-time event, >which is tied to the circumstances under which they find themselves. So >I think that's a very distinct phenomenon compared with bipolar illness. > >In its early forms, depression is a valuable adaptive mechanism because >it does accurately focus on the fact that the world is not progressing >positively, so the person is driven to do something about it. Sometimes >the person is incapable of doing something about it, or the adaptive >mechanisms are not sufficient, and then you get this phenomenon of >depression. I know that there have been speculations about the fact that >this then leads to the person going to the edge of the herd and dying >because he or she doesn't eat, et cetera, and it relieves the others of >the burden of caring for him or her. And that might have been true years >ago, when we lived in small hunter-gatherer groups.
But of course today >we profess, not always with much success, to have a humanitarian slant, >and we take care of people who have these phenomena, bringing them back >into the herd as they get better. > >So I think that it's a bit of a stretch to say that this has >evolutionary advantage because it allows people to go off and die, but I >think that in the bipolar spectrum there are probably genes that code >for extra activity, which we consider to have social value. > >Medscape: Let's go back to bipolar disorder. The current approach to >finding new treatments for bipolar disorder is to try medications that >were developed for other conditions, especially epilepsy. Do we know >enough yet about this disease to attempt to develop specific treatments >de novo? > >Dr. Whybrow: Well, we're getting there, but we're not really yet in that >position. You're quite right, most of the treatments have come from >either empirical observations, such as lithium, or because there is this >peculiar association between epilepsy, especially temporal lobe epilepsy, and >bipolar disease, both in terms of phenomena and also conceptually. But >we do know more and more about the inositol cycle, we do know something >about some of the genes that code for bipolar illness, so I think we >will eventually be able to untangle the pathophysiology of some of the >common forms. > >I think the problem is that there are multiple genes that contribute to >the way in which the cells dysregulate, so it's probably not that we'll >find one cause of bipolar illness and therefore be able to find one >medication as we've found for diabetes, for example. > >Medscape: Let's talk about your new book American Mania: When More Is >Not Enough, in which you use mania as a metaphor to describe aspects of >American culture.[2] > >Dr. Whybrow: The metaphor came because of the work I've done over the >years with bipolar illness.
In the late 1990s, when I first moved to >California, I was struck by the extraordinary stock-market bubble and >the excitement that went on. You may remember those days: people were >convinced that this would go on forever, that we'd continue to wake up >to the sweet smell of money and happiness for the rest of our days. This >seemed to me to have much in common with the delusional systems one sees >in mania. > >So the whole thing in my mind began to be an interesting metaphor for >what was happening in the country, as one might see it through the eyes >of a psychiatrist watching an individual patient. I began to investigate >this, and what particularly appealed to me was that the activity that >you see in mania eventually breaks, and of course this is exactly what >happened with the bubble. Then all sorts of recriminations begin, and >you enter into a whole new phase. > >The book takes off from there, but it has also within it a series of >discussions about the way in which the economic model that we have >adopted, which is, of course, Adam Smith's economic model, is based upon >essentially a psychological theory. If you know anything about Adam >Smith, you'll know that he was a professor of moral philosophy, which >you can now translate into being a psychologist. And his theory was >really quite simple. On one hand, he saw self-interest (which these days >we might call survival), curiosity, and social ambition as the 3 engines >of wealth creation. But at the same time, he recognized that without >social constraints, without the wish we have, all of us, to be loved by >other people (therefore we're mindful of not doing anything too >outrageous), the self-interest would run away to greed. But he convinced >himself and a lot of other people that if individuals were free to do >what they wished and do it best, then the social context in which they >lived would keep them from running away to greed.
> >If you look at that model, which is what the book American Mania: When >More Is Not Enough does, you can see that we now live in a much >different environment from Smith's, and the natural force to which he >gave the interesting name "the invisible hand," and which made all this >come out for the benefit of society as a whole, has changed >dramatically. It's losing its grip, in fact, because we now live in a >society that is extremely demand-driven, and we are constantly rewarded >for individual endeavor or self-interest through our commercial success, >but very little for the social investment that enables us to have strong >unions with other people. This is particularly so in the United States. > >So you can see that things have shifted dramatically and have gone into, >if you go back to the metaphor, what I believe is sort of a chronic >frenzy, a manic-like state, in which most people are now working >extremely hard. Many of them are driven by debt; other people are driven >by social ambition, but very often to the destruction of their own >personal lives and certainly of the fabric of the community in which >they live. > > > References > > 1. Whybrow PC. A Mood Apart: The Thinker's Guide to Emotion and Its > Disorders. New York, NY: HarperCollins; 1997. > 2. Whybrow PC. American Mania: When More Is Not Enough. New York, > NY: WW Norton; 2005.
_______________________________________________ paleopsych mailing list paleopsych at paleopsych.org http://lists.paleopsych.org/mailman/listinfo/paleopsych From checker at panix.com Wed May 25 18:44:29 2005 From: checker at panix.com (Premise Checker) Date: Wed, 25 May 2005 14:44:29 -0400 (EDT) Subject: [Paleopsych] SW: On Hereditary Disease Risks Message-ID: Medical Ethics: On Hereditary Disease Risks http://scienceweek.com/2004/sa041105-4.htm The following points are made by K. Offit et al (J. Am. Med. Assoc. 2004 292:1469): 1) Genetic tests for specific adult-onset disorders (eg, breast and colon cancer) are now commercially available, and results of research studies for genetic polymorphisms that predict drug effects, for example, response to statin therapy, have recently been published.(1) The failure to warn family members about their hereditary disease risks has resulted in three malpractice suits against physicians in the US.(2-4) 2) This past year, the obligation, if any, to warn family members of identification of a cancer gene mutation was the topic of discussion among professional societies and advocacy groups. Concerns have been raised regarding the conflict between the physician's ethical obligations to respect the privacy of genetic information vs the potential legal liabilities resulting from the physician's failure to notify at-risk relatives. In many cases, state and federal statutes that bear on the issue of "duty to warn" of inherited health risk are also in conflict. 3) Consider the following case: A 40-year-old woman presents for a follow-up consultation. She has a family history of breast cancer, heart disease, and Alzheimer disease.
At her first visit, the physician had counseled her and provided genetic testing and now tells the patient that she was found to have an inherited BRCA2 mutation that markedly increases her risk for developing breast cancer and/or ovarian cancer. The testing laboratory has also suggested a "genomic profile" that will predict risk for Alzheimer disease as well as sensitivity to a variety of drugs. The patient's sister, who is sitting in the waiting room, has a 50% chance of inheriting this same BRCA2 mutation. Although the physician had discussed the importance of familial risk notification before testing, the patient declines the strong recommendation that she share the results of her genetic tests with her sister and asks that this information be kept completely confidential. Does this physician have an obligation to tell the patient's sister that she, too, may have inherited these genetic predispositions? 4) An expanded national discussion of the ethical and legal implications of genetic risk notification is required to guide practitioners of "molecular medicine". Fear of loss of privacy among susceptible populations could discourage families from seeking access to potentially life-saving genetic testing. In the genomic era, clinical testing will be offered to predict disease occurrence, as well as sensitivities to drugs or environmental exposures. Because the laws of Mendel will continue to apply to these new markers of genetic risk, the issues surrounding familial notification will loom even larger. The increasing availability of DNA testing will require greater emphasis on informed consent as a process of communication and education, so as to better facilitate the translation of genomic medicine to clinical practice. 
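The familial arithmetic behind this case follows directly from Mendel's laws: for an autosomal dominant mutation such as BRCA2 carried on one of the patient's two alleles, each full sibling has a 1-in-2 chance of having inherited the same mutation. A minimal sketch of how that carrier probability combines with penetrance into an overall risk figure (the penetrance and baseline numbers below are illustrative placeholders, not values from the article):

```python
# Sketch of the Mendelian risk arithmetic behind familial notification.
# Assumes an autosomal dominant mutation inherited from one parent, so a
# full sibling of a known carrier has a 1/2 prior probability of carrying it.

def sibling_carrier_probability() -> float:
    """Prior probability that a full sibling carries the same mutation."""
    return 0.5

def overall_risk(p_carrier: float, penetrance: float, baseline: float) -> float:
    """Lifetime disease risk, mixing carrier and non-carrier outcomes."""
    return p_carrier * penetrance + (1.0 - p_carrier) * baseline

# Hypothetical numbers for illustration only:
p = sibling_carrier_probability()               # 0.5
risk = overall_risk(p, penetrance=0.45, baseline=0.12)
print(round(risk, 3))                           # 0.5*0.45 + 0.5*0.12 = 0.285
```

The same two-step calculation (carrier probability, then penetrance) is why the authors note that the laws of Mendel will keep the familial-notification question alive for any new genetic marker.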
5) The authors conclude: While the findings of case law and the state and federal statutes that bear on the issue of "duty to warn" of inherited health risk are still being defined, health care professionals have a responsibility to encourage but not to coerce the sharing of genetic information in families, while respecting the boundaries imposed by the law and by the ethical practice of medicine.(5) References (abridged): 1. Chasman DI, Posada D, Subrahmanyan L, et al. Pharmacogenetic study of statin therapy and cholesterol reduction. JAMA. 2004;291:2821-2827 2. Pate v Threlkel, 661 So 2d 278 (Fla 1995) 3. Safer v Estate of Pack, 677 A2d 1188 (NJ App), appeal denied, 683 A2d 1163 (NJ 1996) 4. Molloy v Meier, Nos. C9-02-1821, C9-02-1837 (Minn 2004) 5. Beauchamp TL, Childress JF. Principles of Biomedical Ethics. New York, NY: Oxford University Press; 1994 J. Am. Med. Assoc. http://www.jama.com -------------------------------- Related Material: SCIENCE POLICY: ON THE REGULATION OF HUMAN GENETIC TESTS Notes by ScienceWeek: As more and more human genes related to diseases are identified, the commercialization of tests designed to detect the presence of such genes in individuals gathers momentum and introduces a spectrum of problems that will most likely be of considerable importance in the coming decades. The following points are made by Neil A. Holtzman (Science 1999 286:409): 1) The Human Genome Project has engendered "genohype", from early pronouncements that our destiny is in our genes to recent declarations that new discoveries will minimize or prevent the appearance of diseases in individuals altogether. As a result of these claims, commercial enterprises have sprung up to identify the presence of susceptibility-conferring genes in individuals. As early as 1995, over 50 biotechnology companies were developing or providing tests to diagnose genetic disorders or to predict the risk of their future occurrence.
Common complex disorders, usually disorders of adult onset such as Alzheimer's disease and breast and colon cancer, make up the single largest category for which tests are under commercial development. 2) The "educational" materials prepared by companies for physicians and patients considering genetic tests frequently make exaggerated claims for predictive tests for common complex disorders. In particular, there are exaggerated claims for (a) clinical validity (i.e., the probability of a detectable susceptibility-conferring gene occurring in those who would get the disease, and the probability that those with a susceptibility-conferring gene would actually get the disease) and (b) utility (i.e., how a positive test result could help people cope with future disease). 3) This situation has arisen because of the double standard which the US Food and Drug Administration (FDA) uses to regulate in vitro clinical diagnostic devices: If a genetic test is to be marketed as a kit, the manufacturer of the test kit must first demonstrate its clinical validity to the satisfaction of the FDA, and scrutiny by the FDA of the labeling of the test kit can ensure the utility of the test is not exaggerated. But if, on the other hand, a test is marketed as a clinical laboratory service, the laboratory providing the service is not even required to notify the FDA. The author states that the FDA admits it has the authority to regulate clinical laboratory tests marketed as services, but (according to the author) the FDA says it does not have the resources to carry out such regulation. 4) With respect to statements of clinical validity and utility, the FDA regulation of genetic tests marketed as services should be as stringent as the regulation of tests marketed as kits.
Science http://www.sciencemag.org -------------------------------- Notes by ScienceWeek: During the past 2 decades in the US, one relatively new feature of the scientific enterprise has been a mushrooming of the number of prominent academic researchers in molecular biology who have become high-level corporate research managers. Often these high-level research managers maintain ties to the universities that originally hosted their research, ties, for example, that may involve patent partnerships. Where patents are involved, in many cases, the research underlying the patents was financed by US federal funds, while the patents are now the basis for extensive private commercial ventures. This situation was made possible by explicit US Congressional legislation in the 1980s. An example of the various controversies follows: In May 1998, in a long article, the journal *Science* presented a detailed profile of Allen Roses, a neurologist at Duke University (US), who in 1997 became head of genetics research at Glaxo Wellcome, Roses overseeing a US$50 million genetics research budget that is part of the Glaxo Wellcome US$2 billion annual research and development effort. Prior to his move to Glaxo Wellcome, Roses achieved prominence as the head of a research group at Duke University that discovered a gene variant that apparently increases a carrier's risk of developing the common late-onset form of Alzheimer's disease (the most common form) -- a discovery that was initially ignored by many researchers in the field but is now considered to be of some importance. After assuming his new position at Glaxo Wellcome, Roses apparently set about creating an international "network of clinicians" to provide data and clinical material to Glaxo in its "hunt for disease-related genes", with the evident interest of Glaxo Wellcome that of patenting key discoveries (including genes) and manufacturing drugs based on the new discoveries. 
The focus is on pathologies such as asthma, cardiovascular disease, mental depression, schizophrenia, inflammatory bowel disease, dermatitis, and susceptibility to infectious agents -- in other words, a wide array of human diseases with possible genetic involvements. The essential idea is apparently to build up detailed indexes of variations in human genes and use these indexes to scan the genomes of patients or volunteers. Concerning Alzheimer's disease, the approach of Roses and his group has been criticized because their marker for Alzheimer's disease, a gene variant (called _APOE4_) of an *apolipoprotein gene called _APOE_, does not appear to cause the disease directly but appears to only increase the risk. Many researchers believe that other genes and other proteins, particularly so-called *beta-amyloid proteins, are involved in Alzheimer's disease. The Allen Roses profile in *Science* appeared 15 May 1998. On 28 August 1998, J.F. Merz et al, in a letter to the journal *Science*, pointed out that the article about Roses and his advocacy of wide genetic testing for Alzheimer's disease did not mention that Roses is named as an inventor on a patent claiming exclusive rights to the detection of the _APOE_ *allele, a patent now held in exclusive license from Duke University and Roses by a company called AthenaDiagnostics, and that AthenaDiagnostics has attempted to stop anyone anywhere from performing _APOE_ genotyping for the purpose of diagnosing Alzheimer's disease. In other words, AthenaDiagnostics effectively owns the gene that may be one of the causes of Alzheimer's disease, and no one can use that gene (which when present appears as part of human chromosome 19) for diagnostic purposes without paying a royalty fee. 
Considering the advocacy by Roses of genetic testing for Alzheimer's disease, J.F. Merz et al stated: "This situation raises ethical concerns, not the least of which is that those who benefit financially from the performance of genetic testing and screening could be said to have a conflict of interest that might lead to aggressive promotion of those tests." On 18 September 1998, Allen Roses responded to the J.F. Merz et al letter in *Science*, and also to other related commentary by J.F. Merz et al in *Nature Medicine*. In summary, Roses criticized his critics for "incorrect notions and opinions", stated that it is not true that he receives 50% of the licensing fees for the _APOE_ gene, stated that he was being attacked personally without relevant facts, and that he had not been able to respond to the criticisms in *Nature Medicine* because that journal does not entertain responses. On 9 October 1998, A.J. Ivinson, the editor of *Nature Medicine*, published a letter in *Science* in response to the Roses letter, Ivinson stating that *Nature Medicine* does sometimes invite responses, and that during a face-to-face discussion Roses was specifically invited to respond to the *Nature Medicine* text and he failed to do so, "making his comments regarding our policy on responses all the more surprising." Finally, we return to 19 February 1998, to a paper published in the *New England Journal of Medicine* by a large research group that included the Roses research team, in which the authors reviewed clinical and autopsy _APOE_ data on 2188 patients referred for evaluation of dementia at various institutions, and in which the authors (Allen Roses among them) conclude: "APOE genotyping does not provide sufficient sensitivity or specificity to be used alone as a diagnostic test for Alzheimer's disease, but when used in combination with clinical criteria, it improves the specificity of the diagnosis."
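The "sensitivity or specificity" caveat in that conclusion is, at bottom, Bayes' rule: unless a marker is both highly sensitive and highly specific, its positive predictive value collapses when the condition is uncommon in the tested population. A minimal sketch (the sensitivity, specificity, and prevalence figures below are invented for illustration and are not taken from the study):

```python
# Why modest sensitivity/specificity makes a poor stand-alone diagnostic test:
# positive predictive value (PPV) via Bayes' rule.

def positive_predictive_value(sensitivity: float,
                              specificity: float,
                              prevalence: float) -> float:
    """P(disease | positive test)."""
    true_positives = sensitivity * prevalence
    false_positives = (1.0 - specificity) * (1.0 - prevalence)
    return true_positives / (true_positives + false_positives)

# Invented example numbers: even at 65% sensitivity and 68% specificity,
# a 10% pre-test prevalence yields a PPV of only about 18%.
ppv = positive_predictive_value(0.65, 0.68, 0.10)
print(round(ppv, 3))  # 0.184
```

This is the quantitative reason a risk-raising allele can be informative "in combination with clinical criteria" while being inadequate as a stand-alone test.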
All the backbiting and considerations of conflict of interest aside, the last paragraph is the essence of this brouhaha: Given that the direct and unique involvement of the _APOE_ gene in Alzheimer's disease has not been demonstrated, should _APOE_ genotyping (and consequent labeling of people as "Alzheimer's prone") be widely used? The bioethicists say no, that given the uncertainties in diagnosis, the social dangers are too great; while Allen Roses, Glaxo Wellcome, and others say yes, that genotyping can substantially improve clinical diagnostics. Science 1998 282:239 Science 1998 281:1805 Science 1998 281:1288 Science 1998 280:1001 New Engl. J. Med. 1998 338:505 Science http://www.sciencemag.org New Engl. J. Med. http://www.nejm.org -------------------------------- Notes by ScienceWeek: apolipoprotein gene: An apolipoprotein is the protein component of a lipoprotein (lipid + protein) complex. In its non-pathological form, the apolipoprotein gene is involved in the metabolism of fats. Concerning the pathological form of the gene, apparently confirmed data indicate that white persons 60 to 80 years old with two copies of the variant allele are 9 times more likely to get Alzheimer's disease than those who do not carry the variant. But almost everything else about the gene is in controversy. beta-amyloid proteins: Post-mortem tissue analysis of Alzheimer's disease patients and Down syndrome patients reveals anomalous protein deposits (beta-amyloid protein) in brain nerve cells. Many researchers believe these deposits are in some way related to the etiology of both of these disease entities. allele: An allele is one of two or more forms of a given gene that control a particular characteristic, with the alternative forms occupying corresponding loci on homologous chromosomes.
From checker at panix.com Wed May 25 18:44:38 2005 From: checker at panix.com (Premise Checker) Date: Wed, 25 May 2005 14:44:38 -0400 (EDT) Subject: [Paleopsych] SW: On Bipolar Disorder Message-ID: Medical Biology: On Bipolar Disorder http://scienceweek.com/2004/sb041029-5.htm The following points are made by R.H. Belmaker (New Engl. J. Med. 2004 351:476): 1) Bipolar disorder is one of the most distinct syndromes in psychiatry and has been described in numerous cultures over the course of history.(1) The unique hallmark of the illness is mania. Mania is, in many ways, the opposite of depression. It is characterized by elevated mood or euphoria, overactivity with a lack of need for sleep, and an increased optimism that usually becomes so extreme that the patient's judgment is impaired. For example, a person with mania may decide to purchase 500 television sets if he or she believes that their price will go up. Drives such as sexual desire are also enhanced; manic patients are disinhibited in their speech about sexual matters, joking or talking about subjects not normally allowed in their culture. Manic patients are sometimes disinhibited in their sexual actions as well, and they may endanger their marriage or relationship as a result. 2) A key point is that manic behavior is distinct from a patient's usual personality, but its onset may be gradual with weeks or months passing before the syndrome becomes full-blown. In the absence of effective treatment, a manic episode, although ultimately self-limited, could last months or years.(2) Before effective treatment was available, even after a long manic episode, patients were known to recover to a state closely approximating, if not identical with, their personality before the illness developed.(3) 3) The depression that alternates with manic episodes (bipolar depression) is characterized by more familiar symptoms. 
A single manic episode is sufficient for the diagnosis of bipolar illness, as long as the manic symptoms are not due to a substance or a general medical condition such as amphetamine abuse or pheochromocytoma.(4) Some patients may have one manic episode at a young age and frequent depressive episodes thereafter, others may have alternating episodes of mania and depression on a yearly basis, and still others may have a manic episode every five years but never have a depressive episode. 4) Approximately 50 percent of patients with bipolar illness have a family history of the disorder, and in some families, known as multiplex families, there are many members with the disease across several generations. Studies of twins suggest that the concordance for bipolar illness is between 40 percent and 80 percent in monozygotic twins and is lower (10 to 20 percent) in dizygotic twins, a difference that suggests a genetic component to the disorder. There is no mendelian pattern, however, and statistical analysis suggests polygenic inheritance. 5) The advent of molecular genetics opened a new era in genetic studies of bipolar disorder. DNA markers have been sought throughout the genome in large pedigrees in which many family members have the illness and, with the use of the transmission disequilibrium test, in patients with bipolar disorder and their parents. Linkage studies have identified markers, which have been replicated in more than one study, particularly on chromosomes 18 and 22. However, no single locus has been consistently replicated, and the contribution of any identified locus appears small. Progress in genomic medicine offers the hope that specific genes that confer an elevated risk of bipolar illness will be found. References (abridged): 1. Clinical description. In: Goodwin FK, Jamison KR. Manic-depressive illness. New York: Oxford University Press, 1990:15-55 2. Beers C. A mind that found itself. Garden City, N.Y.: Doubleday, 1953 3. Kraepelin E.
Manic-depressive insanity and paranoia. Chicago: University of Chicago Press, 2002 4. Diagnostic and statistical manual of mental disorders, 4th ed.: DSM-IV. Washington, D.C.: American Psychiatric Association, 1994 5. Baldessarini RJ. A plea for integrity of the bipolar disorder concept. Bipolar Disord 2000;2:3-7 New Engl. J. Med. http://www.nejm.org -------------------------------- Related Material: ON ADOLESCENT DEPRESSION The following points are made by D.A. Brent and B. Birmaher (New Engl. J. Med. 2002 347:667): 1) In children and adolescents, depression is not always characterized by sadness, but instead by irritability, boredom, or an inability to experience pleasure. Depression is a chronic, recurrent, and often familial illness that frequently first occurs in childhood or adolescence. Any child can be sad, but depression is characterized by a persistent irritable, sad, or bored mood and difficulty with familial relationships, school, and work(1). In the absence of treatment, a major depressive episode lasts an average of eight months. The risk of recurrence is approximately 40 percent at two years and 72 percent at five years.(2) Longer depressive episodes occur in patients who have a dysthymic disorder (a milder, but chronic and insidious form of depression) that gradually evolves into major depression. More prolonged episodes are also associated with coexisting psychiatric conditions, parental depression, and parent-child discord.(2) 2) At least 20 percent of those with early-onset depressive disorders (those beginning in childhood or adolescence) are at risk for bipolar disorder, particularly if they have a family history of bipolar disorder, psychotic symptoms, or a manic response to antidepressant treatment.(2,3) Bipolar disorder is characterized by depressive episodes that alternate with periods of mania, defined by a decreased need for sleep, increased energy, grandiosity, euphoria, and an increased propensity for risk-taking behavior. 
Often in children and adolescents, mania and depression occur as "mixed states", in which the lability of mania is combined with depression, or there is rapid cycling between depression and mania over a period of days or even hours.(4) 3) Suicidal behavior is closely associated with depression. Risk factors for suicide during a depressive episode include chronic depression, coexisting substance abuse, impulsivity and aggression, a history of physical or sexual abuse, same-sex attraction and sexual activity, a personal or family history of a suicide attempt, and access to an effective means of suicide, such as a gun.(5) Girls are more likely to attempt suicide, and boys to complete suicide. Among adolescents, the annual rate of suicide attempts requiring medical attention is 2.6 percent. Completed suicide is much rarer: among 15-to-19-year-olds, the rates in 1998 were 14.6 per 100,000 in boys and 2.9 per 100,000 in girls. 4) Depression is present in about 1 percent of children and 5 percent of adolescents at any given time. Before puberty, boys and girls are at equal risk for depression, whereas after the onset of puberty, the rate of depression is about twice as high in girls. Having a parent with a history of depression increases a child's risk of a depressive episode by a factor of 2 to 4.(7) Anxiety, particularly social phobia, may be a precursor of depression. References (abridged): 1. Diagnostic and statistical manual of mental disorders, 4th ed.: DSM-IV. Washington, D.C.: American Psychiatric Association, 1994. 2. Birmaher B, Ryan ND, Williamson DE, et al. Child and adolescent depression: a review of the past 10 years. J Am Acad Child Adolesc Psychiatry 1996;35:1427-1439. 3. Geller B, Zimerman B, Williams M, Bolhofner K, Craney JL. Bipolar disorder at prospective follow-up of adults who had prepubertal major depressive disorder. Am J Psychiatry 2001;158:125-127. 4. Geller B, Zimerman B, Williams M, et al.
Diagnostic characteristics of 93 cases of a prepubertal and early adolescent bipolar disorder phenotype by gender, puberty and comorbid attention deficit hyperactivity disorder. J Child Adolesc Psychopharmacol 2000;10:157-164. 5. Brent DA. Mood disorders and suicide. In: Green M, Haggerty RJ, eds. Ambulatory pediatrics. 5th ed. Philadelphia: W.B. Saunders, 1999:447-54. New Engl. J. Med. http://www.nejm.org -------------------------------- Related Material: MEDICAL BIOLOGY: DEPRESSION IN CHILDREN: CHEMICAL TREATMENT The following points are made by Christopher K. Varley (J. Am. Med. Assoc. 2003 290:1091): 1) An increasing body of knowledge confirms that depression is a common and serious illness in youth, affecting 3% to 8% of children and adolescents. Moreover, rates of depression increase dramatically as children move into adolescence. An estimated 20% of adolescents have had at least 1 episode of major depressive disorder (MDD) by age 18 years, while 65% report transient, less severe depressive symptoms. 2) Depression compromises the developmental process; feelings of worthlessness, low self-esteem, and thoughts of suicide are common, as are difficulties with concentration and motivation. As many as 20% of adolescents each year have suicide ideation and 5% to 8% attempt suicide. While the majority of attempts are not lethal, suicide is a leading cause of death in adolescents and is a major health care concern. One of the major risk factors associated with suicide is depression. 3) Depressive disorders in children and adolescents can be chronic and recurrent. The mean length of a major depressive episode in youth aged 6 to 17 years is 7 to 9 months, with remittance commonly occurring over a 1- to 2-year period. Longitudinal studies suggest a strong potential for recurrence; 48% to 60% of this age group have recurrence of major depression after an initial MDD episode within 5 years. 
4) Although depression in youth is now recognized as a significant health concern, identification of safe and effective treatment has been challenging. The recent study by Wagner et al (2003) is the fourth published double-blind, placebo-controlled study demonstrating efficacy in the treatment of MDD in children and adolescents; all studies included selective serotonin reuptake inhibitors (SSRIs). A number of psychotropic medications established as safe and effective in the treatment of MDD in adults have been investigated in youth but may not be effective, including tricyclic antidepressants, monoamine oxidase inhibitors, and venlafaxine. There are also safety concerns regarding the use of tricyclic antidepressants in children and adolescents, including lethality in overdose and cardiac conduction delays (and possibly increased risk of sudden death) in therapeutic dosages.

J. Am. Med. Assoc. http://www.jama.com

From checker at panix.com Wed May 25 18:44:49 2005
From: checker at panix.com (Premise Checker)
Date: Wed, 25 May 2005 14:44:49 -0400 (EDT)
Subject: [Paleopsych] SW: On Generalized Anxiety Disorder
Message-ID:

Medical Biology: On Generalized Anxiety Disorder
http://scienceweek.com/2004/sa041022-3.htm

The following points are made by Gregory Fricchione (New Engl. J. Med. 2004 351:675):

1) Anxiety disorders are the most prevalent psychiatric conditions in the US aside from disorders involving substance abuse.(1) Generalized anxiety disorder has a lifetime prevalence of 5 percent. The onset is usually before the age of 25 years, and the incidence in men is half that in women.
Untreated, the typical course is chronic, with a low rate of remission and a moderate recurrence rate.(3)

2) Risk factors for generalized anxiety disorder include a family history of the condition, an increase in stress, and a history of physical or emotional trauma.(4,5) An association has also been reported between smoking and anxiety, and the risk of generalized anxiety disorder among adolescents who smoke heavily is five to six times the risk among nonsmokers. Traits such as nervousness and social discomfort may predispose people to both nicotine dependence and anxiety. Medical illnesses are often associated with anxiety. For example, generalized anxiety disorder occurs in 14 percent of patients with diabetes.

3) Major depression is the most common coexisting psychiatric illness in patients with generalized anxiety disorder, occurring in almost two thirds of such patients. Panic disorder occurs in a quarter of patients with generalized anxiety disorder, and alcohol abuse in more than a third.(1) Studies of twins suggest a shared genetic propensity to both generalized anxiety disorder and major depression, and a recent report suggests that a genetic variant of the serotonin-transporter gene may predispose people to both conditions. In prospective studies, anxiety almost always appears to be the primary disorder and to increase the risk of depression. Patients who have generalized anxiety disorder along with coexisting psychiatric illnesses have more impairment, seek more medical attention, and have a poorer response to treatment than those without coexisting illnesses.

4) Patients with generalized anxiety disorder often have physical symptoms, and it may be difficult to distinguish the symptoms from those of medical illnesses that are associated with anxiety.
Factors suggesting that anxiety is a symptom of a medical disorder include an onset of the symptoms after the age of 35 years, no personal or family history of anxiety, no increase in stress, little or no avoidance of anxiety-provoking situations, and a poor response to antianxiety medication. A physical cause should be suspected when anxiety follows recent changes in medication or accompanies signs and symptoms of a new disease.(2)

References (abridged):

1. Kessler RC, McGonagle KA, Zhao S, et al. Lifetime and 12-month prevalence of DSM-III-R psychiatric disorders in the United States: results from the National Comorbidity Survey. Arch Gen Psychiatry 1994;51:8-19.
2. Diagnostic and statistical manual of mental disorders, 4th ed.: DSM-IV. Washington, D.C.: American Psychiatric Association, 1994:435-6.
3. Wittchen HU, Carter RM, Pfister H, Montgomery SA, Kessler RC. Disabilities and quality of life in pure and comorbid generalized anxiety disorder and major depression in a national survey. Int Clin Psychopharmacol 2000;15:319-328.
4. Brantley PJ, Mehan DJ Jr, Ames SC, Jones GN. Minor stressors and generalized anxiety disorders among low-income patients attending primary care clinics. J Nerv Ment Dis 1999;187:435-440.
5. Brown ES, Fulton MK, Wilkeson A, Petty F. The psychiatric sequelae of civilian trauma. Compr Psychiatry 2000;41:19-23.

New Engl. J. Med. http://www.nejm.org

--------------------------------
Related Material:

ON SEROTONIN AND ANXIETY

The following points are made by Solomon H. Snyder (Nature 2002 416:377):

1) Perhaps one of the best known neurotransmitters, serotonin has a role in many different neurobiological processes. For example, it helps to regulate our moods -- a fact that has been well established since the 1950s, with the discovery that drugs that deplete serotonin precipitate depression whereas increasing serotonin levels has antidepressant effects.
The idea that serotonin might also affect anxiety was first suspected in the 1980s following the serendipitous finding that buspirone, a drug developed to treat psychotic patients, is also useful for treating anxiety disorders, and stimulates a type of serotonin-detecting molecule in the body, the serotonin-1A receptor. Later came the discovery that mice that have been genetically engineered to lack this receptor, and so cannot respond normally to serotonin, show increased "anxiety-like" behavior.(2-4)

2) But the underlying mechanisms have been elusive. For instance, the relevant brain regions have not been delineated. Moreover, the findings in receptor-deficient mice appear to contradict observations that compounds that block serotonin-1A receptors do not cause anxiety in adult mice. Gross et al.(1) have substantially clarified these issues: By using mice in which the serotonin-1A receptor can be knocked out at will, they have shown that the absence of the receptor in newborns does indeed lead to anxiety-like behavior, whereas its knockout during adult life has no effect. Gross et al. also discriminate between the role of the receptors in the hindbrain and in forebrain structures such as the hippocampus and cerebral cortex.

3) Conventional gene-knockout techniques are powerful tools for working out what a protein does. But they have major limitations compared with using drugs (which might, for example, activate or inhibit the protein of interest). Genes tend to be knocked out during embryonic life, generally affecting the whole organism throughout its lifetime. By contrast, a drug can be administered at any time and, in the brain, can be injected into specific areas. The approach adopted by Gross et al.(1) is an ingenious way of addressing the shortcomings of gene knockouts, providing time- and tissue-specific deletion and restoration of serotonin-1A receptors. To achieve time-specific knockouts, Gross et al.
produced mice in which expression of the serotonin-1A-receptor gene was under the control of the antibiotic doxycycline. The gene could be switched off -- with a certain time lag -- simply by feeding mice the antibiotic.

References (abridged):

1. Gross, C. et al. Nature 416, 396-400 (2002)
2. Parks, C. L., Robinson, P. S., Sibille, E., Shenk, T. & Toth, M. Proc. Natl Acad. Sci. USA 95, 10734-10739 (1998)
3. Ramboz, S. et al. Proc. Natl Acad. Sci. USA 95, 14476-14481 (1998)
4. Heisler, L. K. et al. Proc. Natl Acad. Sci. USA 95, 15049-15054 (1998)
5. D'Amato, R. J. et al. Proc. Natl Acad. Sci. USA 84, 4322-4326 (1987)

Nature http://www.nature.com/nature

--------------------------------
Related Material:

EMOTION, COGNITION, AND BEHAVIOR

The following points are made by R. J. Dolan (Science 2002 298:1191):

1) An ability to ascribe value to events in the world, a product of evolutionary selective processes, is evident across phylogeny (1). Value in this sense refers to an organism's facility to sense whether events in its environment are more or less desirable. Within this framework, emotions represent complex psychological and physiological states that, to a greater or lesser degree, index occurrences of value. It follows that the range of emotions to which an organism is susceptible will, to a high degree, reflect on the complexity of its adaptive niche. In higher order primates, in particular humans, this involves adaptive demands of physical, socio-cultural, and interpersonal contexts.

2) The importance of emotion to the variety of human experience is evident in that what we notice and remember is not the mundane but events that evoke feelings of joy, sorrow, pleasure, and pain. Emotion provides the principal currency in human relationships as well as the motivational force for what is best and worst in human behavior. Emotion exerts a powerful influence on reason and, in ways neither understood nor systematically researched, contributes to the fixation of belief.
A lack of emotional equilibrium underpins most human unhappiness and is a common denominator across the entire range of mental disorders from neuroses to psychoses, as seen, for example, in obsessive-compulsive disorder (OCD) and schizophrenia. More than any other species, we are beneficiaries and victims of a wealth of emotional experience.

3) Progress in emotion research mirrors wider advances in cognitive neurosciences where the idea of the brain as an information processing system provides a highly influential metaphor. An observation by the 19th-century psychologist, William James (1842-1910), questions the ultimate utility of a purely mind-based approach to human emotion. James surmised that "if we fancy some strong emotion, and then try to abstract from our consciousness of it all the feelings of its bodily symptoms, we find we have nothing left behind, no mind-stuff out of which the emotion can be constituted, and that a cold and neutral state of intellectual perception is all that remains" (2). This quotation highlights the fact that emotions as psychological experiences have unique qualities, and it is worth considering what these are. First, unlike most psychological states emotions are embodied and manifest in uniquely recognizable, and stereotyped, behavioral patterns of facial expression, comportment, and autonomic arousal. Second, they are less susceptible to our intentions than other psychological states insofar as they are often triggered, in the words of James, "in advance of, and often in direct opposition of our deliberate reason concerning them" (2). Finally, and most importantly, emotions are less encapsulated than other psychological states as evident in their global effects on virtually all aspects of cognition. This is exemplified in the fact that when we are sad the world seems less bright, we struggle to concentrate, and we become selective in what we recall.
These latter aspects of emotion and their influences on other psychological functions are addressed here.

4) In summary: Emotion is central to the quality and range of everyday human experience. The neurobiological substrates of human emotion are now attracting increasing interest within the neurosciences, motivated to a considerable extent by advances in functional neuroimaging techniques. An emerging theme is the question of how emotion interacts with and influences other domains of cognition, in particular attention, memory, and reasoning. The psychological consequences and mechanisms underlying the emotional modulation of cognition provide the focus of much new research.(3-5)

References (abridged):

1. K. J. Friston, et al., Neuroscience 59, 229 (1994)
2. W. James, The Principles of Psychology (Holt, New York, 1890)
3. A. Ohman, et al., J. Exp. Psychol. Gen. 130, 466 (2001)
4. J. L. Armony, et al., Neuropsychologia 40, 817 (2002)
5. K. Mogg, et al., Behav. Res. Ther. 35, 297 (1997)

Science http://www.sciencemag.org

From checker at panix.com Wed May 25 18:44:58 2005
From: checker at panix.com (Premise Checker)
Date: Wed, 25 May 2005 14:44:58 -0400 (EDT)
Subject: [Paleopsych] SW: Class and National Health
Message-ID:

Public Health: Class and National Health
http://scienceweek.com/2004/sb041015-6.htm

The following points are made by S.L. Isaacs and S.A. Schroeder (New Engl. J. Med. 2004 351:1137):

1) The health of the American public has never been better. Infectious diseases that caused terror in families less than 100 years ago are now largely under control. With the important exception of AIDS and occasional outbreaks of new diseases such as the severe acute respiratory syndrome (SARS) or of old ones such as tuberculosis, infectious diseases no longer constitute much of a public health threat.
Mortality rates from heart disease and stroke -- two of the nation's three major killers -- have plummeted.(1)

2) But any celebration of these victories must be tempered by the realization that these gains are not shared fairly by all members of our society. People in upper classes -- those who have a good education, hold high-paying jobs, and live in comfortable neighborhoods -- live longer and healthier lives than do people in lower classes, many of whom are black or members of ethnic minorities. And the gap is widening.

3) A great deal of attention is being given to racial and ethnic disparities in health care.(2-5) At the same time, the wide differences in health between the haves and the have-nots are largely ignored. Race and class are both independently associated with health status, although it is often difficult to disentangle the individual effects of the two factors.

4) The authors contend that increased attention should be given to the reality of class and its effect on the nation's health. Clearly, to bring about a fair and just society, every effort should be made to eliminate prejudice, racism, and discrimination. In terms of health, however, differences in rates of premature death, illness, and disability are closely tied to socioeconomic status. Concentrating mainly on race as a way of eliminating these problems downplays the importance of socioeconomic status on health.

5) The focus on reducing racial inequality is understandable since this disparity, the result of a long history of racism and discrimination, is patently unfair. Because of the nation's history and heritage, Americans are acutely conscious of race. In contrast, class disparities draw little attention, perhaps because they are seen as an inevitable consequence of market forces or the fact that life is unfair. As a nation, we are uncomfortable with the concept of class.
Americans like to believe that they live in a society with such potential for upward mobility that every citizen's socioeconomic status is fluid. The concept of class smacks of Marxism and economic warfare. Moreover, class is difficult to define. There are many ways of measuring it, the most widely accepted being in terms of income, wealth, education, and employment.

6) Although there are far fewer data on class than on race, what data exist show a consistent inverse and stepwise relationship between class and premature death. On the whole, people in lower classes die earlier than do people at higher socioeconomic levels, a pattern that holds true in a progressive fashion from the poorest to the richest. At the extremes, people who were earning $15,000 or less per year from 1972 to 1989 (in 1993 dollars) were three times as likely to die prematurely as were people earning more than $70,000 per year. The same pattern exists whether one looks at education or occupation. With few exceptions, health status is also associated with class.

References (abridged):

1. Institute of Medicine. The future of the public's health in the 21st century. Washington, D.C.: National Academies Press, 2003:20.
2. Smedley BD, Stith AY, Nelson AR, eds. Unequal treatment: confronting racial and ethnic disparities in health care. Washington, D.C.: National Academy Press, 2003.
3. Steinbrook R. Disparities in health care -- from politics to policy. N Engl J Med 2004;350:1486-1488.
4. Burchard EG, Ziv E, Coyle N, et al. The importance of race and ethnic background in biomedical research and clinical practice. N Engl J Med 2003;348:1170-1175.
5. Winslow R. Aetna is collecting racial data to monitor medical disparities. Wall Street Journal. March 5, 2003:A1.

New Engl. J. Med. http://www.nejm.org

--------------------------------
Related Material:

SCIENCE POLICY: ON HEALTH CARE DISPARITIES AND POLITICS

The following points are made by M. Gregg Bloche (New Engl. J. Med.
2004 350:1568):

1) Do members of disadvantaged minority groups receive poorer health care than whites? Overwhelming evidence shows that they do.(1) Among national policymakers, there is bipartisan acknowledgment of this bitter truth. Department of Health and Human Services (DHHS) Secretary Tommy Thompson has said that health disparities are a national priority, and congressional Democrats and Republicans are advocating competing remedies.(2,3)

2) So why did the DHHS issue a report last year, just days before Christmas, dismissing the "implication" that racial differences in care "result in adverse health outcomes" or "imply moral error... in any way"?(4) And why did top officials tell DHHS researchers to drop their conclusion that racial disparities are "pervasive in our health care system" and to remove findings of disparity in care for cancer, cardiac disease, AIDS, asthma, and other illnesses?(5) Secretary Thompson now says it was a "mistake". "Some individuals," Thompson told a congressional hearing in February, "wanted to be more positive."

3) But when word that DHHS officials had ordered a rewrite first surfaced in January, the department credited Thompson for the optimism. "That's just the way Secretary Thompson wants to create change," a spokesman told the Washington Post. "The idea is not to say, `We failed, we failed, we failed,' but to say, `We improved, we improved, we improved.'" According to DHHS sources and internal correspondence, Thompson's office twice refused to approve drafts by department researchers that emphasized detailed findings of racial disparity.(5) In July and September, top officials within the offices of the assistant secretary for health and the assistant secretary for planning and evaluation asked for rewrites, resulting in the more upbeat version released before Christmas.

4) After unhappy DHHS staff members leaked drafts from June and July to congressional Democrats (and to the author), Thompson released the July version.
For all who are concerned about equity in American medicine, issuance of the July draft was an important step forward. The researchers who prepared it showed that disparate treatment is pervasive, created benchmarks for monitoring gaps in care and outcomes, and thereby made it more difficult for those who deny disparities to resist action to remedy the problem. And therein lies the key to how the rewrite came about -- and to why the episode is so troubling.

References (abridged):

1. Smedley BD, Stith AY, Nelson AR, eds. Unequal treatment: confronting racial and ethnic disparities in health care. Washington, D.C.: National Academies Press, 2003.
2. Health Care Equality and Accountability Act, S. 1833, 108th Cong. (2003) (introduced by Sen. Daschle).
3. Closing the Health Care Gap Act of 2004, S. 2091, 108th Cong. (2004) (introduced by Sen. Frist).
4. National health care disparities report. Rockville, Md.: Agency for Healthcare Research and Quality, December 23, 2003.
5. Bloche MG. Erasing racial data erased report's truth. Los Angeles Times. February 15, 2004:M1.

New Engl. J. Med. http://www.nejm.org

--------------------------------
Related Material:

ON THE COSTS OF DENYING HEALTH-CARE SCARCITY

The following points are made by G.C. Alexander et al (Arch Intern Med. 2004;164:593-596):

1) Scarcity is increasingly common in health care, yet many physicians may be reluctant to acknowledge the ways that limited health care resources influence their decisions. Reasons for this denial include that physicians are unaccustomed to thinking in terms of scarcity, uncomfortable with the role that limited resources play in poor outcomes, and hesitant to acknowledge the influence of financial incentives and restrictions on their practice. However, the denial of scarcity serves as a barrier to containing costs, alleviating avoidable scarcity, limiting the financial burden of health care on patients, and developing fair allocation systems.
2) Almost two decades ago, Aaron and Schwartz(1) published The Painful Prescription: Rationing Hospital Care, in which they examined the dramatic differences in health care expenditures between the US and Great Britain. Their examination highlighted the role of rationing within the British system and explored the difficult choices that must be made when trying to weigh the costs and benefits of many health care services. They noted that British physicians appeared to rationalize or redefine health care standards to deal more comfortably with resource limitations over which they had little control.

3) Since that time, physicians in the US have been under increasing pressure to acknowledge and respond to scarcity.(2-4) To begin to learn more about how they respond to these pressures, the authors conducted exploratory interviews with physicians faced with scarcity on a daily basis: transplant cardiologists involved in making decisions about which patients to place on the organ waiting list; pediatricians who frequently prescribe intravenous immunoglobulin (IVIg), a safe and effective medical treatment that has been in short supply(2); and general internists who make cost-quality trade-offs on a daily basis. The interviews were conducted in confidential settings, included open-ended and directed questions, and were recorded and transcribed for subsequent analysis. During these interviews, the authors were struck by the vehemence with which the physicians they interviewed denied scarcity or, more commonly, the constraints that scarcity imposes on their practice. The authors were left with the impression that physicians' awareness of scarcity and its consequences lies under the surface.

4) The authors conclude: Physicians' limited time and energy will never suffice to fulfill the almost limitless needs of their patients.
Similarly, the limited resources available to health care in the US guarantee that difficult choices must and will be made regarding the distribution of health care. Physicians are in a privileged position to help develop policies that promote fair allocation of health care resources. However, to do so, they must examine their own practices and those of the health care systems in which they work. Denial of the impact of scarcity limits physicians' abilities to play an active role in reshaping policies on a local and national level.

References (abridged):

1. Aaron HJ, Schwartz WB. The Painful Prescription: Rationing Hospital Care. Washington, DC: Brookings Institution; 1984.
2. Tarlach GM. Globulin goblins: shortfall in immune globulin supplies looms. Drug Topics. 1998;142:16.
3. Pear R. States ration low supplies of 5 vaccines for children. New York Times. September 17, 2002:A26.
4. Morreim EH. Fiscal scarcity and the inevitability of bedside budget balancing. Arch Intern Med. 1989;149:1012-1015.
5. United Network for Organ Sharing. Data. Available at: http://www.unos.org/data/default.asp?displayType=USData.

Archives of Internal Medicine http://pubs.ama-assn.org

--------------------------------
Related Material:

HEALTH CARE AND RURAL AMERICA

The following points are made by S.J. Blumenthal and J. Kagen (J. Am. Med. Assoc. 2002 287:109):

1) Poverty, a major risk factor for poor health outcomes, is more prevalent in inner-city and rural areas than in suburban areas. In 1999, 14.3 percent of rural Americans lived in poverty compared to 11.2 percent of urban Americans. Irrespective of where they live, persons with lower incomes and less education are more likely to report unmet health needs, less likely to have health insurance coverage, and less likely to receive preventive health care. When combined, these variables raise the risk of death across all demographic populations.
2) Many of the ills associated with poverty, including lower total household income and a higher number of uninsured residents, are magnified in rural areas. In addition, rural communities have fewer hospital beds, physicians, nurses, and specialists per capita than urban areas, as well as greater transportation barriers to accessing health care.

3) The highest death rates for children and young adults are found in the most rural counties, and rural residents see physicians less often and usually later in the course of an illness. People in rural America experience higher rates of chronic disease and the health-damaging behaviors associated with them. They are more likely to smoke, to lose teeth, and to experience limitations from chronic health conditions. While death rates from homicides are greater in urban areas, mortality rates from unintentional injuries and motor vehicle crashes are disproportionately higher in rural America.

J. Am. Med. Assoc. http://www.jama.com

--------------------------------
Related Material:

ON HEALTH OF THE GLOBAL POOR

The following points are made by P. Jha et al (Science 2002 295:2036):

1) Improvements in global health in the 2nd half of the 20th century have been enormous but remain incomplete. Between 1960 and 1995, life-expectancy in low-income countries improved by 22 years as opposed to 9 years in high-income countries. Mortality of children under 5 years of age in low-income countries has been halved since 1960. Even so, 10 million child deaths occur annually, and other enormous health burdens remain.

2) In 1998, almost a third of deaths in low- and middle-income countries were due to communicable diseases, maternal and perinatal conditions, and nutritional deficiencies: a death toll of 16 million, equivalent to the population of Florida. Of those deaths, 1.6 million were from measles, tetanus, and diphtheria, diseases routinely vaccinated against in wealthy countries.
3) Of the half million women who die annually due to pregnancy or childbirth, 99 percent do so in low- and middle-income countries. Approximately 2.4 billion people live at risk of malaria, and at least 1 million died from malaria in 1998. There are 8 million new cases of tuberculosis every year, and 1.5 million deaths from tuberculosis.

4) On the basis of current smoking trends, tobacco-attributable disease will kill approximately 500 million people over the next 5 decades. Over 20 million people have died already of HIV/AIDS, 40 million people are infected currently, and its spread continues unabated in many countries. The burden falls most heavily on poor countries and on the poorest of the people within those countries.

5) Of the 30 million children not receiving basic immunizations, 27 million live in countries with GNP below $1200 per capita. In India, the prevalence of childhood mortality, smoking, and tuberculosis is three times higher among the lowest income or educated groups than among the highest.

Science http://www.sciencemag.org

From checker at panix.com Wed May 25 18:47:31 2005
From: checker at panix.com (Premise Checker)
Date: Wed, 25 May 2005 14:47:31 -0400 (EDT)
Subject: [Paleopsych] SW: On Communication with Extraterrestrials
Message-ID:

Astrobiology: On Communication with Extraterrestrials
http://scienceweek.com/2004/sa041008-5.htm

The following points are made by Woodruff T. Sullivan III (Nature 2004 431:27):

1) Although the Search for Extraterrestrial Intelligence (SETI) has yet to detect a signal, the efforts continue because so little of the possible parameter space has been searched so far. These projects have almost all followed the dominant paradigm -- launched 45 years ago by Cocconi and Morrison(1) -- of using radio telescopes to look for signs of extraterrestrial life.
This focus on electromagnetic waves (primarily at radio wavelengths, but also at optical ones) was based on various arguments for their efficiency as a means of interstellar communication. However, Rose and Wright(2) have made the case that if speedy delivery is not required, long messages are in fact more efficiently sent in the form of material objects -- effectively messages in a bottle. Although the suggestion itself is not new(3,4), it had never before been backed up by quantitative analysis.

2) A fundamental problem in searching for extraterrestrial intelligence is to guess the communications set-up of the extraterrestrials who might be trying to contact us. In which direction should we look for their transmitter? At which frequencies? How might the message be coded? How often is it broadcast? (For this discussion I am assuming that the signals are intentional, setting aside the a priori equally likely possibility that the first signal found could be merely leakage arising from their normal activities.) Conventional wisdom holds that they would set up a beam of electromagnetic waves, just as we could do with, for example, the 305-meter Arecibo radio telescope in Puerto Rico, Earth's most powerful radio transmitter, or a pulsed laser on the 10-meter Keck optical telescope in Hawaii. Rose and Wright(2) conclude, however, that the better choice would be to send packages laced with information.

3) Unless the messages are short or the extraterrestrials are nearby, this "write" strategy requires less energy per bit of transmitted information than the "radiate" strategy does. Cone-shaped beams of radiation necessarily grow in size as they travel outwards, meaning that the great majority of the energy is wasted, even if some of it hits the intended target. A package, on the other hand, is not "diluted" as it travels across space, presuming that it's correctly aimed at its desired destination.
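The beam-dilution argument above can be made concrete with a back-of-envelope sketch. The numbers below are illustrative assumptions, not figures from Rose and Wright's paper: a 21-cm (hydrogen-line) wavelength, Arecibo-sized transmitting and receiving dishes, and a 10,000-light-year range; the beam width uses the standard diffraction estimate (half-angle of roughly lambda/D).

```python
import math

# Illustrative "radiate vs. write" arithmetic (assumed parameters, not
# Rose & Wright's actual model).

# --- Radiate: a diffraction-limited radio beam spreads as a cone, so almost
# all transmitted energy misses even a large receiving dish at the far end.
wavelength = 0.21            # m; 21-cm hydrogen line, a common SETI assumption
dish_diameter = 305.0        # m; Arecibo-sized transmitter and receiver
distance_m = 10_000 * 9.461e15   # 10,000 light years in meters

beam_half_angle = wavelength / dish_diameter       # radians, ~ lambda / D
spot_radius = distance_m * beam_half_angle         # beam radius at the target
spot_area = math.pi * spot_radius**2               # illuminated area there
receiver_area = math.pi * (dish_diameter / 2)**2   # area of the receiving dish
intercepted_fraction = receiver_area / spot_area   # energy fraction captured
print(f"fraction of beam energy caught at 10,000 ly: {intercepted_fraction:.1e}")

# --- Write: a package delivers every bit it carries. At 1 bit per square
# nanometer (the nanometer-square inscription mentioned in the article),
# one exabyte needs only a few square meters of surface.
bits = 8 * 1e18                    # one exabyte (10^18 bytes) in bits
area_m2 = bits * (1e-9)**2         # 1 nm^2 per bit
print(f"surface for one exabyte at 1 bit/nm^2: {area_m2:.0f} m^2")
```

Under these assumed numbers the receiving dish intercepts only a vanishing fraction (of order 10^-30) of the radiated energy, while the inscribed package wastes none of its payload, which is the core of the "write" strategy's per-bit energy advantage for long messages.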
For short messages, however, electromagnetic waves win out because of the overheads involved in launching, shielding and then decelerating a package, no matter how small it is. For a two-way conversation with extraterrestrials, the light-speed of electromagnetic waves is far superior.

4) As an example of a large message, consider all of the written and electronic information now existing on Earth: it's estimated(5) to amount to about one exabyte (10^(18) bytes). Rose and Wright(2) calculate that, using scanning tunnelling microscopy, these bits could be inscribed (in nanometer squares) within one gram of material! But this precious package would still require a cocoon of 10,000 kilograms to accelerate it from our planet to a speed of 0.1% of the speed of light, protect it from radiation damage along a 10,000-light-year route, and then decelerate it upon arrival.

References:

1. Cocconi, G. & Morrison, P. Nature 184, 844-846 (1959)
2. Rose, C. & Wright, G. Nature 431, 47-49 (2004)
3. Bracewell, R. Nature 187, 670-671 (1960)
4. Papagiannis, M. Q. J. R. Astron. Soc. 19, 277-281 (1978)
5. Murphy, C. Atlantic 277, No. 5, 20-22 (1996)

Nature http://www.nature.com/nature

--------------------------------
Related Material:

ASTROBIOLOGY: ON INTELLIGENT LIFE IN THE UNIVERSE

The following points are made by J. Cohen and I. Stewart (Nature 22 Feb 01 409:1119):

1) The authors point out that it is possible to imagine the existence of forms of life very different from those found on Earth, occupying habitats that are unsuitable for our kind of life. Some of those aliens might be technological, because technology is an autocatalytic process, and it follows that some aliens might possess technology well in advance of our own, including interstellar transportation. So much is clear, but this train of logic begs the obvious question of where these intelligent non-humanoid aliens might be.
2) The authors point out that the subject area of this discussion is often called "astrobiology", although in science fiction circles (where the topic has arguably been thought through more carefully than it has been in academic circles) the term "xenobiology" is favored. The authors suggest the difference is significant: Astrobiology is a mixture of astronomy and biology, and the tendency is to assume that the field must be assembled from contemporary astronomy and biology; in contrast, xenobiology is the biology of the strange, and the name inevitably involves the idea of extending contemporary biology into new and alien realms. 3) The authors ask: Upon what science should xenobiology be based? The authors suggest that the history of science indicates that any discussion of alien life will be misleading if it is based on the presumption that contemporary science is the ultimate in human understanding. Consider the position of science a century ago. We believed then that we inhabited a newtonian clockwork Universe with absolute space and absolute time; that time was independent of space; that both were of infinite extent; and that the Universe had always existed, always would exist, and was essentially static. We knew about the biological cell, but we had a strong feeling that life possessed properties that could not be reduced to conventional physics; we had barely begun to appreciate the role of natural selection in evolution; and we had no idea about genetics beyond mendelian numerical patterns. Our technology was equally primitive: cars were inferior to the horse, and there was no radio, television, computers, biotechnology or mobile phones. Space travel was the stuff of fantasy. If the past is any guide, then almost everything we now think we know will be substantially qualified or proven wrong within the next 25 years, let alone another century. Biology, in particular, will not persist in its current primitive form. 
At present, biology is at a stage roughly analogous to physics when Newton (1642-1727) discovered his law of gravity. "There is an awfully long way to go." 4) The authors point out that evolution on Earth has been in progress for at least 3.8 billion years. "This is deep time -- too deep for scenarios expressed in human terms to make much sense. A hundred years is the blink of an eye compared with the time that humans have existed on Earth. The lifespan of the human race is similarly short when compared with the time that life has existed on Earth. It is ridiculous to imagine that somehow, in a single century of human development, we have suddenly worked out the truth about life. After all, we do not really understand how a light switch works at a fundamental level, let alone a mitochondrion." Nature http://www.nature.com/nature -------------------------------- Related Material: PROSPECTS FOR THE SEARCH FOR EXTRATERRESTRIAL INTELLIGENCE Notes by ScienceWeek: The conjured image is poignant: intelligent life sprinkled throughout our Galaxy, each sprinkle separated from the others by 1000 light years, each sprinkle searching for the others with radio transmitters and receivers, small robotic spacecraft sent beeping into empty space between the stars, the beeping like a faint bleating in the dark as the sprinkles search for each other. Of course, the conjured image may be wrong: there may be intelligent life dense in the Galaxy; or we may be alone. It does not matter. For the human species on this planet Earth, the quest is part of our destiny, part of what we do as a species, and it will go on as long as we remain civilized. J.C. Tarter and C.F. 
Chyba (SETI Institute, US) present a review of current and future efforts in the search for extraterrestrial intelligence, the authors making the following points: 1) During the past 40 years, researchers have conducted searches for radio signals from an extraterrestrial technology, sent spacecraft to all but one of the planets in our Solar System, and expanded our knowledge of the conditions in which living systems can survive. The public perception is that we have looked extensively for signs of life elsewhere. But in reality, we have hardly begun to search. Assuming our current, comparatively robust space program continues, by 2050 we may finally know whether there is, or ever was, life elsewhere in our Solar System. At a minimum, we will have thoroughly explored the most likely candidates, a task not yet accomplished. We will have discovered whether life exists on Jupiter's moon Europa, or on Mars. And we will have undertaken the systematic exobiological exploration of planetary systems around other stars, seeking traces of life in the spectra of planetary atmospheres. These surveys will be complemented by expanded searches for intelligent signals. 2) The authors point out that although the current language is that of a "search for extraterrestrial intelligence" (SETI), what is being sought is evidence of extraterrestrial technologies. Until now, researchers have concentrated on only one specific technology -- radio transmissions at wavelengths with weak natural backgrounds and little absorption. No verified evidence of a distant technology has been found, but the null result may have more to do with limitations in range and sensitivity than with actual lack of civilization. The most distant star probed directly is still less than 1 percent of the distance across our Galaxy. 
3) The authors conclude: "If by 2050 we have found no evidence of an extraterrestrial technology, it may be because technical intelligence almost never evolves, or because technical civilizations rapidly bring about their own destruction, or because we have not yet conducted an adequate search using the right strategy. If humankind is still here in 2050 and still capable of doing SETI searches, it will mean that our technology has not yet been our own undoing -- a hopeful sign for life generally. By then we may begin considering the active transmission of a signal for someone else to find, at which point we will have to tackle the difficult questions of who will speak for Earth and what they will say." Scientific American 1999 December From checker at panix.com Wed May 25 18:47:43 2005 From: checker at panix.com (Premise Checker) Date: Wed, 25 May 2005 14:47:43 -0400 (EDT) Subject: [Paleopsych] NYT: China, New Land of Shoppers, Builds Malls on Gigantic Scale Message-ID: China, New Land of Shoppers, Builds Malls on Gigantic Scale http://www.nytimes.com/2005/05/25/business/worldbusiness/25mall.html By DAVID BARBOZA DONGGUAN, China - After construction workers finish plastering a replica of the Arc de Triomphe and buffing the imitation streets of Hollywood, Paris and Amsterdam, a giant new shopping theme park here will proclaim itself the world's largest shopping mall. The South China Mall - a jumble of Disneyland and Las Vegas, a shoppers' version of paradise and hell all wrapped in one - will be nearly three times the size of the massive Mall of America in Minnesota. It is part of yet another astonishing new consequence of the quarter-century economic boom here: the great malls of China. Not long ago, shopping in China consisted mostly of lining up to entreat surly clerks to accept cash in exchange for ugly merchandise that did not fit. 
But now, Chinese have started to embrace America's modern "shop till you drop" ethos and are in the midst of a buy-at-the-mall frenzy. Already, four shopping malls in China are larger than the Mall of America. Two, including the South China Mall, are bigger than the West Edmonton Mall in Alberta, which just surrendered its status as the world's largest to an enormous retail center in Beijing. And by 2010, China is expected to be home to at least 7 of the world's 10 largest malls. Chinese are swarming into malls, which usually have many levels that rise up rather than out in the sprawling two-level style typical in much of the United States. Chinese consumers arrive by bus and train, and growing numbers are driving there. On busy days, one mall in the southern city of Guangzhou attracts about 600,000 shoppers. For years, the Chinese missed out on the fruits of their labor, stitching shoes, purses or dresses that were exported around the world. Now, China's growing consumerism means that its people may be a step or two closer to buying the billion Cokes, Revlon lipsticks, Kodak cameras and the like that foreign companies have long dreamed they could sell. "Forget the idea that consumers in China don't have enough money to spend," said David Hand, a real estate and retailing expert at Jones Lang LaSalle in Beijing. "There are people with a lot of money here. And that's driving the development of these shopping malls." For sale are a wide range of consumer favorites - cellphones, DVD players, jeans, sofas and closets to assemble yourself. There is food from many regions of China and franchises with familiar names - KFC, McDonald's and IMAX theaters. Stores without Western pedigree sell Gucci and Louis Vuitton goods. While peasants and poor workers may only window-shop, they have joined a regular pilgrimage to the mall that has set builders and developers afire. 
The developers are spending billions of dollars to create these supersize shopping centers in the country's fastest-growing cities - betting that a nation of savers is on the verge of also becoming a nation of tireless shoppers. For the moment, the world's biggest mall is the six-million-square-foot Golden Resources Mall, which opened last October in northwestern Beijing. It has already sparked envy and competitive ambition among the world's big mall builders, who outwardly scoff at the Chinese ascent to mall-dom, even as they plot their own path to build on such scale in China. How big is six million square feet? That mall, which is expected to cost $1.3 billion when completed, spans the length of six football fields and easily exceeds the floor space of the Pentagon, which at 3.7 million square feet is the world's largest office building. It is a single, colossal five-story building - with rows and rows of shops stacked on top of more rows and rows of shops - so large that it is hard to navigate among the 1,000 stores and the thousands of shoppers. The shopping-mall building spree, like much economic activity in China these days, is so aggressive that some economists and officials have started to worry that it may be another sign of an overheated economy, and that the country's building frenzy may be lurching toward a fall. So far, though, there is no end in sight - and no evidence that China's long boom is likely to suffer anything more than a modest slowdown. "These shopping centers are just huge," said Radha Chadha, who runs Chadha Strategy Consulting in Hong Kong, which tracks shopping malls and the sales of luxury goods in Asia. "China likes to do things big. They like to make an impact." Retail sales in China have jumped nearly 50 percent in the last four years, as measured by the nation's biggest retailers, government data says. And with rising incomes, Chinese are spending their money on shoes, bags, clothing and even theme-park-style rides. 
"We like this place a lot," said Ruth Tong, 27, an early visitor to the South China Mall here in Dongguan with her husband and 5-year-old son. "They have a lot of fun things to do. They have shopping and even rides. So we like it and yes, we'll come back again." The central government recently ordered state-controlled banks to tighten lending to huge shopping mall projects. But that has not yet tempered the plans of aggressive developers and local government officials for transforming vast tracts of land into huge shopping centers. After all, the demand is certainly growing. Income per person in China has reached the equivalent of about $1,100 a year, up 50 percent since 2000. China is still a land of disparity, though it has a growing middle class that has swelled to as many as 70 million. And as the country rapidly urbanizes and modernizes, open-air food markets and old department stores are being replaced by giant supermarkets and big-box retailers. Ikea and Carrefour, the French supermarket chain, are mobbed with customers. And China's increasingly affluent young people are adopting the American teenager's habit of hanging out at the mall. Big enclosed shopping malls, which came of age in America in the late 1970's and Europe in the late 80's, are sprouting up all over China. According to retail analysts, more than 400 large malls have been built in China in the last six years. And at a time when the biggest malls under construction in the United States measure about a million square feet, developers here are creating malls that are six, seven and eight million square feet. The current titleholder, the Golden Resources Mall, where 20,000 employees work, is the creation of Huang Rulun, an entrepreneur who made a fortune selling real estate in coastal Fujian Province. Six years ago, Mr. 
Huang acquired a 440-acre tract of land outside Beijing to create a virtual satellite city, which will soon have 110 new apartment buildings, along with schools and offices planted like potted trees around his neon-lighted mall. Perhaps the most aggressive mall building is taking place in Guangdong Province in the south, the seat of China's flourishing Pearl River Delta region. In January, more than 400,000 people showed up in the principal city, Guangzhou, for the opening of the Grandview Mall, which also calls itself the world's largest mall, with three million square feet. It even says it has the tallest indoor fountain. Exactly who has the world's largest shopping mall appears to be in dispute. Some Chinese malls claim the largest floor size; others count leased space. Still others say that what counts is that there is only one roof. Indeed, the Triple Five Group, which owns the Mall of America (2.5 million square feet of leased shopping space) and the West Edmonton Mall in Canada (3.2 million square feet), has not conceded defeat. "They are just shops, like a bazaar in the Middle East," Nader Ghermezian, one of the company's principals, said dismissively - and mistakenly - about the Golden Resources Mall, which is under one roof. "They shouldn't be considered. We are still the largest in the world." But that raises another question: Are the malls in this country too big? "It's not so easy to shop at these locations," Mr. Hand of Jones Lang LaSalle said. "Most shopping centers survive on repeat customers. To go to a shopping mall so big and so congested, it may be difficult to have repeat customers." The developers beg to differ. "Shopping malls are a new concept in China, and we are trying to find our own way to do it," said Cai Xunshan, vice president of the Golden Resources Mall. "We don't think we can just copy the format from the U.S." 
In Dongguan, the developers of the South China Mall say they traveled around the world for two years in search of the right model. The result is a $400 million fantasy land: 150 acres of palm-tree-lined shopping plazas, theme parks, hotels, water fountains, pyramids, bridges and giant windmills. Trying to exceed even some of the over-the-top casino extravaganzas in Las Vegas, it has a 1.3-mile artificial river circling the complex, which includes districts modeled on the world's seven "famous water cities," and an 85-foot replica of the Arc de Triomphe. "We have outstanding architecture from around the world," Tong Rui, vice chief executive at Sanyuan Yinhui Investment and Development, the mall's developer, said as he toured a section modeled on Paris. "You can't see this architecture anywhere else in shopping malls." Hu Guirong, the man behind the development, made his fortune selling noodles and biscuits in China. His aides say he built his mall in Dongguan, a fast-growing city whose population is estimated as high as eight million, with one of the highest car-to-household ratios in the country, because it is situated at a crossroads of two bustling South China metropolises, Shenzhen and Guangzhou. "We wanted to do something groundbreaking," Mr. Tong said, referring to his boss. "We wanted to leave our mark on history." But just to keep a seven-million-square-foot shopping center from looking deserted, some retailing specialists say, requires 50,000 to 70,000 visitors a day. Officials of the South China Mall say they will easily surpass those figures. But before the mall is fully open, the Triple Five Group is working to reclaim the world title, with three megamalls in the planning stages that will expand its operations from its base in North America into China. Two of them, the Mall of China and the Triple Five Wenzhou Mall, are each projected to be 10 million square feet. "You'll see," Mr. Ghermezian of Triple Five said. 
"We are also expanding the Mall of America. There's going to be a Phase 2." From checker at panix.com Wed May 25 18:47:54 2005 From: checker at panix.com (Premise Checker) Date: Wed, 25 May 2005 14:47:54 -0400 (EDT) Subject: [Paleopsych] Internet News: Earth-to-Virtual Earth Message-ID: Earth-to-Virtual Earth http://www.internetnews.com/xSP/print.php/3507236 By [5]Susan Kuchinskas May 24, 2005 Microsoft ([6]Quote, [7]Chart) gave a sneak preview of a future MSN service called Virtual Earth that's designed to be a deeply immersive local search experience. Tom Bailey, director of sales and marketing for Microsoft's MapPoint division, said it's an addition to traditional local search where you type a query and get a response. "Somebody can orient around a location and dive into that location -- discover, explore and plan activities related to that location." The new service, which will be offered free as part of MSN beginning later this summer, combines features of search and MapPoint. Users will be able to map a particular location and then search local listings for businesses nearby. Eventually, according to the demonstration, they'll be able to click on a listing and get more information about the business. Search results appear in a box on the left of the map; they contain the top 10 results based on proximity to the location. If a searcher scrolls along the map, the results change dynamically to match the new location. Users can add multiple searches to the map; for example, after they've found the nearest restaurant, they can search for an ATM to get the cash to pay for the meal. "You may know where your favorite restaurant is, but you may not know what's around there to allow you to do other activities while you're in the area," Bailey explained. Eventually, Bailey said, the service will be supported by sponsored listings provided by Overture Services, Yahoo's ([8]Quote, [9]Chart) pay-per-click advertising service. 
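The "top 10 results based on proximity" behavior described above amounts to re-ranking listings by distance to the current map center each time the viewport moves. The sketch below is a hypothetical illustration of that idea, not Microsoft's implementation; the listing names and coordinates are invented for the example:

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two (lat, lon) points, in kilometers."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = (sin((lat2 - lat1) / 2) ** 2
         + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371.0 * asin(sqrt(a))  # mean Earth radius ~6371 km

def top_results(listings, center, n=10):
    """Return the n listings nearest the map center, nearest first.

    Re-invoking this whenever the user scrolls the map reproduces the
    "results change dynamically" behavior described in the article.
    listings: iterable of dicts with 'name', 'lat', 'lon' keys.
    center:   (lat, lon) of the current map viewport.
    """
    clat, clon = center
    return sorted(
        listings,
        key=lambda p: haversine_km(p["lat"], p["lon"], clat, clon),
    )[:n]

# Hypothetical listings around San Francisco:
listings = [
    {"name": "Cafe A", "lat": 37.7749, "lon": -122.4194},
    {"name": "ATM B",  "lat": 37.7800, "lon": -122.4100},
    {"name": "Cafe C", "lat": 37.8044, "lon": -122.2712},  # across the bay
]
nearest = top_results(listings, center=(37.7750, -122.4195))
print([p["name"] for p in nearest])  # nearest listing first
```

The same re-ranking call also covers the "multiple searches" case in the article: running it once over restaurant listings and again over ATM listings layers both result sets on the same viewport.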
Listings come from MSN search, and over time, Bailey said, the company will incorporate its MapPoint service that powers the "find the nearest location" services of corporations including Starbucks and Marriott. The MSN Virtual Earth announcement followed a preview of Google Earth last Friday. During a press event at Google headquarters, Keyhole General Manager John Hanke demonstrated Google Earth, the next iteration of the Keyhole technology, slated to launch in a few weeks. Google Earth will add a new global database and new data sources, such as NASA terrain maps and integrate with Google Local and Google Maps. The demo was very similar to the MSN Virtual Earth demonstration. Four things differentiate MSN Virtual Earth from Google Earth, according to Bailey. First, MSN will roll out the features free to all users. Google Earth will be made available in beta only to paying Keyhole subscribers, according to Google spokeswoman Eileen Rodriguez, although they won't be charged extra for the new functionality. Second, while both offer the ability to print maps and driving directions, MSN Virtual Earth also lets users save listings and details to a scratch pad, which can be e-mailed, blogged, saved or used to create driving itineraries between different locations and businesses. Third, while Google Maps employs Asynchronous JavaScript + XML, or AJAX, to create a rich HTML application without any downloads, Google Earth would require users to download client software. MSN's Bailey noted that the AJAX functionality was created by Microsoft and implemented in the Internet Explorer browser. "We think requiring a software download is one thing that's kept Keyhole usage fairly modest," Bailey said, adding that Virtual Earth also will take advantage of AJAX. Fourth, while Google Earth users can toggle between map view and a zoomable streaming aerial view, MSN adds a third way of seeing the terrain, called "eagle eye view," thanks to a partnership with Pictometry. 
With Pictometry's patented method, planes fly over locations at 5000 feet and 2500 feet, photographing the landscape from four directions. The result, according to Chief Marketing Officer Dante Pennachia, is "twelve different views of every square foot of everything we fly." These include geo-referenced oblique angle shots, MSN's so-called eagle view. "Every image is correctly referenced with longitude and latitude, which satellite images also provide," Pennachia said. "But because of the 45-degree angle, we have the Z coordinate, the height of anything, as well." The oblique images show buildings and other landscape elements at about a 45 degree angle, rather than from directly overhead, as satellite images do, making visible building and land attributes such as doors, windows, the number of floors, building composition, roads and trees. The company says the oblique view is easier for most people to understand than the aerial view. Many of Pictometry's customers are county or municipal governments, public safety and law enforcement organizations. For example, a fire chief might use its images to measure the height of an elevator shaft for placement of ladders and hoses. Pictometry images don't include potentially invasive details, the company is careful to point out. Users can zoom, but the resolution deteriorates before such things as auto license plate numbers, building addresses or people's faces can be recognized. Pictometry only photographs regions for which there's a paying customer; it charges municipalities based on the square mileage to be covered. To date, the company has imagery for 132 counties in the United States, including the entire State of Massachusetts; it has a contract to document the State of Rhode Island as well. The five-year contract with MSN, for an unspecified amount, licenses Pictometry's existing images for non-commercial use only. "We expect Microsoft will want some additional imagery," Pennachia said. 
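Pictometry's actual method is patented and more involved, but the basic trigonometry behind the quoted claim -- that the 45-degree angle yields a Z coordinate -- can be sketched as follows. In an oblique photo taken at a known depression angle, a roof point of height h lands on the ground plane displaced from the building footprint by h divided by the tangent of that angle, so the height is the measured ground offset times the tangent; at 45 degrees the tangent is 1 and height equals offset. The function name and numbers here are illustrative only:

```python
from math import tan, radians

def building_height_m(ground_offset_m, depression_deg=45.0):
    """Estimate a vertical feature's height from a geo-referenced oblique image.

    Simplified trigonometry only -- Pictometry's patented pipeline is more
    involved. In an oblique shot taken `depression_deg` below the horizontal,
    a roof point of height h projects onto the ground plane displaced by
    h / tan(depression) from the building footprint, so:

        h = ground_offset * tan(depression_deg)

    At the 45-degree angle mentioned in the article, tan(45) = 1 and the
    height simply equals the measured ground offset.
    """
    return ground_offset_m * tan(radians(depression_deg))

# A roof point displaced 12 m from its footprint in a 45-degree oblique shot:
print(round(building_height_m(12.0), 6))  # -> 12.0
```

This is the kind of measurement the fire-chief example relies on: with longitude, latitude and the recoverable Z coordinate, an elevator shaft's height can be read off the image rather than surveyed on site.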
"We think it will be much more useful, giving people a better visual reference as to where something is in a particular building," MSN's Bailey said. Bailey said the timing of MSN's announcement was not influenced by Google's news. Microsoft has [10]previewed the underlying technology before. Virtual Earth is based on Terra Server, a Microsoft Research project led by researchers Jim Gray and Tom Barclay and is designed to offer public access to the massive amounts of online data generated by astronomers and the U.S. Geological Survey. The project showcases the scalability of SQL Server 2000 and Windows 2000 Datacenter Server. Amazon.com ([11]Quote, [12]Chart) is building its own database of street-level photos of businesses as an enhancement to local search in its A9 search service. When it [13]launched in January 2005, A9 had around 20 million photos of businesses in 10 major United States cities.
References
4. http://www.internetnews.com/xSP/article.php/3507236
5. http://www.internetnews.com/feedback.php/http://www.internetnews.com/xSP/article.php/3507236
6. http://www.internetnews.com/stocks/quotes/quote.php/MSFT
7. http://www.internetnews.com/stocks/quotes/chart.php/MSFT/chart
8. http://www.internetnews.com/stocks/quotes/quote.php/YHOO
9. http://www.internetnews.com/stocks/quotes/chart.php/YHOO/chart
10. http://www.internetnews.com/dev-news/article.php/3366551
11. http://www.internetnews.com/stocks/quotes/quote.php/AMZN
12. http://www.internetnews.com/stocks/quotes/chart.php/AMZN/chart
13. 
http://www.internetnews.com/xSP/article.php/3465211 From waluk at earthlink.net Thu May 26 02:43:52 2005 From: waluk at earthlink.net (G. Reinhart-Waller) Date: Wed, 25 May 2005 19:43:52 -0700 Subject: [Paleopsych] Adaptiveness of depression In-Reply-To: <42949E5A.7040402@solution-consulting.com> References: <01C560FB.D72C7540.shovland@mindspring.com> <42949E5A.7040402@solution-consulting.com> Message-ID: <429537E8.4060708@earthlink.net> Yes, the effects of diet have a profound influence on what caused Homo erectus to evolve into Homo sapiens. Several anthropologists (including Wrangham) claim that cooked food is what made Homo human. Prozac can alter our psychology in a way similar to "cooked meat" resulting in another sub species of Homo. Prozac presently is only a finger in the dike yet the results should be measurable. Could be that a society on Prozac is without violent crime, murder, etc. That could cause a violent Homo sapiens to evolve into something less hostile and more agreeable to diplomatic resolution. Regards, Gerry Reinhart-Waller Lynn D. Johnson, Ph.D. wrote: > I agree with Steve here; the issue of dietary change is ignored. He > also downplays the social factors some, continuing to emphasize the > medical approach to treatment. If diet and/or social change are > implicated, then more Prozac is merely finger-in-the-dike. > Lynn > > Steve Hovland wrote: > >> Since this guy is an MD, one can assign a high >> probability to the possibility that he knows almost >> nothing about nutrition, including the importance >> of healthy fats in the diet. >> >> He mentions hormones without any consideration >> of what the body uses to build hormones. >> >> Steve Hovland >> www.stevehovland.net >> >> >> -----Original Message----- >> From: Lynn D. 
Johnson, Ph.D. [SMTP:ljohnson at solution-consulting.com] >> Sent: Tuesday, May 24, 2005 9:27 PM >> To: The new improved paleopsych list >> Subject: [Paleopsych] Adaptiveness of depression >> >> Apropos of our recent discussion on the survival value of PTSD, here >> is an interesting expert interview from medscape psychiatry on >> depression. FYI, the 1925 birth cohort had a lifetime prevalence of >> 4% for depression; today it appears to be 17%; these guys say 25% but >> I think that is high. In any case, it is an epidemic. >> LJ >> >> http://www.medscape.com/viewarticle/503013_print >> (registration required) >> >> Expert Interview >> >> >> Mood Disorders at the Turn of the Century: An Expert Interview With >> Peter C. Whybrow, MD >> >> Medscape Psychiatry & Mental Health. 2005; 10 (1): ©2005 Medscape >> >> Editor's Note: >> On behalf of Medscape, Randall F. White, MD, interviewed Peter C. >> Whybrow, MD, Director of the Semel Institute for Neuroscience & Human >> Behavior and Judson Braun Distinguished Professor and Executive >> Chair, Department of Psychiatry and Biobehavioral Sciences, David >> Geffen School of Medicine, University of California, Los Angeles. >> >> Medscape: The prevalence of mood disorders has risen in every >> generation since the early 20th century. In your opinion, what is >> behind this? >> >> Peter C. Whybrow, MD: I think that's a very interesting statistic. My >> own sense is that, especially in recent years, it can be explained by >> changes in the environment. The demand-driven way in which we live >> these days is tied to the increasing levels of anxiety and >> depression. You see that in the latest cohort, the one that was >> studied with the birth date of 1966, depression has grown quite >> dramatically compared with those who were born in cohorts before >> then. So anxiety now starts somewhere in the 20s or 30s, and >> depression is also rising, so the prevalence now for most people in >> America is somewhere around 25%. 
_______________________________________________
paleopsych mailing list
paleopsych at paleopsych.org
http://lists.paleopsych.org/mailman/listinfo/paleopsych

From shovland at mindspring.com Thu May 26 04:03:19 2005
From: shovland at mindspring.com (Steve Hovland)
Date: Wed, 25 May 2005 21:03:19 -0700
Subject: [Paleopsych] Adaptiveness of depression
Message-ID: <01C5616D.2E9A2270.shovland@mindspring.com>

Recent research indicates that SSRIs are not much more effective than sugar pills. The April 10, 2002 issue of JAMA, Vol. 287 No. 14, reported on a study conducted to determine the efficacy of St John's Wort (Hypericum perforatum) in major depressive disorder. The study was a double-blind, randomized, placebo-controlled trial conducted in 12 academic and community psychiatric research clinics in the United States. Patients were randomly assigned to receive H perforatum, placebo, or sertraline (Zoloft - as an active comparator) for 8 weeks. Based on clinical response, the daily dose of H perforatum could range from 900 to 1500 mg and that of sertraline from 50 to 100 mg. Responders at week 8 could continue blinded treatment for another 18 weeks. The findings of the study reported that "Full response occurred in 31.9% of the placebo-treated patients vs 23.9% of the (St.
John's Wort) H perforatum-treated patients and 24.8% of (Zoloft) sertraline-treated patients." http://www.alternativementalhealth.com/ezine/Ezine23.htm Steve Hovland www.stevehovland.net -----Original Message----- From: G. Reinhart-Waller [SMTP:waluk at earthlink.net] Sent: Wednesday, May 25, 2005 7:44 PM To: The new improved paleopsych list Subject: Re: [Paleopsych] Adaptiveness of depression Yes, diet has had a profound influence on the evolution of Homo erectus into Homo sapiens. Several anthropologists (including Wrangham) claim that cooked food is what made Homo human. Prozac can alter our psychology in a way similar to "cooked meat," resulting in another subspecies of Homo. Prozac presently is only a finger in the dike, yet the results should be measurable. Could be that a society on Prozac is without violent crime, murder, etc. That could cause a violent Homo sapiens to evolve into something less hostile and more agreeable to diplomatic resolution. Regards, Gerry Reinhart-Waller Lynn D. Johnson, Ph.D. wrote: > I agree with Steve here; the issue of dietary change is ignored. He > also downplays the social factors some, continuing to emphasize the > medical approach to treatment. If diet and/or social change are > implicated, then more Prozac is merely finger-in-the-dike. > Lynn > > Steve Hovland wrote: > >> Since this guy is an MD, one can assign a high >> probability to the possibility that he knows almost >> nothing about nutrition, including the importance >> of healthy fats in the diet. >> >> He mentions hormones without any consideration >> of what the body uses to build hormones. >> >> Steve Hovland >> www.stevehovland.net >> >> >> -----Original Message----- >> From: Lynn D. Johnson, Ph.D.
[SMTP:ljohnson at solution-consulting.com] >> Sent: Tuesday, May 24, 2005 9:27 PM >> To: The new improved paleopsych list >> Subject: [Paleopsych] Adaptiveness of depression >> >> Apropos of our recent discussion on the survival value of PTSD, here >> is an interesting expert interview from medscape psychiatry on >> depression. FYI, the 1925 birth cohort had a lifetime prevalence of >> 4% for depression; today it appears to be 17%; these guys say 25% but >> I think that is high. In any case, it is an epidemic. >> LJ >> >> http://www.medscape.com/viewarticle/503013_print >> (registration required) >> >> Expert Interview >> >> >> Mood Disorders at the Turn of the Century: An Expert Interview With >> Peter C. Whybrow, MD >> >> Medscape Psychiatry & Mental Health. 2005;10(1). ©2005 Medscape >> >> Editor's Note: >> On behalf of Medscape, Randall F. White, MD, interviewed Peter C. >> Whybrow, MD, Director of the Semel Institute for Neuroscience & Human >> Behavior and Judson Braun Distinguished Professor and Executive >> Chair, Department of Psychiatry and Biobehavioral Sciences, David >> Geffen School of Medicine, University of California, Los Angeles. >> >> Medscape: The prevalence of mood disorders has risen in every >> generation since the early 20th century. In your opinion, what is >> behind this? >> >> Peter C. Whybrow, MD: I think that's a very interesting statistic. My >> own sense is that, especially in recent years, it can be explained by >> changes in the environment. The demand-driven way in which we live >> these days is tied to the increasing levels of anxiety and >> depression. You see that in the latest cohort, the one that was >> studied with the birth date of 1966, depression has grown quite >> dramatically compared with those who were born in cohorts before >> then. So anxiety now starts somewhere in the 20s or 30s, and >> depression is also rising, so the prevalence now for most people in >> America is somewhere around 25%.
>> >> Medscape: Lifetime prevalence? >> >> Dr. Whybrow: Yes, lifetime prevalence. >> >> I think it's a socially driven phenomenon; obviously there's not a >> change in the genome. I think we've been diagnosing depression fairly >> accurately for a fair length of time now, since the 1960s, and the >> people who were born in the 1960s are now being diagnosed with >> depression at a higher rate than those who were born earlier and who >> were diagnosed in the 1960s, 1970s, and 1980s. >> >> Medscape: And is this true of both unipolar and bipolar mood disorders? >> >> Dr. Whybrow: It's particularly true of unipolar disorder. There has >> been a growth in interest in bipolar disorder, partly I think because >> of the zeal of certain authors who have seen cyclothymia and other >> oscillating mood states as part of a larger spectrum of >> manic-depressive illness, much as Kraepelin did. And I think that has >> expanded the prevalence of the bipolar spectrum to probably 5% or 6%, >> but the major increase in prevalence, I think, would be diagnosed as >> unipolar depression. >> >> Medscape: Do you think that unipolar and bipolar mood disorders are >> distinct, or do they lie on a continuum that includes all the mood >> disorders in our nosology? >> >> Dr. Whybrow: The way I see it is they are both phenotypes, but they >> have considerable overlap. If you think about them from the >> standpoint of the psychobiology of the illnesses, I think they are >> distinct. >> >> Medscape: Why are women more vulnerable than men to depression? >> >> Dr. Whybrow: My own take on that is that it is driven by the change >> in hormones that you see in women. Estrogen and progesterone, plus >> thyroid and steroids, are the most potent modulators of central >> nervous system activity. 
If you tie the onset of symptoms to menarche >> or the sexual differentiation in boys and girls, you find that prior >> to that age, which is now around 11 to 13, boys and girls have >> essentially the same depressive symptoms. As adolescence appears, you >> find this extraordinary increase in young women who complain of >> depressive symptoms of one sort or another. Boys tend to have other >> things, of course, particularly what some consider socially deviant >> behavior. >> >> The other interesting thing one sees quite starkly in bipolar illness >> is that, after the age of 50 or so, when menopause occurs, severe >> bipolar illness can actually improve. I've seen that on many occasions. >> >> Also interesting and relevant to the hormonal thesis is the way in >> which thyroid hormone and estrogen compete for each other at some of >> the promoter regions of various genes. In the young woman who has >> bipolar disease -- this is pertinent to the work I have done over the >> years with thyroid hormone -- and who becomes hypothyroid, estrogen >> becomes much more available in the central nervous system, and you >> then see the malignant forms of bipolar illness. Almost all the >> individuals who have severe rapid cycling between the ages of about >> 20 and 40 are women -- high proportions, something like 85% to 90%. >> So this all suggests that there is an interesting modulation of >> whatever it is that permits severe affective illnesses in women by >> the fluxes of estrogen and progesterone. >> >> There is, of course, a whole other component of this, which is a >> social concern in regard to the way in which women are treated in our >> society compared with men. 
It's far different from when I was first a >> psychiatrist back in the 1960s and 1970s; women are much more >> independent now, but there is still some element of depression being >> driven in part by the social context of their lives, both in family >> and in the workplace, where they still do not enjoy absolute equality. >> >> Medscape: Why would the genotype for mood disorders persist in the >> human genome? What aspect of the phenotype is adaptive? >> >> Dr. Whybrow: I think you have to divide that question into 2. If we >> talk about bipolar disease and unipolar disease separately, it makes >> more sense. >> >> If we take bipolar disease first, I think there is much in the energy >> and excitement of what one considers hypomania that codes for >> excellence, or at least engagement, in day-to-day activities. One of >> the things that I've learned over the years is that if you find an >> individual who has severe manic depressive disease, and you look at >> the family, the family is very often of a higher socioeconomic level >> than one might anticipate. And again, if you look at a family that is >> socially successful, you very often find within it persons who have >> bipolar disease. >> >> So I think that there is a group of genes that codes for the way in >> which we are able to engage emotionally in life. I talk about this in >> one of my books called A Mood Apart [1] -- how emotion as the vehicle >> of expression and understanding of other people's expression is what >> goes wrong in depression and in mania. I think that those particular >> aspects of our expression are rooted in the same set of genes that >> codes for what we consider to be pathology in manic-depressive >> disease. 
But the interesting part is that if you have, let's say for >> sake of easy discussion, 5 or 6 genes that code for extra energy (in >> the dopamine pathway and receptors, and maybe in fundamental cellular >> activity), you turn out to be a person who sleeps rather little, who >> has a positive temperament, and so on. If you get another 1 or 2 of >> them, you end up in the insane asylum. >> >> So I think there is an extraordinary value to those particular >> genetic pools. So you might say that if you took the bipolar genes >> out of the human behavioral spectrum, then you would find that >> probably we would still be -- this is somewhat hyperbolic -- >> wandering around munching roots and so on. >> >> Medscape: What about unipolar disorder? >> >> Dr. Whybrow: Unipolar is different, I think. This was described in >> some detail in A Mood Apart .[1] I think that the way in which >> depression comes about is very much like the way in which vision >> fails, as an analogy. We can lose vision in all sorts of ways. We can >> lose it because of distortions of the cornea or the lens; the retina >> can be damaged; we can have a stroke in the back of our heads; or >> there can be a pituitary tumor. >> >> I think it's analogous in the way depression strikes: from white >> tract disease in old age to the difficulties you might have following >> a bout of influenza, plus the sensitivity we have to social rank and >> all other social interactions. Those things can precipitate a >> dysregulation of the emotional apparatus, much as you disturb the >> visual apparatus, and you end up with a person who has this >> depressive phenomenon. In some individuals, it repeats itself because >> of a particular biological predisposition. In 30% or 40% of >> individuals, it's a one-time event, which is tied to the >> circumstances under which they find themselves. So I think that's a >> very distinct phenomenon compared with bipolar illness. 
>> >> In its early forms, depression is a valuable adaptive mechanism >> because it does accurately focus on the fact that the world is not >> progressing positively, so the person is driven to do something about >> it. Sometimes the person is incapable of doing something about it, or >> the adaptive mechanisms are not sufficient, and then you get this >> phenomenon of depression. I know that there have been speculations >> about the fact that this then leads to the person going to the edge >> of the herd and dying because he or she doesn't eat, et cetera, and >> it relieves the others of the burden of caring for him or her. And >> that might have been true years ago, when we lived in small >> hunter-gatherer groups. But of course today we profess, not always >> with much success, to have a humanitarian slant, and we take care of >> people who have these phenomena, bringing them back into the herd as >> they get better. >> >> So I think that it's a bit of a stretch to say that this has >> evolutionary advantage because it allows people to go off and die, >> but I think that in the bipolar spectrum there are probably genes >> that code for extra activity, which we consider to have social value. >> >> Medscape: Let's go back to bipolar disorder. The current approach to >> finding new treatments for bipolar disorder is to try medications >> that were developed for other conditions, especially epilepsy. Do we >> know enough yet about this disease to attempt to develop specific >> treatments de novo? >> >> Dr. Whybrow: Well, we're getting there, but we're not really yet in >> that position. You're quite right, most of the treatments have come >> from either empirical observations, such as lithium, or because there >> is this peculiar association between especially temporal lobe >> epilepsy and bipolar disease, both in terms of phenomena and also >> conceptually.
But we do know more and more about the inositol cycle, >> we do know something about some of the genes that code for bipolar >> illness, so I think we will eventually be able to untangle the >> pathophysiology of some of the common forms. >> >> I think the problem is that there are multiple genes that contribute >> to the way in which the cells dysregulate, so it's probably not that >> we'll find one cause of bipolar illness and therefore be able to find >> one medication as we've found for diabetes, for example. >> >> Medscape: Let's talk about your new book American Mania: When More Is >> Not Enough , in which you use mania as a metaphor to describe aspects >> of American culture.[2] >> >> Dr. Whybrow: The metaphor came because of the work I've done over the >> years with bipolar illness. In the late 1990s, when I first moved to >> California, I was struck by the extraordinary stock-market bubble and >> the excitement that went on. You may remember those days: people were >> convinced that this would go on forever, that we'd continue to wake >> up to the sweet smell of money and happiness for the rest of our >> days. This seemed to me to have much in common with the delusional >> systems one sees in mania. >> >> So the whole thing in my mind began to be an interesting metaphor for >> what was happening in the country, as one might see it through the >> eyes of a psychiatrist watching an individual patient. I began to >> investigate this, and what particularly appealed to me was that the >> activity that you see in mania eventually breaks, and of course this >> is exactly what happened with the bubble. Then all sorts of >> recriminations begin, and you enter into a whole new phase. >> >> The book takes off from there, but it has also within it a series of >> discussions about the way in which the economic model that we have >> adopted, which is, of course, Adam Smith's economic model, is based >> upon essentially a psychological theory. 
If you know anything about >> Adam Smith, you'll know that he was a professor of moral philosophy, >> which you can now translate into being a psychologist. And his theory >> was really quite simple. On one hand, he saw self-interest (which >> these days we might call survival), curiosity, and social ambition as >> the 3 engines of wealth creation. But at the same time, he recognized >> that without social constraints, without the wish we have, all of us, >> to be loved by other people (therefore we're mindful of not doing >> anything too outrageous), the self-interest would run away to greed. >> But he convinced himself and a lot of other people that if >> individuals were free to do what they wished and do it best, then the >> social context in which they lived would keep them from running away >> to greed. >> >> If you look at that model, which is what the book American Mania: >> When More Is Not Enough does, you can see that we now live in a much >> different environment from Smith's, and the natural forces to which >> he gave the interesting name "the invisible hand," and which made all >> this come out for the benefit of society as a whole, have changed >> dramatically. The invisible hand is losing its grip, in fact, because >> we now live in a society that is extremely demand-driven, and we are >> constantly rewarded for individual endeavor or self-interest through >> our commercial success, but very little for the social investment >> that enables us to have strong unions with other people. This is >> particularly so in the United States. >> >> So you can see that things have shifted dramatically and have gone >> into, if you go back to the metaphor, what I believe is sort of a >> chronic frenzy, a manic-like state, in which most people are now >> working extremely hard.
Many of them are driven by debt; other people >> are driven by social ambition, but to the destruction very often of >> their own personal lives and certainly to the fabric of the community >> in which they live. >> >> References >> >> 1. Whybrow PC. A Mood Apart: The Thinker's Guide to Emotion and Its >> Disorders. New York, NY: HarperCollins; 1997. >> 2. Whybrow PC. American Mania: When More Is Not Enough. New York, >> NY: WW Norton; 2005.

_______________________________________________
paleopsych mailing list
paleopsych at paleopsych.org
http://lists.paleopsych.org/mailman/listinfo/paleopsych

From HowlBloom at aol.com Thu May 26 05:10:28 2005
From: HowlBloom at aol.com (HowlBloom at aol.com)
Date: Thu, 26 May 2005 01:10:28 EDT
Subject: [Paleopsych] the gut, the heart, and the self--a quick note on the wandering vagus nerve
Message-ID: <1fe.25b066c.2fc6b444@aol.com>

The vagal nerve's role in enteric processes, cardiac operation, speech, and hearing has grabbed my attention. A single nerve connecting the gut, the heart, and the speaker and listener we call the self? It's potentially an amazing social integrator. I've been searching for the social centers of the brain for years and these seem to be a part of that complex.

Howard

---

vagus nerve n. Either of the tenth and longest of the cranial nerves, passing through the neck and thorax into the abdomen and supplying sensation to part of the ear, the tongue, the larynx, and the pharynx, motor impulses to the vocal cords, and motor and secretory impulses to the abdominal and thoracic viscera.
Also called pneumogastric nerve. The American Heritage® Dictionary of the English Language, Fourth Edition. Copyright © 2004, 2000 by Houghton Mifflin Company. Published by Houghton Mifflin Company. All rights reserved.

----------

Retrieved May 26, 2005, from the World Wide Web:
http://www.stressrelease.info/polyvagal_eng.html

The polyvagal theory: phylogenetic substrates of a social nervous system
Stephen W. Porges, Ph.D.

Abstract
The evolution of the autonomic nervous system provides an organizing principle to interpret the adaptive significance of physiological responses in promoting social behavior. According to the polyvagal theory, the well-documented phylogenetic shift in neural regulation of the autonomic nervous system passes through three global stages, each with an associated behavioral strategy. The first stage is characterized by a primitive unmyelinated visceral vagus that fosters digestion and responds to threat by depressing metabolic activity. Behaviorally, the first stage is associated with immobilization behaviors. The second stage is characterized by the sympathetic nervous system that is capable of increasing metabolic output and inhibiting the visceral vagus to foster mobilization behaviors necessary for "fight or flight". The third stage, unique to mammals, is characterized by a myelinated vagus that can rapidly regulate cardiac output to foster engagement and disengagement with the environment. The mammalian vagus is neuroanatomically linked to the cranial nerves that regulate social engagement via facial expression and vocalization. As the autonomic nervous system changed through the process of evolution, so did the interplay between the autonomic nervous system and the other physiological systems that respond to stress, including the cortex, the hypothalamic-pituitary-adrenal axis, the neuropeptides of oxytocin and vasopressin, and the immune system.
From this phylogenetic orientation, the polyvagal theory proposes a biological basis for social behavior and an intervention strategy to enhance positive social behavior. Copyright 2001 Elsevier Science B.V. All rights reserved.

Keywords: Vagus; Respiratory sinus arrhythmia; Evolution; Autonomic nervous system; Cortisol; Oxytocin; Vasopressin; Polyvagal theory; Social behavior

________

Retrieved April 11, 2005, from the World Wide Web:
http://www.hosppract.com/issues/1999/07/gershon.htm

The Enteric Nervous System: A Second Brain
MICHAEL D. GERSHON
Columbia University

Once dismissed as a simple collection of relay ganglia, the enteric nervous system is now recognized as a complex, integrative brain in its own right. Although we still are unable to relate complex behaviors such as gut motility and secretion to the activity of individual neurons, work in that area is proceeding briskly--and will lead to rapid advances in the management of functional bowel disease.

____________________________________
Dr. Gershon is Professor and Chair, Department of Anatomy and Cell Biology, Columbia University College of Physicians and Surgeons, New York. In addition to numerous scientific publications, he is the author of The Second Brain (Harper Collins, New York, 1998).
____________________________________

Structurally and neurochemically, the enteric nervous system (ENS) is a brain unto itself.
Within those yards of tubing lies a complex web of microcircuitry driven by more neurotransmitters and neuromodulators than can be found anywhere else in the peripheral nervous system. These allow the ENS to perform many of its tasks in the absence of central nervous system (CNS) control--a unique endowment that has permitted enteric neurobiologists to investigate nerve cell ontogeny and chemical mediation of reflex behavior in a laboratory setting. Recognition of the importance of this work as a basis for developing effective therapies for functional bowel disease, coupled with the recent, unexpected discovery of major enteric defects following the knockout of murine genes not previously known to affect the gut, has produced a groundswell of interest that has attracted some of the best investigators to the field. Add to this that the ENS provides the closest thing we have to a window on the brain, and one begins to understand why the bowel--the second brain--is finally receiving the attention it deserves. Discovery of the ENS The field of neurogastroenterology dates back to the nineteenth-century English investigators William M. Bayliss and Ernest H. Starling, who demonstrated that application of pressure to the intestinal lumen of anesthetized dogs resulted in oral contraction and anal relaxation, followed by a propulsive wave (which they referred to as the "law of the intestine" and we now call the peristaltic reflex) of sufficient strength to propel food through the digestive tract. Because the reflex persisted even after all of the extrinsic nerves to the gut had been severed, Bayliss and Starling deduced--correctly--that the ENS was a self-contained hub of neuronal activity that operated largely independently of CNS input. 
Eighteen years later, the German scientist Paul Trendelenburg confirmed these findings by demonstrating that the peristaltic reflex could be elicited in vitro in the isolated gut of a guinea pig, without participation of the brain, spinal cord, dorsal root, or cranial ganglia. Trendelenburg knew this finding was unique; no other peripheral organ had such a highly developed intrinsic neural apparatus. Cut the connection linking the bladder or the skeletal muscles to the CNS, and all motor activity ceases. Cut the connection to the gut, however, and function persists. Trendelenburg's results were published in 1917. That they were accepted by at least some of his contemporaries is evident from the description of the ENS contained in John N. Langley's classic textbook, The Autonomic Nervous System, published in 1921. Like Trendelenburg, Langley knew that intestinal function must involve not only excitatory and inhibitory motor neurons to innervate the smooth muscle, glands, and blood vessels but also primary afferent neurons to detect increases in pressure, as well as interneurons to coordinate the wave of activity down the length of the bowel. The brain could not perform these complex functions alone, he reasoned, because the gut is innervated by only a few thousand motor fibers. Logic dictated that the nerve cells in the bowel--which Langley suspected, and we now know, number in the millions--had to have their own separate innervation. Thus, when he described the autonomic nervous system, it was as three distinct parts: the sympathetic, the parasympathetic, and the enteric. Unfortunately, Langley, who was owner and editor of the Journal of Physiology, alienated many of his colleagues. After his death, editorship of the Journal passed to the Physiological Society, whose members reclassified the enteric neurons as parasympathetic relay ganglia, part of the vagal supply that directs gut motility. To an extent, of course, they were right. 
The vagus nerve is normally responsible for commanding the vast microcircuits of the ENS to carry out their appointed tasks. What it cannot do, as Langley and his predecessors intuitively grasped, is tell them how to carry them out. That is strictly an inside job, and one that the gut is marvelously capable of performing. In addition to propulsion, the ENS bears primary responsibility for self-cleaning, regulating the luminal environment, working with the immune system to defend the bowel, and modifying the rate of proliferation and growth of mucosal cells. Neurons emanating from the gut also innervate ganglia in neighboring organs, such as the gallbladder and pancreas (Figure 1). After Langley's death, however, the concept of an independent ENS fell by the wayside, as investigators turned their attention to new developments in chemical neurotransmission. Epinephrine and acetylcholine had been identified as the sympathetic and parasympathetic transmitters, respectively (although the true sympathetic transmitter was later revealed to be norepinephrine), and neuroscientists were taken with the idea of a neatly matched set of chemical modulators, one for each pathway. The "two neurotransmitters, two pathways" theory remained essentially unchallenged until 1965-1967, when I proposed in a series of papers in Science and the Journal of Physiology that there existed a third neurotransmitter, namely serotonin (5-hydroxytryptamine, 5-HT), that was both produced in and targeted to the ENS. A Third Neurotransmitter Since serotonin was already known to possess neurotransmitterlike qualities, the storm of protest that greeted this suggestion came as quite a shock to me. At the time, of course, there was no scientific proof that enteric neurons contain endogenous serotonin or can synthesize it from its amino acid precursor, L-tryptophan. 
By the early 1980s, however, enough evidence had accumulated--not only about serotonin but also about dozens of other previously unknown neurotransmitters--that most investigators agreed that the old "two and two" hypothesis no longer seemed credible. It is now generally recognized that at least 30 chemicals of different classes transmit instructions in the brain, and that all of these classes are also represented in the ENS. Recent attempts to determine which peptides and small-molecule neurotransmitters are stored (often collectively) in various enteric neurons have begun to shed light on this remarkable confluence and have provided a more detailed picture of the functional anatomy of the bowel. By assigning a chemical code to each combination of neurotransmitters and then matching the code with the placement of lesions in animal models, investigators have been able to determine the location of specific, chemically defined subsets of enteric neurons. This work provides ample evidence that the ENS is no simple collection of relay ganglia but rather a complex integrative brain in its own right. However, a number of serious questions must be addressed before we can state with assurance that we understand how the neurons of the gut mediate behavior. Species differences have been found in the chemical coding of enteric nerve cells, so observations made in guinea pigs cannot be directly applied to other rodents and certainly not to humans. This is somewhat baffling, because if there are patterns of enteric behavior that are common to all mammalian species, then the neurons responsible for those behavior patterns ought to be fairly uniform. Another issue concerns the degree to which the various neurotransmitters identified in the bowel are physiologically relevant. 
The criteria are stringent: in addition to being present in the appropriate cells and synapses, the substance being tested should 1) have demonstrable biosynthesis, 2) be released on nerve stimulation, 3) mimic the activities of the endogenously secreted transmitter, 4) have an adequate means of endogenous inactivation, and 5) be antagonized by the same drugs in the laboratory and in vivo. Acetylcholine, norepinephrine, nitric oxide, serotonin, and vasoactive intestinal peptide meet the criteria, but less is known about other candidate molecules. Anatomy of the ENS The ENS is remarkably brainlike, both structurally and functionally. Its neuronal elements are not supported by collagen and Schwann cells, like those in the rest of the peripheral nervous system, but by glia that resemble the astrocytes of the CNS (Figure 2). These glia do not wrap individual axons in single membranous invaginations; rather, entire bundles of axons are fitted into the invaginations of enteric glia. The axons thus abut one another in much the same manner as those of the olfactory nerve. The ENS is also vulnerable to what are generally thought of as brain lesions: Both the Lewy bodies associated with Parkinson's disease and the amyloid plaques and neurofibrillary tangles identified with Alzheimer's disease have been found in the bowels of patients with these conditions. It is conceivable that Alzheimer's disease, so difficult to diagnose in the absence of autopsy data, may some day be routinely identified by rectal biopsy. In addition to the neurons and glia of the ENS, the gut contains interstitial cells of Cajal (ICC), which do not display neural and glial markers such as neurofilaments or glial fibrillary acidic proteins and are therefore believed to be a distinct cell type. Because ICC tend to be located between nerve terminals and smooth muscle cells, Ramón y Cajal believed that they were intermediaries that transmitted signals from nerve fibers to smooth muscle. 
For a while, this concept was abandoned, because it was thought that no intermediaries were required. Now, however, Cajal's concept is being reconsidered. ICC are also thought to act as pacemakers, establishing the rhythm of bowel contractions through their influence on electrical slow-wave activity. This assumption is supported by 1) the location of ICC in regions of smooth muscle where electrical slow waves are generated, 2) the spontaneous pacemakerlike activity displayed by ICC when they are isolated from the colon, and 3) the disappearance or disruption of electrical slow-wave activity when ICC are removed or uncoupled from gut smooth muscle. The entire structure of the ENS is arranged into two ganglionated plexuses (Figure 3). The larger, myenteric (Auerbach's) plexus, situated between the muscle layers of the muscularis externa, contains the neurons responsible for motility and for mediating the enzyme output of adjacent organs. The smaller, submucosal (Meissner's) plexus contains sensory cells that "talk" to the neurons of the myenteric plexus, as well as motor fibers that stimulate secretion from epithelial crypt cells into the gut lumen. The submucosal plexus contains fewer neurons and thinner interganglionic connectives than does the myenteric plexus, and has fewer neurons per ganglion. Electrical coupling between smooth muscle cells enables signals to rapidly alter the membrane potential of even those cells that have no direct contact with neurons and ensures that large regions of bowel--rather than small groups of muscle cells--will respond to nerve stimulation. The Serotonin Model Some sensory neurons are directly activated by the stimuli to which they respond, making them both sensory receptors and primary afferent neurons. Other sensory neurons, such as the auditory and vestibular ganglia, do not respond to sensory stimuli but are driven by other, nonneuronal cells that act as sensory receptors. 
It has not yet been conclusively shown to which of these categories the primary afferent neurons of the submucosal plexus belong. They could be mechanoreceptors that become excited when their processes in the intestinal mucosa are deformed. Or they could be stimulated secondarily to the activation of a mechanosensitive mucosal cell. Such cells do exist in the gut--they are the enterochromaffin cells of the gastrointestinal epithelium, and they contain over 95% of the serotonin found in the body. (A small amount of serotonin is also secreted by ENS interneurons.) The serotonin in enterochromaffin cells is stored in subcellular granules that spontaneously release the amine into the adjacent lamina propria, which is endowed with at least 15 distinct serotonin receptor subtypes (Figure 4). Additional serotonin is released when the cells are stimulated by increased intraluminal pressure, vagal stimulation, anaphylaxis, acidification of the duodenal lumen, or exposure to norepinephrine, acetylcholine, cholera toxin, or a variety of other chemical substances. In a patient receiving radiation therapy for cancer, for example, excess serotonin leaking out of enterochromaffin cells activates receptor subtype 5-HT3, located on the extrinsic nerves, rapidly leading to nausea and vomiting. The symptoms can be blocked by giving an antagonist like ondansetron that is specific for 5-HT3 receptors. The antagonist does not interfere with other serotonin-mediated functions, such as peristalsis or self-cleaning activities, because they involve other 5-HT receptor subtypes. We now have extensive data (from studies of the serotonin antagonist 5-HTP-DP and anti-idiotypic antibodies that recognize 5-HT receptors) confirming that 1) serotonin stimulates the peristaltic reflex when it is applied to the mucosal surface of the bowel, 2) serotonin is released whenever the peristaltic reflex is initiated, and 3) the reflex is diminished when the mucosal source of serotonin is removed. 
Consequently, there is wide support for the hypothesis, first proposed by Edith Bülbring in 1958, that enterochromaffin cells act as pressure transducers and that the serotonin they secrete acts as a mediator to excite the mucosal afferent nerves, initiating the peristaltic reflex (Figure 5). The serotonergic neurons of the ENS probably inactivate the amine by a rapid reuptake process similar to that described for the CNS. A specific 5-HT plasma membrane transporter protein has recently been cloned; it is expressed in epithelial cells scattered throughout the gut as well as in the brain. Serotonin blockade can also be achieved by other means, such as removal of acetylcholine or calcitonin gene-related peptide. Uptake of serotonin can be blocked in both the ENS and CNS by antidepressant drugs such as chlorimipramine, fluoxetine, and zimelidine that have an affinity for the transporter. In the absence of rapid uptake, serotonin continues to flow outward in the direction of neighboring nerve cells, which become excited in turn. Eventually, however, desensitization takes place, and the process grinds to a halt. In the guinea pig model, insertion of an artificial fecal pellet into the distal colon was followed by rapid proximal-to-distal propulsion. The same effect was achieved even when the experiment was repeated for eight hours. Addition of a low dose of fluoxetine accelerated propulsion, while at higher doses the 5-HT receptors became desensitized and intestinal motility slowed and eventually stopped. Clinical Implications. Obviously, these data have important implications for physicians who regularly prescribe mood-altering drugs. Because the neurotransmitters and neuromodulators present in the brain are nearly always present in the bowel as well, drugs designed to act at central synapses are likely to have enteric effects. Early in the course of antidepressant therapy, about 25% of patients report some nausea or diarrhea. 
With higher dosages or longer duration of therapy, serotonin receptors become desensitized, and constipation may occur. (Presumably, the 75% of patients who do not complain of gastrointestinal disturbance either are not taking enough of the antidepressant or have compensatory mechanisms that reduce the impact of prolonged serotonin availability.) If these effects--which are not side effects per se but predictable consequences of transporter protein blockade--are not anticipated and carefully explained to the patient, they are likely to reduce adherence and limit the value of treatment. On the other hand, the same drugs that tend to cause difficulty for patients who take them for emotional illness may be a godsend to those with functional bowel disease. Moreover, because the ENS reacts promptly to changes in serotonin availability, patients with chronic bowel problems often find their symptoms relieved at pharmacologic concentrations far below those used in conventional antidepressant therapy. More About the Brain-Gut Connection Provided that the vagus nerve is intact, a steady stream of messages flows back and forth between the brain and the gut. We all experience situations in which our brains cause our bowels to go into overdrive. But in fact, messages departing the gut outnumber the opposing traffic by about nine to one. Satiety, nausea, the urge to vomit, abdominal pain--all are the gut's way of warning the brain of danger from ingested food or infectious pathogens. And while the brain normally responds with appropriate signals, the ENS can take over when necessary, as for example when vagal input has been surgically severed. Naturally, the balance of power between the two nervous systems is a topic of considerable scientific interest. It has been proposed that vagal motor axons innervate specialized command neurons in the myenteric plexus, which are responsible for regulating the intrinsic microcircuits of the ENS. 
In favor of this concept is the observation that vagal motor fibers appear to synapse preferentially on certain types of enteric neuron (Figure 6). For example, vagal efferent axons preferentially innervate neurons in the myenteric plexus of the stomach that express serotonin or vasoactive intestinal peptide. Other recent studies have suggested that vagal input may be more widely dispersed than the command-neuron hypothesis would imply, especially in the stomach. The interplay between the two systems is thus still a bit unclear. Correlation or Causation? Whatever the exact connection, the relationship between the cerebral and enteric brains is so close that it is easy to become confused about which is doing the talking. Until peptic ulcer was found to be an infectious disease, for example, physicians regarded anxiety as the chief cause. Now that we recognize Helicobacter pylori as the cause, it seems clear that the physical sensation of burning epigastric pain is generally responsible for the emotional symptoms, rather than the other way around. But because most ulcer patients, if questioned, will admit to feeling anxious, the misunderstanding persisted for decades. Another illustration is ulcerative colitis, which was considered the prototypic psychosomatic disease when I was in medical school. There were even lectures on the "ulcerative colitis personality." The ulcerative colitis personality, if indeed there is one, is a consequence of living with a disabling autoimmune disease that prevents patients from feeling relaxed and comfortable in social situations. It is altogether possible that with passage of time, many of the ailments currently labeled as functional bowel diseases will prove to have similarly identifiable physiologic causes. Embryonic Development: New Insights In order to better appreciate ENS functioning, it is helpful to know something about its embryonic development. 
Which sites in the embryo give rise to the precursors of enteric neurons and glia? What impels these precursors to migrate to the bowel? And what features of the enteric microenvironment ultimately cause these incipient nerve cells to arrest their journey and undergo phenotypic differentiation? The neural and glial precursor cells of the ENS are the descendants of émigrés from the vagal, rostral-truncal, and sacral levels of the neural crest. Of these three, the vagal crest is the most influential, because its cells colonize the entire gut. The rostral-truncal crest colonizes only the esophagus and adjacent stomach, whereas the sacral crest colonizes the postumbilical bowel. It might be assumed that premigratory cells in each of these regions are already programmed to locate their appropriate portion of the gut and differentiate as enteric neurons or glia. However, that idea has been shown to be incorrect. The premigratory crest population is multipotent--so much so that whole regions of the crest can be interchanged in avian embryos without interfering with ENS formation. Furthermore, even the group of crest-derived cells that are destined to colonize the bowel contains pluripotent precursors with a number of "career" options. Terminal differentiation does not take place until the émigrés have reached the gut wall and interacted with the enteric microenvironment via a number of specific chemical growth factor-receptor combinations. If these molecules are unavailable, the migration of the crest-derived cells will be cut short, and aganglionosis of the remaining bowel will result. Nerve cell lineages are defined by their common dependence on particular growth factors or genes. For example, there is a very large lineage defined by dependence on stimulation of the Ret receptor by glial cell line-derived neurotrophic factor (GDNF) and its binding molecule, GFR-alpha1. 
This so-called first precursor gives rise to essentially all of the neurons of the bowel, with the exception of those of the rostral foregut. Partial loss of GDNF-Ret may result in a precursor pool that is too small to colonize the entire gut, while complete loss of either GDNF or Ret eliminates the possibility of nerve cells below the level of the esophagus. A second lineage depends on Mash-1, a member of the basic helix-loop-helix family of transcriptional regulators. These neurons, which include those of the rostral foregut as well as a subset of cells in the remainder of the bowel, are transiently catecholaminergic, develop early (enteric neurons develop in successive waves), and generate the entire set of enteric serotonergic neurons. A third lineage is independent of Mash-1, develops later, and gives rise to peptidergic neurons such as those that contain calcitonin gene-related peptide. Sublineages of enteric neurons include those dependent on neurotrophin 3 (NT-3) and endothelin 3 (ET-3). The peptide-receptor combination ET-3-ETB is particularly interesting because it appears to act as a brake that prevents migrating cells from differentiating prematurely--before colonization of the gastrointestinal tract has been completed. Absence of ET-3 results in loss of nerve cells in the terminal portion of the bowel. In humans, this condition, known as Hirschsprung's disease (congenital megacolon), occurs in roughly one in 5,000 live births. Without innervation, intestinal traffic is blocked, and the colon becomes enormously dilated above the blockage. Surgery is extremely difficult because the aganglionic portion of the infant's intestine must be removed without damaging functioning ganglionic tissue. One experimental model for this disease, the lethal spotted mouse, lacks ET-3, while another laboratory strain, the piebald mouse, lacks the endothelin receptor ETB. In either case, the result is a mouse with the equivalent of Hirschsprung's disease. 
(The link between ET-3 deficiency and aganglionosis was discovered quite by accident, when Masashi Yanagisawa knocked out genes coding for ET-3 and ETB to study their effect on blood pressure regulation. The animals had such severe bowel abnormalities that they did not live long enough to manifest cardiovascular problems.) Our laboratory is currently attempting to define exactly where the endothelins are expressed, as well as to clarify the role of another putative factor in the pathogenesis of Hirschsprung's disease, laminin-1. This is an extracellular matrix protein secreted by smooth muscle precursors that both encourages adhesion of migrating cells and promotes their differentiation into neurons (Figure 7). We are trying to produce a transgenic mouse that overexpresses laminin in the gut, and anticipate that a Hirschsprung's disease equivalent will result. We also are studying an interesting group of molecules called netrins, which are expressed in both gut epithelium and the CNS. Netrins are attraction molecules that appear to guide migrating axons in the developing CNS and neuronal precursors in the bowel and may be especially important in forming the submucosal plexus. The attraction they create is so powerful that if netrin-expressing cells are placed next to the gut, neuronal precursors will migrate out of the bowel in search of the netrin-expressing cells. Two potential receptors for the netrins have been identified, neogenin and DCC (deleted in colorectal cancer). Antibodies to DCC will counter the attraction of netrins and cause nerve cell precursors to suspend their migration. Other teams are studying avoidance molecules called semaphorins that are the opposite of the attraction molecules (i.e., they repel the enteric precursors). Mention should also be made of the important role that technology has played in accelerating scientific progress in this area. 
In particular, the ability to isolate crest-derived cell populations by magnetic immunoselection and then to culture them in defined media has made it possible to test the direct effects of putative growth factors on the precursors of neurons and glia, as well as to analyze cell receptors, transcription factors, and other developmentally relevant molecules (Figure 8). The alternative--carrying out experiments with mixed populations of enteric precursor cells or cells cultured in serum-containing media--would have produced unreliable results because of the uncontrolled interaction of crest-derived and non-crest-derived cells in media of unknown content. Future Directions Clearly, much has been accomplished since the days when the ENS was dismissed as an inconsequential collection of relay ganglia. Although we still are unable to relate such complex behaviors as gut motility and secretion to the activity of individual neurons, work in that area is proceeding briskly. Similarly, we are moving toward an overarching picture of how the CNS interacts with the microcircuits of the bowel to produce coordinated responses. Finally, it seems inevitable that advancement of basic knowledge about the ENS will be followed by related clinical applications, so that the next generation of medical practitioners and patients will find fewer ailments listed under the catch-all heading of functional bowel disease. ____________________________________ Copyright (C) 2001. The McGraw-Hill Companies. 
All Rights Reserved. ---------- Howard Bloom Author of The Lucifer Principle: A Scientific Expedition Into the Forces of History and Global Brain: The Evolution of Mass Mind From The Big Bang to the 21st Century Visiting Scholar-Graduate Psychology Department, New York University; Core Faculty Member, The Graduate Institute www.howardbloom.net www.bigbangtango.net Founder: International Paleopsychology Project; founding board member: Epic of Evolution Society; founding board member, The Darwin Project; founder: The Big Bang Tango Media Lab; member: New York Academy of Sciences, American Association for the Advancement of Science, American Psychological Society, Academy of Political Science, Human Behavior and Evolution Society, International Society for Human Ethology; advisory board member: Youthactivism.org; executive editor -- New Paradigm book series. For information on The International Paleopsychology Project, see: www.paleopsych.org for two chapters from The Lucifer Principle: A Scientific Expedition Into the Forces of History, see www.howardbloom.net/lucifer For information on Global Brain: The Evolution of Mass Mind from the Big Bang to the 21st Century, see www.howardbloom.net 
From shovland at mindspring.com Thu May 26 13:52:59 2005
From: shovland at mindspring.com (Steve Hovland)
Date: Thu, 26 May 2005 06:52:59 -0700
Subject: [Paleopsych] SW: Class and National Health
Message-ID: <01C561BF.8E922F40.shovland@mindspring.com>

65% of Americans now favor national health care. I recently saw Tony Blair on CSPAN telling Parliament about past improvements in their health care system and goals for future improvements. We have been given an inaccurate picture of what they are doing.

Steve Hovland
www.stevehovland.net

-----Original Message-----
From: Premise Checker [SMTP:checker at panix.com]
Sent: Wednesday, May 25, 2005 11:45 AM
To: paleopsych at paleopsych.org
Subject: [Paleopsych] SW: Class and National Health

Public Health: Class and National Health
http://scienceweek.com/2004/sb041015-6.htm

The following points are made by S.L. Isaacs and S.A. Schroeder (New Engl. J. Med. 2004 351:1137):

1) The health of the American public has never been better. Infectious diseases that caused terror in families less than 100 years ago are now largely under control.
With the important exception of AIDS and occasional outbreaks of new diseases such as the severe acute respiratory syndrome (SARS) or of old ones such as tuberculosis, infectious diseases no longer constitute much of a public health threat. Mortality rates from heart disease and stroke -- two of the nation's three major killers -- have plummeted.(1)

2) But any celebration of these victories must be tempered by the realization that these gains are not shared fairly by all members of our society. People in upper classes -- those who have a good education, hold high-paying jobs, and live in comfortable neighborhoods -- live longer and healthier lives than do people in lower classes, many of whom are black or members of ethnic minorities. And the gap is widening.

3) A great deal of attention is being given to racial and ethnic disparities in health care.(2-5) At the same time, the wide differences in health between the haves and the have-nots are largely ignored. Race and class are both independently associated with health status, although it is often difficult to disentangle the individual effects of the two factors.

4) The authors contend that increased attention should be given to the reality of class and its effect on the nation's health. Clearly, to bring about a fair and just society, every effort should be made to eliminate prejudice, racism, and discrimination. In terms of health, however, differences in rates of premature death, illness, and disability are closely tied to socioeconomic status. Concentrating mainly on race as a way of eliminating these problems downplays the importance of socioeconomic status on health.

5) The focus on reducing racial inequality is understandable since this disparity, the result of a long history of racism and discrimination, is patently unfair. Because of the nation's history and heritage, Americans are acutely conscious of race.
In contrast, class disparities draw little attention, perhaps because they are seen as an inevitable consequence of market forces or the fact that life is unfair. As a nation, we are uncomfortable with the concept of class. Americans like to believe that they live in a society with such potential for upward mobility that every citizen's socioeconomic status is fluid. The concept of class smacks of Marxism and economic warfare. Moreover, class is difficult to define. There are many ways of measuring it, the most widely accepted being in terms of income, wealth, education, and employment.

6) Although there are far fewer data on class than on race, what data exist show a consistent inverse and stepwise relationship between class and premature death. On the whole, people in lower classes die earlier than do people at higher socioeconomic levels, a pattern that holds true in a progressive fashion from the poorest to the richest. At the extremes, people who were earning $15,000 or less per year from 1972 to 1989 (in 1993 dollars) were three times as likely to die prematurely as were people earning more than $70,000 per year. The same pattern exists whether one looks at education or occupation. With few exceptions, health status is also associated with class.

References (abridged):
1. Institute of Medicine. The future of the public's health in the 21st century. Washington, D.C.: National Academies Press, 2003:20.
2. Smedley BD, Stith AY, Nelson AR, eds. Unequal treatment: confronting racial and ethnic disparities in health care. Washington, D.C.: National Academy Press, 2003.
3. Steinbrook R. Disparities in health care -- from politics to policy. N Engl J Med 2004;350:1486-1488.
4. Burchard EG, Ziv E, Coyle N, et al. The importance of race and ethnic background in biomedical research and clinical practice. N Engl J Med 2003;348:1170-1175.
5. Winslow R. Aetna is collecting racial data to monitor medical disparities. Wall Street Journal. March 5, 2003:A1.

New Engl. J. Med.
http://www.nejm.org

--------------------------------
Related Material:

SCIENCE POLICY: ON HEALTH CARE DISPARITIES AND POLITICS

The following points are made by M. Gregg Bloche (New Engl. J. Med. 2004 350:1568):

1) Do members of disadvantaged minority groups receive poorer health care than whites? Overwhelming evidence shows that they do.(1) Among national policymakers, there is bipartisan acknowledgment of this bitter truth. Department of Health and Human Services (DHHS) Secretary Tommy Thompson has said that health disparities are a national priority, and congressional Democrats and Republicans are advocating competing remedies.(2,3)

2) So why did the DHHS issue a report last year, just days before Christmas, dismissing the "implication" that racial differences in care "result in adverse health outcomes" or "imply moral error... in any way"?(4) And why did top officials tell DHHS researchers to drop their conclusion that racial disparities are "pervasive in our health care system" and to remove findings of disparity in care for cancer, cardiac disease, AIDS, asthma, and other illnesses?(5) Secretary Thompson now says it was a "mistake". "Some individuals," Thompson told a congressional hearing in February, "wanted to be more positive."

3) But when word that DHHS officials had ordered a rewrite first surfaced in January, the department credited Thompson for the optimism. "That's just the way Secretary Thompson wants to create change," a spokesman told the Washington Post.
"The idea is not to say, `We failed, we failed, we failed,' but to say, `We improved, we improved, we improved.'" According to DHHS sources and internal correspondence, Thompson's office twice refused to approve drafts by department researchers that emphasized detailed findings of racial disparity.(5) In July and September, top officials within the offices of the assistant secretary for health and the assistant secretary for planning and evaluation asked for rewrites, resulting in the more upbeat version released before Christmas.

4) After unhappy DHHS staff members leaked drafts from June and July to congressional Democrats (and to the author), Thompson released the July version. For all who are concerned about equity in American medicine, issuance of the July draft was an important step forward. The researchers who prepared it showed that disparate treatment is pervasive, created benchmarks for monitoring gaps in care and outcomes, and thereby made it more difficult for those who deny disparities to resist action to remedy the problem. And therein lies the key to how the rewrite came about -- and to why the episode is so troubling.

References (abridged):
1. Smedley BD, Stith AY, Nelson AR, eds. Unequal treatment: confronting racial and ethnic disparities in health care. Washington, D.C.: National Academies Press, 2003.
2. Health Care Equality and Accountability Act, S. 1833, 108th Cong. (2003) (introduced by Sen. Daschle).
3. Closing the Health Care Gap Act of 2004, S. 2091, 108th Cong. (2004) (introduced by Sen. Frist).
4. National health care disparities report. Rockville, Md.: Agency for Healthcare Research and Quality, December 23, 2003.
5. Bloche MG. Erasing racial data erased report's truth. Los Angeles Times. February 15, 2004:M1.

New Engl. J. Med.
http://www.nejm.org

--------------------------------
Related Material:

ON THE COSTS OF DENYING HEALTH-CARE SCARCITY

The following points are made by G.C. Alexander et al (Arch Intern Med.
2004;164:593-596):

1) Scarcity is increasingly common in health care, yet many physicians may be reluctant to acknowledge the ways that limited health care resources influence their decisions. Reasons for this denial include that physicians are unaccustomed to thinking in terms of scarcity, uncomfortable with the role that limited resources play in poor outcomes, and hesitant to acknowledge the influence of financial incentives and restrictions on their practice. However, the denial of scarcity serves as a barrier to containing costs, alleviating avoidable scarcity, limiting the financial burden of health care on patients, and developing fair allocation systems.

2) Almost two decades ago, Aaron and Schwartz(1) published The Painful Prescription: Rationing Hospital Care, in which they examined the dramatic differences in health care expenditures between the US and Great Britain. Their examination highlighted the role of rationing within the British system and explored the difficult choices that must be made when trying to weigh the costs and benefits of many health care services. They noted that British physicians appeared to rationalize or redefine health care standards to deal more comfortably with resource limitations over which they had little control.

3) Since that time, physicians in the US have been under increasing pressure to acknowledge and respond to scarcity.(2-4) To begin to learn more about how they respond to these pressures, the authors conducted exploratory interviews with physicians faced with scarcity on a daily basis: transplant cardiologists involved in making decisions about which patients to place on the organ waiting list; pediatricians who frequently prescribe intravenous immunoglobulin (IVIg), a safe and effective medical treatment that has been in short supply(2); and general internists who make cost-quality trade-offs on a daily basis.
The interviews were conducted in confidential settings, included open-ended and directed questions, and were recorded and transcribed for subsequent analysis. During these interviews, the authors were struck by the vehemence with which the physicians they interviewed denied scarcity or, more commonly, the constraints that scarcity imposes on their practice. The authors were left with the impression that physicians' awareness of scarcity and its consequences lies under the surface.

4) The authors conclude: Physicians' limited time and energy will never suffice to fulfill the almost limitless needs of their patients. Similarly, the limited resources available to health care in the US guarantee that difficult choices must and will be made regarding the distribution of health care. Physicians are in a privileged position to help develop policies that promote fair allocation of health care resources. However, to do so, they must examine their own practices and those of the health care systems in which they work. Denial of the impact of scarcity limits physicians' abilities to play an active role in reshaping policies on a local and national level.

References (abridged):
1. Aaron HJ, Schwartz WB. The Painful Prescription: Rationing Hospital Care. Washington, DC: Brookings Institution; 1984.
2. Tarlach GM. Globulin goblins: shortfall in immune globulin supplies looms. Drug Topics. 1998;142:16.
3. Pear R. States ration low supplies of 5 vaccines for children. New York Times. September 17, 2002:A26.
4. Morreim EH. Fiscal scarcity and the inevitability of bedside budget balancing. Arch Intern Med. 1989;149:1012-1015.
5. United Network for Organ Sharing. Data. Available at: http://www.unos.org/data/default.asp?displayType=USData.

Archives of Internal Medicine
http://pubs.ama-assn.org

--------------------------------
Related Material:

HEALTH CARE AND RURAL AMERICA

The following points are made by S.J. Blumenthal and J. Kagen (J. Am. Med. Assoc.
2002 287:109):

1) Poverty, a major risk factor for poor health outcomes, is more prevalent in inner-city and rural areas than in suburban areas. In 1999, 14.3 percent of rural Americans lived in poverty compared to 11.2 percent of urban Americans. Irrespective of where they live, persons with lower incomes and less education are more likely to report unmet health needs, less likely to have health insurance coverage, and less likely to receive preventive health care. When combined, these variables raise the risk of death across all demographic populations.

2) Many of the ills associated with poverty, including lower total household income and a higher number of uninsured residents, are magnified in rural areas. In addition, rural communities have fewer hospital beds, physicians, nurses, and specialists per capita than urban areas, as well as greater transportation barriers to health care access.

3) The highest death rates for children and young adults are found in the most rural counties, and rural residents see physicians less often and usually later in the course of an illness. People in rural America experience higher rates of chronic disease and the health-damaging behaviors associated with them. They are more likely to smoke, to lose teeth, and to experience limitations from chronic health conditions. While death rates from homicides are greater in urban areas, mortality rates from unintentional injuries and motor vehicle crashes are disproportionately more common in rural America.

J. Am. Med. Assoc.
http://www.jama.com

--------------------------------
Related Material:

ON HEALTH OF THE GLOBAL POOR

The following points are made by P. Jha et al (Science 2002 295:2036):

1) Improvements in global health in the 2nd half of the 20th century have been enormous but remain incomplete. Between 1960 and 1995, life expectancy in low-income countries improved by 22 years as opposed to 9 years in high-income countries.
Mortality of children under 5 years of age in low-income countries has been halved since 1960. Even so, 10 million child deaths occur annually, and other enormous health burdens remain.

2) In 1998, almost a third of deaths in low- and middle-income countries were due to communicable diseases, maternal and perinatal conditions, and nutritional deficiencies: a death toll of 16 million, equivalent to the population of Florida. Of those deaths, 1.6 million were from measles, tetanus, and diphtheria, diseases routinely vaccinated against in wealthy countries.

3) Of the half million women who die annually due to pregnancy or childbirth, 99 percent do so in low- and middle-income countries. Approximately 2.4 billion people live at risk of malaria, and at least 1 million died from malaria in 1998. There are 8 million new cases of tuberculosis every year, and 1.5 million deaths from tuberculosis.

4) On the basis of current smoking trends, tobacco-attributable disease will kill approximately 500 million people over the next 5 decades. Over 20 million people have died already of HIV/AIDS, 40 million people are infected currently, and its spread continues unabated in many countries. The burden falls most heavily on poor countries and on the poorest of the people within those countries.

5) Of the 30 million children not receiving basic immunizations, 27 million live in countries with GNP below $1200 per capita. In India, the prevalence of childhood mortality, smoking, and tuberculosis is three times higher among the lowest income or educated groups than among the highest.
Science
http://www.sciencemag.org

_______________________________________________
paleopsych mailing list
paleopsych at paleopsych.org
http://lists.paleopsych.org/mailman/listinfo/paleopsych

From checker at panix.com Thu May 26 18:55:09 2005
From: checker at panix.com (Premise Checker)
Date: Thu, 26 May 2005 14:55:09 -0400 (EDT)
Subject: [Paleopsych] NYT: (Class) 15 Years on the Bottom Rung
Message-ID:

15 Years on the Bottom Rung
Class Matters - Social Class and Immigration in the United States of America
http://www.nytimes.com/2005/05/26/national/class/MEXICANS-FINAL.html
[Sixth in a series.]

By ANTHONY DePALMA

In the dark before dawn, when Madison Avenue was all but deserted and its pricey boutiques were still locked up tight, several Mexicans slipped quietly into 3 Guys, a restaurant that the Zagat guide once called "the most expensive coffee shop in New York."

For the next 10 hours they would fry eggs, grill burgers, pour coffee and wash dishes for a stream of customers from the Upper East Side of Manhattan. By 7:35 a.m., Eliot Spitzer, attorney general of New York, was holding a power breakfast back near the polished granite counter. In the same burgundy booth a few hours later, Michael A. Wiener, co-founder of the multibillion-dollar Infinity Broadcasting, grabbed a bite with his wife, Zena. Just the day before, Uma Thurman slipped in for a quiet lunch with her children, but the paparazzi found her and she left.

More Mexicans filed in to begin their shifts throughout the morning, and by the time John Zannikos, one of the restaurant's three Greek owners, drove in from the North Jersey suburbs to work the lunch crowd, Madison Avenue was buzzing. So was 3 Guys.

"You got to wait a little bit," Mr. Zannikos said to a pride of elegant women who had spent the morning at the Whitney Museum of American Art, across Madison Avenue at 75th Street.
For an illiterate immigrant who came to New York years ago with nothing but $100 in his pocket and a willingness to work etched on his heart, could any words have been sweeter to say?

With its wealthy clientele, middle-class owners and low-income work force, 3 Guys is a template of the class divisions in America. But it is also the setting for two starkly different tales about breaching those divides.

The familiar story is Mr. Zannikos's. For him, the restaurant - don't dare call it a diner - with its $20 salads and elegant décor represents the American promise of upward mobility, one that has been fulfilled countless times for generations of hard-working immigrants.

But for Juan Manuel Peralta, a 34-year-old illegal immigrant who worked there for five years until he was fired last May, and for many of the other illegal Mexican immigrants in the back, restaurant work today is more like a dead end. They are finding the American dream of moving up far more elusive than it was for Mr. Zannikos. Despite his efforts to help them, they risk becoming stuck in a permanent underclass of the poor, the unskilled and the uneducated.

That is not to suggest that the nearly five million Mexicans who, like Mr. Peralta, are living in the United States illegally will never emerge from the shadows. Many have, and undoubtedly many more will. But the sheer size of the influx - over 400,000 a year, with no end in sight - creates a problem all its own. It means there is an ever-growing pool of interchangeable workers, many of them shunting from one low-paying job to another. If one moves on, another one - or maybe two or three - is there to take his place.

Although Mr. Peralta arrived in New York almost 40 years after Mr. Zannikos, the two share a remarkably similar beginning. They came at the same age to the same section of New York City, without legal papers or more than a few words of English. Each dreamed of a better life.
But monumental changes in the economy and in attitudes toward immigrants have made it far less likely that Mr. Peralta and his children will experience the same upward mobility as Mr. Zannikos and his family.

Of course, there is a chance that Mr. Peralta may yet take his place among the Mexican-Americans who have succeeded here. He realizes that he will probably not do as well as the few who have risen to high office or who were able to buy the vineyards where their grandfathers once picked grapes. But he still dreams that his children will someday join the millions who have lost their accents, gotten good educations and firmly achieved the American dream.

Political scientists are divided over whether the 25 million people of Mexican ancestry in the United States represent an exception to the classic immigrant success story. Some, like John H. Mollenkopf at the City University of New York, are convinced that Mexicans will eventually do as well as the Greeks, Italians and other Europeans of the last century who were usually well assimilated after two or three generations. Others, including Mexican-Americans like Rodolfo O. de la Garza, a professor at Columbia, have done studies showing that Mexican-Americans face so many obstacles that even the fourth generation trails other Americans in education, home ownership and household income.

The situation is even worse for the millions more who have illegally entered the United States since 1990. Spread out in scores of cities far beyond the Southwest, they find jobs plentiful but advancement difficult. President Vicente Fox of Mexico was forced to apologize this month for declaring publicly what many Mexicans say they feel, that the illegal immigrants "are doing the work that not even blacks want to do in the United States." Resentment and race subtly stand in their way, as does a lingering attachment to Mexico, which is so close that many immigrants do not put down deep roots here.
They say they plan to stay only long enough to make some money and then go back home. Few ever do. But the biggest obstacle is their illegal status. With few routes open to become legal, they remain, like Mr. Peralta, without rights, without security and without a clear path to a better future.

"It's worrisome," said Richard Alba, a sociologist at the State University of New York, Albany, who studies the assimilation and class mobility of contemporary immigrants, "and I don't see much reason to believe this will change."

Little has changed for Mr. Peralta, a cook who has worked at menial jobs in the United States for the last 15 years. Though he makes more than he ever dreamed of in Mexico, his life is anything but middle class and setbacks are routine. Still, he has not given up hope. Querer es poder, he sometimes says: Want something badly enough and you will get it. But desire may not be enough anymore.

That is what concerns Arturo Sarukhan, Mexico's consul general in New York. Mr. Sarukhan recently took an urgent call from New York's police commissioner about an increase in gang activity among young Mexican men, a sign that they were moving into the underside of American life. Of all immigrants in New York City, officials say, Mexicans are the poorest, least educated and least likely to speak English.

The failure or success of this generation of Mexicans in the United States will determine the place that Mexicans will hold here in years to come, Mr. Sarukhan said, and the outlook is not encouraging. "They will be better off than they could ever have been in Mexico," he said, "but I don't think that's going to be enough to prevent them from becoming an underclass in New York."

Different Results

There is a break in the middle of the day at 3 Guys, after the lunchtime limousines leave and before the private schools let out. That was when Mr. Zannikos asked the Mexican cook who replaced Mr. Peralta to prepare some lunch for him. Then Mr.
Zannikos carried the chicken breast on pita to the last table in the restaurant.

"My life story is a good story, a lot of success," he said, his accent still heavy. He was just a teenager when he left the Greek island of Chios, a few miles off the coast of Turkey. World War II had just ended, and Greece was in ruins. "There was only rich and poor, that's it," Mr. Zannikos said. "There was no middle class like you have here."

He is 70 now, with short gray hair and soft eyes that can water at a mention of the past. Because of the war, he said, he never got past the second grade, never learned to read or write. He signed on as a merchant seaman, and in 1953, when he was 19, his ship docked at Norfolk, Va. He went ashore one Saturday with no intention of ever returning to Greece. He left behind everything, including his travel documents. All he had in his pockets was $100 and the address of his mother's cousin in the Jackson Heights-Corona section of Queens.

Almost four decades later, Mr. Peralta underwent a similar rite of passage out of Mexico. He had finished the eighth grade in the poor southern state of Guerrero and saw nothing in his future there but fixing flat tires. His father, Inocencio, had once dreamed of going to the United States, but never had the money. In 1990, he borrowed enough to give his first-born son a chance.

Mr. Peralta was 19 when he boarded a smoky bus that carried him through the deserted hills of Guerrero and kept going until it reached the edge of Mexico. With eight other Mexicans he did not know, he crawled through a sewer tunnel that started in Tijuana and ended on the other side of the border, in what Mexicans call el Norte.

He had carried no documents, no photographs and no money, except what his father gave him to pay his shifty guide and to buy an airline ticket to New York. Deep in a pocket was the address of an uncle in the same section of Queens where Mr. Zannikos had gotten his start.
By 1990, the area had gone from largely Greek to mostly Latino. Starting over in the same working-class neighborhood, Mr. Peralta and Mr. Zannikos quickly learned that New York was full of opportunities and obstacles, often in equal measure.

On his first day there, Mr. Zannikos, scared and feeling lost, found the building he was looking for, but his mother's cousin had moved. He had no idea what to do until a Greek man passed by. Walk five blocks to the Deluxe Diner, the man said. He did.

The diner was full of Greek housepainters, including one who knew Mr. Zannikos's father. On the spot, they offered him a job painting closets, where his mistakes would be hidden. He painted until the weather turned cold. Another Greek hired him as a dishwasher at his coffee shop in the Bronx.

It was not easy, but Mr. Zannikos worked his way up to short-order cook, learning English as he went along. In 1956, immigration officials raided the coffee shop. He was deported, but after a short while he managed to sneak back into the country. Three years later he married a Puerto Rican from the Bronx. The marriage lasted only a year, but it put him on the road to becoming a citizen. Now he could buy his own restaurant, a greasy spoon in the South Bronx that catered to a late-night clientele of prostitutes and undercover police officers.

Since then, he has bought and sold more than a dozen New York diners, but none have been more successful than the original 3 Guys, which opened in 1978. He and his partners own two other restaurants with the same name farther up Madison Avenue, but they have never replicated the high-end appeal of the original.

"When employees come in I teach them, 'Hey, this is a different neighborhood,' " Mr. Zannikos said. What may be standard in some other diners is not tolerated here. There are no Greek flags or tourism posters. There is no television or twirling tower of cakes with cream pompadours. Waiters are forbidden to chew gum.
No customer is ever called "Honey."

"They know their place and I know my place," Mr. Zannikos said of his customers. "It's as simple as that."

His place in society now is a far cry from his days in the Bronx. He and his second wife, June, live in Wyckoff, a New Jersey suburb where he pampers fig trees and dutifully looks after a bird feeder shaped like the Parthenon. They own a condominium in Florida. His three children all went far beyond his second-grade education, finishing high school or attending college.

They have all done well, as has Mr. Zannikos, who says he makes about $130,000 a year. He says he is not sensitive to class distinctions, but he admits he was bothered when some people mistook him for the caterer at fund-raising dinners for the local Greek church he helped build.

All in all, he thinks immigrants today have a better chance of moving up the class ladder than he did 50 years ago. "At that time, no bank would give us any money, but today they give you credit cards in the mail," he said. "New York still gives you more opportunity than any other place. If you want to do things, you will." He says he has done well, and he is content with his station in life. "I'm in the middle and I'm happy."

A Divisive Issue

Mr. Peralta cannot guess what class Mr. Zannikos belongs to. But he is certain that it is much tougher for an immigrant to get ahead today than 50 years ago. And he has no doubt about his own class. "La pobreza," he says. "Poverty."

It was not what he expected when he boarded the bus to the border, but it did not take long for him to realize that success in the United States required more than hard work. "A lot of it has to do with luck," he said during a lunch break on a stoop around the corner from the Queens diner where he went to work after 3 Guys.

"People come here, and in no more than a year or two they can buy their own house and have a car," Mr. Peralta said.
"Me, I've been here 15 years, and if I die tomorrow, there wouldn't even be enough money to bury me."

In 1990, Mr. Peralta was in the vanguard of Mexican immigrants who bypassed the traditional barrios in border states to work in far-flung cities like Denver and New York. The 2000 census counted 186,872 Mexicans in New York, triple the 1990 figure, and there are undoubtedly many more today. The Mexican consulate, which serves the metropolitan region, has issued more than 500,000 ID cards just since 2001.

Fifty years ago, illegal immigration was a minor problem. Now it is a divisive national issue, pitting those who welcome cheap labor against those with concerns about border security and the cost of providing social services. Though newly arrived Mexicans often work in industries that rely on cheap labor, like restaurants and construction, they rarely organize. Most are desperate to stay out of sight.

Mr. Peralta hooked up with his uncle the morning he arrived in New York. He did not work for weeks until the bakery where the uncle worked had an opening, a part-time job making muffins. He took it, though he didn't know muffins from crumb cake. When he saw that he would not make enough to repay his father, he took a second job making night deliveries for a Manhattan diner. By the end of his first day he was so lost he had to spend all his tip money on a cab ride home.

He quit the diner, but working there even briefly opened his eyes to how easy it could be to make money in New York. Diners were everywhere, and so were jobs making deliveries, washing dishes or busing tables. In six months, Mr. Peralta had paid back the money his father gave him. He bounced from job to job and in 1995, eager to show off his newfound success, he went back to Mexico with his pockets full of money, and he married. He was 25 then, the same age at which Mr. Zannikos married. But the similarities end there.

When Mr. Zannikos jumped ship, he left Greece behind for good.
Though he himself had no documents, the compatriots he encountered on his first days were here legally, like most other Greek immigrants, and could help him. Greeks had never come to the United States in large numbers - the 2000 census counted only 29,805 New Yorkers born in Greece - but they tended to settle in just a few areas, like the Astoria section of Queens, which became cohesive communities ready to help new arrivals. Mr. Peralta, like many other Mexicans, is trying to make it on his own and has never severed his emotional or financial ties to home. After five years in New York's Latino community, he spoke little English and owned little more than the clothes on his back. He decided to return to Huamuxtitlán (pronounced wa-moosh-teet-LAHN), the dusty village beneath a flat-topped mountain where he was born. "People thought that since I was coming back from el Norte, I would be so rich that I could spread money around," he said. Still, he felt privileged: his New York wages dwarfed the $1,000 a year he might have made in Mexico. He met a shy, pretty girl named Matilde in Huamuxtitlán, married her and returned with her to New York, again illegally, all in a matter of weeks. Their first child was born in 1996. Mr. Peralta soon found that supporting a family made it harder to save money. Then, in 1999, he got the job at 3 Guys. "Barba Yanni helped me learn how to prepare things the way customers like them," Mr. Peralta said, referring to Mr. Zannikos with a Greek title of respect that means Uncle John. The restaurant became his school. He learned how to sauté a fish so that it looked like a work of art. The three partners lent him money and said they would help him get immigration documents. The pay was good. But there were tensions with the other workers. Instead of hanging their orders on a rack, the waiters shouted them out, in Greek, Spanish and a kind of fractured English. Sometimes Mr. Peralta did not understand, and they argued.
Soon he was known as a hothead. Still, he worked hard, and every night he returned to his growing family. Matilde, now 27, cleaned houses until the second child, Heidi, was born three years ago. Now she tries to sell Mary Kay products to other mothers at Public School 12, which their son, Antony, 8, attends. Most weeks, Mr. Peralta could make as much as $600. Over the course of a year that could come to over $30,000, enough to approach the lower middle class. But the life he leads is far from that and uncertainty hovers over everything about his life, starting with his paycheck. To earn $600, he has to work at least 10 hours a day, six days a week, and that does not happen every week. Sometimes he is paid overtime for the extra hours, sometimes not. And, as he found out in May, he can be fired at any time and bring in nothing, not even unemployment, until he lands another job. In 2004, he made about $24,000. Because he is here illegally, Mr. Peralta can easily be exploited. He cannot file a complaint against his landlord for charging him $500 a month for a 9-foot-by-9-foot room in a Queens apartment that he shares with nine other Mexicans in three families who pay the remainder of the $2,000-a-month rent. All 13 share one bathroom, and the established pecking order means the Peraltas rarely get to use the kitchen. Eating out can be expensive. Because they were born in New York, Mr. Peralta's children are United States citizens, and their health care is generally covered by Medicaid. But he has to pay out of his pocket whenever he or his wife sees a doctor. And forget about going to the dentist. As many other Mexicans do, he wires money home, and it costs him $7 for every $100 he sends. When his uncle, his nephew and his sister asked him for money, he was expected to lend it. No one has paid him back. He has middle-class ornaments, like a cellphone and a DVD player, but no driver's license or Social Security card. 
He is the first to admit that he has vices that have held him back; nothing criminal, but he tends to lose his temper and there are nights when he likes to have a drink or two. His greatest weakness is instant lottery tickets, what he calls "los scratch," and he sheepishly confesses that he can squander as much as $75 a week on them. It is a way of preserving hope, he said. Once he won $100. He bought a blender. Years ago, he and Matilde were so confident they would make it in America that when their son was born they used the American spelling of his name, Anthony, figuring it would help pave his passage into the mainstream. But even that effort failed. "Look at this," his wife said one afternoon as she sat on the floor of their room near a picture of the Virgin of Guadalupe. Mr. Peralta sat on a small plastic stool in the doorway, listening. His mattress was stacked against the wall. A roll of toilet paper was stashed nearby because they dared not leave it in the shared bathroom for someone else to use. She took her pocketbook and pulled out a clear plastic case holding her son's baptismal certificate, on which his name is spelled with an "H." But then she unfolded his birth certificate, where the "H" is missing. "The teachers won't teach him to spell his name the right way until the certificate is legally changed," she said. "But how can we do that if we're not legal?"

Progress, but Not Success

An elevated subway train thundered overhead, making the afternoon light along Roosevelt Avenue blink like a failing fluorescent bulb. Mr. Peralta's daughter and son grabbed his fat hands as they ran some errands. He had just finished a 10-hour shift, eggs over easy and cheeseburgers since 5 a.m. It had been especially hard to stand the monotony that day. He kept thinking about what was going on in Mexico, where it was the feast day of Our Lady of the Rosary. And, oh, what a feast there was - sweets and handmade tamales, a parade, even a bullfight.
At night, fireworks, bursting loud and bright against the green folds of the mountains. Paid for, in part, by the money he sends home. But instead of partying, he was walking his children to the Arab supermarket on Roosevelt Avenue to buy packages of chicken and spare ribs, and hoping to get to use the kitchen. And though he knew better, he grabbed a package of pink and white marshmallows for the children. He needed to buy tortillas, too, but not there. A Korean convenience store a few blocks away sells La Maizteca tortillas, made in New York. The swirl of immigrants in Mr. Peralta's neighborhood is part of the fabric of New York, just as it was in 1953, when Mr. Zannikos arrived. But most immigrants then were Europeans, and though they spoke different languages, their Caucasian features helped them blend into New York's middle class. Experts remain divided over whether Mexicans can follow the same route. Samuel P. Huntington, a Harvard professor of government, takes the extreme view that Mexicans will not assimilate and that the separate culture they are developing threatens the United States. Most others believe that recent Mexican immigrants will eventually take their place in society, and perhaps someday muster political clout commensurate with their numbers, though significant impediments are slowing their progress. [13]Francisco Rivera-Batiz, a Columbia University economics professor, says that prejudice remains a problem, that factory jobs have all but disappeared, and that there is a growing gap between the educational demands of the economy and the limited schooling that the newest Mexicans have when they arrive. But the biggest obstacle by far, and the one that separates newly arrived Mexicans from Greeks, Italians and most other immigrants - including earlier generations of Mexicans - is their illegal status. Professor Rivera-Batiz studied what happened to illegal Mexican immigrants who became legal after the last national amnesty in 1986. 
Within a few years, their incomes rose 20 percent and their English improved greatly. "Legalization," he said, "helped them tremendously." Although the Bush administration is again talking about legalizing some Mexicans with a guest worker program, there is opposition to another amnesty, and the number of Mexicans illegally living in the United States continues to soar. Desperate to get their papers any way they can, many turn to shady storefront legal offices. Like Mr. Peralta, they sign on to illusory schemes that cost hundreds of dollars but almost never produce the promised green cards. Until the 1980's, Mexican immigration was largely seasonal and mostly limited to agricultural workers. But then economic chaos in Mexico sent a flood of immigrants northward, many of them poorly educated farmers from the impoverished countryside. Tighter security on the border made it harder for Mexicans to move back and forth in the traditional way, so they tended to stay here, searching for low-paying unskilled jobs and concentrating in barrios where Spanish, constantly replenished, never loses its immediacy. "Cuidado!" Mr. Peralta shouted when Antony carelessly stepped into Roosevelt Avenue without looking. Although the boy is taught in English at school, he rarely uses anything but Spanish at home. Even now, after 15 years in New York, Mr. Peralta speaks little English. He tried English classes once, but could not get his mind to accept the new sounds. So he dropped it, and has stuck with only Spanish, which he concedes is "the language of busboys" in New York. But as long as he stays in his neighborhood, it is all he needs. It was late afternoon by the time Mr. Peralta and his children headed home. The run-down house, the overheated room, the stacked mattress and the hoarded toilet paper - all remind him how far he would have to go to achieve a success like Mr. Zannikos's. Still, he says, he has done far better than he could ever have done in Mexico. 
He realizes that the money he sends to his family there is not enough to satisfy his father, who built stairs for a second floor of his house made of concrete blocks in Huamuxtitlán, even though there is no second floor. He believes Manuel has made it big in New York and he is waiting for money from America to complete the upstairs. Manuel has never told him the truth about his life up north. He said his father's images of America came from another era. The older man does not know how tough it is to be a Mexican immigrant in the United States now, tougher than any young man who ever left Huamuxtitlán would admit. Everything built up over 15 years here can come apart as easily as an adobe house in an earthquake. And then it is time to start over, again.

A Conflict Erupts

It was the end of another busy lunch at 3 Guys in late spring 2003. Mr. Peralta made himself a turkey sandwich and took a seat at a rear table. The Mexican countermen, dishwashers and busboys also started their breaks, while the Greek waiters took care of the last few diners. It is not clear how the argument started. But a cross word passed between a Greek waiter and a Mexican busboy. Voices were raised. The waiter swung at the busboy, catching him behind the ear. Mr. Peralta froze. So did the other Mexicans. Even from the front of the restaurant, where he was watching the cash register, Mr. Zannikos realized something was wrong and rushed back to break it up. "I stood between them, held one and pushed the other away," he said. "I told them: 'You don't do that here. Never do that here.' " Mr. Zannikos said he did not care who started it. He ordered both the busboy and the waiter, a partner's nephew, to get out. But several Mexicans, including Mr. Peralta, said that they saw Mr. Zannikos grab the busboy by the head and that they believed he would have hit him if another Mexican had not stepped between them.
That infuriated them because they felt he had sided with the Greek without knowing who was at fault. Mr. Zannikos said that was not true, but in the end it did not matter. The easygoing atmosphere at the restaurant changed. "Everybody was a little cool," Mr. Zannikos recalled. What he did not know then was that the Mexicans had reached out to the Restaurant Opportunities Center, a workers' rights group. Eventually six of them, including Mr. Peralta, cooperated with the group. He did so reluctantly, he said, because he was afraid that if the owners found out, they would no longer help him get his immigration papers. The labor group promised that the owners would never know. The owners saw it as an effort to shake them down, but for the Mexicans it became a class struggle pitting powerless workers against hard-hearted owners. Their grievances went beyond the scuffle. They complained that with just one exception, only Greeks became waiters at 3 Guys. They challenged the sole Mexican waiter, Salomon Paniagua, a former Mexican army officer who, everyone agreed, looked Greek, to stand with them. But on the day the labor group picketed the restaurant, Mr. Paniagua refused to put down his order pad. A handful of demonstrators carried signs on Madison Avenue for a short while before Mr. Zannikos and his partners reluctantly agreed to settle. Mr. Zannikos said he felt betrayed. "When I see these guys, I see myself when I started, and I always try to help them," he said. "I didn't do anything wrong." The busboy and the Mexican who intervened were paid several thousand dollars and the owners promised to promote a current Mexican employee to waiter within a month. But that did not end the turmoil. Fearing that the other Mexicans might try to get back at him, Mr. Paniagua decided to strike out on his own. After asking Mr. Zannikos for advice, he bought a one-third share of a Greek diner in Jamaica, Queens. 
He said he put it in his father's name because the older man had become a legal resident after the 1986 amnesty. After Mr. Paniagua left, 3 Guys went without a single Mexican waiter for 10 months, despite the terms of the settlement. In March, an eager Mexican busboy with a heavy accent who had worked there for four years got a chance to wear a waiter's tie. Mr. Peralta ended up having to leave 3 Guys around the same time as Mr. Paniagua. Mr. Zannikos's partners suspected he had sided with the labor group, he said, and started to criticize his work unfairly. Then they cut back his schedule to five days a week. After he hurt his ankle playing soccer, they told him to go home until he was better. When Mr. Peralta came back to work about two weeks later, he was fired. Mr. Zannikos confirms part of the account but says the firing had nothing to do with the scuffle or the ensuing dispute. "If he was good, believe me, he wouldn't get fired," he said of Mr. Peralta. Mr. Peralta shrugged when told what Mr. Zannikos said. "I know my own work and I know what I can do," he said. "There are a lot of restaurants in New York, and a lot of workers." When 3 Guys fired Mr. Peralta, another Mexican replaced him, just as Mr. Peralta replaced a Mexican at the Greek diner in Queens where he went to work next. This time, though, there was no Madison Avenue address, no elaborate menu of New Zealand mussels or designer mushrooms. In the Queens diner a bowl of soup with a buttered roll cost $2, all day. If he fried burgers and scraped fat off the big grill for 10 hours a day, six days a week, he might earn about as much as he did on Madison Avenue, at least for a week. His schedule kept changing. Sometimes he worked the lunch and dinner shift, and by the end of the night he was worn out, especially since he often found himself arguing with the Greek owner. But he did not look forward to going home. So after the night manager lowered the security gate, Mr. Peralta would wander the streets. 
One of those nights he stopped at a phone center off Roosevelt Avenue to call his mother. "Everything's O.K.," he told her. He asked how she had spent the last $100 he sent, and whether she needed anything else. There is always need in Huamuxtitlán. Still restless, he went to the Scorpion, a shot-and-beer joint open till 4 a.m. He sat at the long bar nursing vodkas with cranberry juice, glancing at the soccer match on TV and the busty Brazilian bartender who spoke only a little Spanish. When it was nearly 11 p.m., he called it a night. Back home, he quietly opened the door to his room. The lights were off, the television murmuring. His family was asleep in the bunk bed that the store had now threatened to repossess. Antony was curled up on the top, Matilde and Heidi cuddled in the bottom. Mr. Peralta moved the plastic stool out of the way and dropped his mattress to the floor. The children did not stir. His wife's eyes fluttered, but she said nothing. Mr. Peralta looked over his family, his home. "This," he said, "is my life in New York." Not the life he imagined, but his life. In early March, just after Heidi's third birthday, he quit his job at the Queens diner after yet another heated argument with the owner. In his mind, preserving his dignity is one of the few liberties he has left. "I'll get another job," he said while baby-sitting Heidi at home a few days later. The rent is already paid till the end of the month and he has friends, he said. People know him. To him, jobs are interchangeable - just as he is to the jobs. If he cannot find work as a grillman, he will bus tables. Or wash dishes. If not at one diner, then at another. "It's all the same," he said. It took about three weeks, but Mr. Peralta did find a new job as a grillman at another Greek diner in a different part of New York.
His salary is roughly the same, the menu is roughly the same (one new item, Greek burritos, was a natural), and he sees his chance for a better future as being roughly the same as it has been since he got to America.

A Long Day Closes

It was now dark again outside 3 Guys. About 9 p.m. Mr. Zannikos asked his Mexican cook for a small salmon steak, a little rare. It had been another busy 10-hour day for him, but a good one. Receipts from the morning alone exceeded what he needed to take in every day just to cover the $23,000 a month rent. He finished the salmon quickly, left final instructions with the lone Greek waiter still on duty and said good night to everyone else. He put on his light tan corduroy jacket and the baseball cap he picked up in Florida. "Night," he said to the lone table of diners. Outside, as Mr. Zannikos walked slowly down Madison Avenue, a self-made man comfortable with his own hard-won success, the bulkhead doors in front of 3 Guys clanked open. Faint voices speaking Spanish came from below. A young Mexican who started his shift 10 hours earlier climbed out with a bag of garbage and heaved it onto the sidewalk. New Zealand mussel shells. Uneaten bits of portobello mushrooms. The fine grounds of decaf cappuccino. One black plastic bag after another came out until Madison Avenue in front of 3 Guys was piled high with trash. "Hurry up!" the young man shouted to the other Mexicans. "I want to go home, too."

Copyright 2005 The New York Times Company

References

1. http://www.nytimes.com/services/xml/rss/nyt/National.xml
2. http://www.zagat.com/
3. http://movies2.nytimes.com/gst/movies/filmography.html?p_id=70905
4. http://www.whitney.org/index.php
5. http://web.gc.cuny.edu/sociology/faculty/mollenkopf.html
6. http://portal.cuny.edu/
7. http://www.columbia.edu/cu/polisci/fac-bios/de-la-Garza/faculty.html
8.
http://www.columbia.edu/
9. http://www.nytimes.com/2005/05/19/international/americas/19mexico.html
10. http://www.albany.edu/sociology/html/faculty/albar/alba0204.pdf
11. http://www.albany.edu/
12. http://www.consulmexny.org/eng/english.htm
13. http://www.tc.columbia.edu/faculty/?facid=flr9

From checker at panix.com Thu May 26 18:57:23 2005
From: checker at panix.com (Premise Checker)
Date: Thu, 26 May 2005 14:57:23 -0400 (EDT)
Subject: [Paleopsych] Bill Gates: The New World of Work
Message-ID:

From: Bill Gates
To: johnmac at acm.org
Subject: The New World of Work

Over the past decade, software has evolved to build bridges between disconnected islands of information and give people powerful ways to communicate, collaborate and access the data that's most important to them. But the software challenges that lie ahead are less about getting access to the information people need, and more about making sense of the information they have -- giving them the ability to focus, prioritize and apply their expertise, visualize and understand key data, and reduce the amount of time they spend dealing with the complexity of an information-rich environment. To tackle these challenges, information-worker software needs to evolve.
It's time to build on the capabilities we have today and create software that helps information workers adapt and thrive in an ever-changing work environment. Advances in pattern recognition, smart content, visualization and simulation, as well as innovations in hardware, displays and wireless networks, all give us an opportunity to re-imagine how software can help people get their jobs done. This is an important goal not only because the technology has evolved to make it possible, but also because the way we work is changing. Since you are a subscriber to executive emails from Microsoft, I hope you'll find this discussion of those changes useful. Now more than ever, competitive advantage comes from the ability to transform ideas into value -- through process innovation, strategic insights and customized services. We are evolving toward a diverse yet unified global market, with customers, partners and suppliers that work together across cultures and continents. The global workforce is always on and always connected -- requiring new tools to help people organize and prioritize their work and personal lives. Business is becoming more transparent, with a greater need to ensure accountability, security and privacy within and across organizations. And a generation of young people who grew up with the Internet is entering the workforce, bringing along workstyles and technologies that feel as natural to them as pen and paper. All of these changes are giving people new and better ways to work, but they also bring a new set of challenges: a deluge of information, constant demands on their attention, new skills to master and pressure to be ever more productive. For example, "information overload" is becoming a serious drag on productivity -- the typical information worker in North America gets 10 times as much e-mail as in 1997, and that number continues to increase. 
A recent study showed that 56 percent of workers are overwhelmed by multiple simultaneous projects and interrupted too often; one-third say that multi-tasking and distractions are keeping them from stepping back to process and reflect on the work they're doing. In the United Kingdom, it's estimated that stress accounts for nearly one-third of absenteeism and sick leave. It's also not easy enough just to find the information people need to do their jobs. The software innovations of the 1980s and 1990s, which revolutionized how we create and manipulate information, have created a new set of challenges: finding information, visualizing and understanding it, and taking action. Industry analysts estimate that information workers spend up to 30 percent of their working day just looking for data they need. All the time people spend tracking down information, managing and organizing documents, and making sure their teams have the data they need, could be much better spent on analysis, collaboration, insight and other work that adds value. At Microsoft, we believe that the key to helping businesses become more agile and productive in the global economy is to empower individual workers -- giving them tools that improve efficiency and enable them to focus on the highest-value work. And a new generation of software is an important ingredient in making this happen.

HOW WE WILL WORK

Over the next decade, we see a tremendous opportunity to help companies of all sizes maximize the impact of employees and workgroups, drive deeper connections with customers and partners, enable informed and timely decision-making, and manage and protect critical information.
The next generation of information-worker applications will build on promising technologies -- such as machine learning, rich metadata for data and objects, new services-based standards for collaboration, advances in computing and display hardware, and self-administering, self-configuring applications -- transforming them into software that will truly enhance the way people work:

Improving personal productivity: One consequence of an "always-on" environment is the challenge of prioritizing, focusing and working without interruption. Today's software can handle some of this, but hardly at a level that matches the judgment and awareness of a human being. That will change -- new software will learn from the way you work, understand your needs, and help you set priorities.

Pattern recognition and adaptive filtering: Rules and learned behavior will soon be able to automate many routine tasks. Software will be able to make inferences about what you're working on and deliver the information you need in an integrated and proactive way. As software learns your working preferences, it can flexibly manage your interruptions -- if you're working on a high-priority memo under a tight deadline, for example, software should be able to understand this and only allow phone calls or e-mails from, say, your manager or a family member.

Unified communication: Integrated communication will provide a single "point of entry" to the networked world that is consistent across applications and devices. People should have a unified, complete view of their communication options, whether by voice or text, real-time or offline, with ready access to tools like speech-to-text and machine translation. You should be able to listen to your email, or read your voicemail. Project notifications, meetings, business applications, contacts and schedules should be accessible within a single consistent view, whether you're at your desk, down the hall, on the road or working at home.
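The interruption management described under "Pattern recognition and adaptive filtering" can be pictured as a simple rule-based filter. This is a hypothetical sketch, not any real Microsoft API; the names (Message, allow_interruption) and addresses are invented for illustration:

```python
# Toy sketch of rule-based interruption filtering (hypothetical, for
# illustration only): when the user is in a focused, high-priority state,
# only messages from an always-allow list of senders get through.
from dataclasses import dataclass

@dataclass
class Message:
    sender: str
    channel: str  # e.g. "email", "phone", "im"

def allow_interruption(msg: Message, focus_mode: bool, vip_senders: set) -> bool:
    """Let a message through only if the user is not focusing,
    or the sender is on the always-allow list."""
    if not focus_mode:
        return True
    return msg.sender in vip_senders

vips = {"manager@example.com", "family@example.com"}
print(allow_interruption(Message("manager@example.com", "phone"), True, vips))     # True
print(allow_interruption(Message("newsletter@example.com", "email"), True, vips))  # False
```

A real system, as the email imagines, would learn the allow list and the focus state from observed behavior rather than taking them as explicit inputs.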
Presence: We're just beginning to tap the potential of presence information to help information and notifications flow where they're needed and better enable ad-hoc collaboration to solve problems and get things done. Presence information connects people and their schedules to documents and workflow, keeping you close to the changing data and expert insight that is relevant to what you're doing.

Team collaboration: Over the next decade, shared workspaces will become far more robust, with richer tools to automate workflow and connect all the people, data and resources it takes to get things done. They will capture live data and documents in ways that will benefit teams that work across the hall or around the globe. Meetings will be recorded with sophisticated cameras that can detect and focus on speakers around the room. Notes taken on a whiteboard will automatically be captured and emailed to participants, and attached to the video of the meeting. They will also serve as lasting repositories for institutional knowledge, so teams won't have to "reinvent the wheel" and work with limited knowledge of the company's past experience.

Optimizing supply chains: XML and rich Web services are increasingly making it possible for businesses to seamlessly share information and processes with partners, and build supply chains that stretch across multiple organizations but work as a unified whole. But there's still plenty of friction that can be removed from the way companies work together. Employees shouldn't have to manually match purchase orders with invoices. They shouldn't need to print and mail bills that could easily be sent in electronic form. Expanding the reach of Web services can help optimize and reduce the amount of unnecessary manual work and make these supply chains vastly more efficient.
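The purchase-order/invoice matching that the "Optimizing supply chains" paragraph says employees shouldn't do by hand reduces, once both sides exchange structured records, to a simple reconciliation. A minimal sketch under assumed field names (po_number, amount, invoice_id) -- not a real supply-chain API:

```python
# Hypothetical illustration of automated PO/invoice reconciliation over
# structured records; field names are assumptions, not a real schema.
def match_invoices(purchase_orders, invoices):
    """Return (matched pairs, unmatched invoice ids), matching on
    purchase-order number and amount."""
    po_index = {po["po_number"]: po for po in purchase_orders}
    matched, unmatched = [], []
    for inv in invoices:
        po = po_index.get(inv["po_number"])
        if po is not None and po["amount"] == inv["amount"]:
            matched.append((po["po_number"], inv["invoice_id"]))
        else:
            unmatched.append(inv["invoice_id"])
    return matched, unmatched

pos = [{"po_number": "PO-1", "amount": 100.0}, {"po_number": "PO-2", "amount": 250.0}]
invs = [{"invoice_id": "INV-9", "po_number": "PO-1", "amount": 100.0},
        {"invoice_id": "INV-10", "po_number": "PO-3", "amount": 40.0}]
print(match_invoices(pos, invs))  # ([('PO-1', 'INV-9')], ['INV-10'])
```

The point of the XML/Web services standards the email mentions is exactly to get both parties' records into a shared structure so that this kind of matching can run automatically across organizational boundaries.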
Finding the right information: A new layer of context-sensitive services will give you flexible and intuitive ways to manage information that go beyond the "file and folder" metaphor of today. You shouldn't have to "think like a database" and formulate search queries to ask for the information you need. Pattern recognition can help tag and organize information automatically, as well as extract meaning from documents and enable them to be queried in more natural and intuitive ways.

Spotting trends for business intelligence: Sophisticated algorithms will be able to sort through millions of gigabytes of data to identify trends that human analysts might miss. Software should be able to find meaningful connections in mountains of data and present them to experts -- or even automated processes -- that can act on them. Software can ensure that actions which result in changes to other work processes will automatically ripple through the system, making the entire business more agile and responsive to information that affects the bottom line. Over time, software will "learn" what information people use -- and what they don't -- and will adjust its behavior accordingly.

Insights and structured workflow: Software should take a more holistic view of workflow, providing data and metrics on specific activities to make it easier and faster to spot inefficiencies and points of failure. Smarter workflow tools will use pattern recognition and logic to find problems such as repeated customer complaints or inventory problems, and route them to the right person for resolution. This will go a long way towards reducing frustration, lost time and errors that result from broken or inefficient processes.
A NEW GENERATION OF PRODUCTIVITY SOFTWARE

In a new world of work, where collaboration, business intelligence and prioritizing scarce time and attention are critical factors for success, the tools that information workers use must evolve in ways that do not add new complexity for people who already feel the pressure of an "always-on" world and ever-rising expectations for productivity. We believe that the way out of this maze is through integration, simplification, and a new breed of software applications and services that manage complexity in the background, and extend human capabilities by automating low-value tasks and helping people make sense of complex data. We aim to make this happen through a next-generation productivity platform that builds on the solid foundation of today's Microsoft Office system of programs and services. We will enable people to create more effective professional documents, access work information from anywhere, and better manage personal, team and project tasks. We're investing in a secure infrastructure that makes it easy for anyone to securely collaborate on documents and work processes. We're offering better data visualization and analysis tools that bring out the trends and patterns buried in mountains of data. We're making it easier for businesses to create, track, manage and distribute content both within and across organizational boundaries. And we're offering open XML standards and rapid development tools so corporate developers can build and extend applications that specifically target their needs. Microsoft has been innovating for the information worker for more than two decades -- and in many ways we've only just begun to scratch the surface of how software can help people realize their full potential.

Bill Gates

You can learn more about our vision for the New World of Work at http://www.microsoft.com/mscorp/execmail.
From checker at panix.com Thu May 26 18:58:02 2005 From: checker at panix.com (Premise Checker) Date: Thu, 26 May 2005 14:58:02 -0400 (EDT) Subject: [Paleopsych] Religion and Education: Recasting Agreements that Govern Teaching and Learning Message-ID: Recasting Agreements that Govern Teaching and Learning: An Intellectual and Spiritual Framework for Transformation [First, the summary from CHE, then excerpts from the article.] Thursday, May 26, 2005 A glance at the spring issue of Religion & Education: Questioning the "agreements" of academe The first step toward meaningful change in academe is taking stock of the underlying agreements that govern higher education, says Laura I. Rendón, a professor of education psychology at California State University at Long Beach. Consciously or unconsciously, she says, faculty members and administrators have made agreements about their professional lives. One of those agreements, she says, is to be addicted to work. "Faculty and administrators are socialized to believe that the 'best' academics are those who are constantly publishing, getting millions of dollars in grants, putting in long hours, working on weekends, and traveling extensively," she writes. "When we ask our colleagues, 'How are you?' we almost never get the answer: 'Oh, I am so relaxed! I got so much rest this weekend. I had time to do everything I wanted to do with my family.'" Ms. Rendón says she was forced to acknowledge her own work addiction when a younger colleague died of colon cancer. For the scholars who mourned their friend's passing, it was a "wake-up call to slow down, assess the error of our ways, and recognize that there is more to life than our academic work," she says. 
She also identifies five other agreements in academe: to value mental knowing over other types of intelligence; to keep faculty members and students separate, with knowledge flowing in only one direction; to pit students against one another in competition; to insist on perfection from students, not allowing them to be tentative in exploring new concepts; and to value Western structures of knowledge to the exclusion of others. Those agreements should be recast, she says, if higher education is to honor "the whole of who we are as intellectual, compassionate, authentic human beings who value love, peace, democracy, community, diversity, and hope for humanity." An excerpt of the article, "Recasting Agreements That Govern Teaching and Learning: An Intellectual and Spiritual Framework for Transformation," is online at http://fp.uni.edu/jrae/Spring%202005/Rendon%20Excerpt.htm --Kellie Bartlett ---------------- Volume 32 Number 1 Spring 2005 Recasting Agreements that Govern Teaching and Learning: An Intellectual and Spiritual Framework for Transformation Laura I. Rendón If we can see it is our agreements which rule our life, and we don't like the dream of our life, we need to change the agreements. Don Miguel Ruiz, The Four Agreements [1] In The Four Agreements [2], Don Miguel Ruiz, a healer and teacher who studied the teachings of the Toltec in Mexico, explains that the mind dreams 24 hours a day. When the mind is awake, we dream according to the framework of what we have been taught and what we have agreed to believe. When the mind is asleep, we lack this conscious framework, and the dream changes constantly. In the awakened state, we function according to society's Dreamfield -- a collective, holographic reflection of our shared beliefs. 
Don Miguel elaborates on the concept of human dreaming: The dream of the planet is the collective dream of billions of smaller, personal dreams, which together created a dream of family, a dream of community, a dream of a city, a dream of country, and finally a dream of the whole humanity. The dream of the planet includes all of society's rules, its beliefs, its laws, its religions, its different cultures and ways to be, its governments, schools, social events and holidays. [3] Don Miguel provides additional examples: when we were born, we were given a name, and we agreed to the name. When we were children, we were given a language, and we agreed to speak that language. We were given moral and cultural values. We began to have faith in these agreements passed on to us from the adults we were told to respect and to honor. We used these agreements to judge others and to judge ourselves. As long as we followed the agreements, we were rewarded. When we went against the rules we were punished, and pleasing others became a way of life, so much so that we became not who we really are, but a copy of someone else's beliefs. As we became adults we tried to rebel against some beliefs, which we began to understand made little sense or were inflicting harm. For example, some of us may have been told we were dumb, fat, or ugly. In our educational system, some social rules have created inequalities and injustices such as belief systems that view women and people of color as lacking in leadership, as well as having limited intellectual abilities. But many of us became afraid of expressing our freedom to articulate a different truth because we feared punishment for going against the prevailing belief system, even when we had no role in creating it. 
The dominant belief system is powerful, entrenched, validated and constantly rewarded by the social structure that created it -- so much so that even when we begin to see that some of the agreements in the belief system are flawed and in need of change, we find it very difficult to challenge them. Don Miguel notes that we need "a great deal of courage to challenge our own beliefs. Because even if we know we didn't choose all these beliefs, it is also true that we agreed to all of them. The agreement is so strong that even if we understand the concept of it not being true, we feel the blame, the guilt, and the shame that occur if we go against these rules." [4] Like Don Miguel, I believe that a group of people can theorize to develop a set of agreements to guide a transformational change. For instance, a core group of higher education faculty and administrators can consciously begin to hold the same thoughts that represent a newly formed vision of teaching, research, leadership and service. A small, but critical mass of individuals can create what Malcolm Gladwell [5] calls a "tipping point," a boiling point when an idea, trend or social behavior, like an epidemic, bursts into society and spreads like wildfire. In higher education, our shared beliefs about teaching and learning constitute the agreements that guide our present pedagogical Dreamfield. This Dreamfield is fraught with some powerful, entrenched agreements that, though shared by many, are in need of revision because they do not completely honor our humanity and our freedom to express who we are and what we represent. Purpose I write with three purposes: 1) to expose the privileged agreements that govern teaching and learning in higher education; 2) to provide an intellectual and spiritual framework for recasting the agreements in order to transform teaching and learning; and 3) to join the many existing voices of educational transformation to contribute to the generation of a new "tipping point" --
a movement that wishes to create a new dream of education. The foundation of this dream is a more harmonic, holistic vision of education that honors the whole of who we are as intellectual, compassionate, authentic human beings who value love, peace, democracy, community, diversity and hope for humanity. [To read more...] [I cannot retrieve it.] From checker at panix.com Thu May 26 18:58:24 2005 From: checker at panix.com (Premise Checker) Date: Thu, 26 May 2005 14:58:24 -0400 (EDT) Subject: [Paleopsych] SW: On Communication with Extraterrestrials Message-ID: Astrobiology: On Communication with Extraterrestrials http://scienceweek.com/2004/sa041008-5.htm The following points are made by Woodruff T. Sullivan III (Nature 2004 431:27): 1) Although the Search for Extraterrestrial Intelligence (SETI) has yet to detect a signal, the efforts continue because so little of the possible parameter space has been searched so far. These projects have almost all followed the dominant paradigm --launched 45 years ago by Cocconi and Morrison(1) -- of using radio telescopes to look for signs of extraterrestrial life. This focus on electromagnetic waves (primarily at radio wavelengths, but also at optical ones) was based on various arguments for their efficiency as a means of interstellar communication. However, Rose and Wright(2) have made the case that if speedy delivery is not required, long messages are in fact more efficiently sent in the form of material objects -- effectively messages in a bottle. Although the suggestion itself is not new(3,4), it had never before been backed up by quantitative analysis. 2) A fundamental problem in searching for extraterrestrial intelligence is to guess the communications set-up of the extraterrestrials who might be trying to contact us. In which direction should we look for their transmitter? At which frequencies? How might the message be coded? How often is it broadcast? 
(For this discussion I am assuming that the signals are intentional, setting aside the a priori equally likely possibility that the first signal found could be merely leakage arising from their normal activities.) Conventional wisdom holds that they would set up a beam of electromagnetic waves, just as we could do with, for example, the 305-meter Arecibo radio telescope in Puerto Rico, Earth's most powerful radio transmitter, or a pulsed laser on the 10-meter Keck optical telescope in Hawaii. Rose and Wright(2) conclude, however, that the better choice would be to send packages laced with information. 3) Unless the messages are short or the extraterrestrials are nearby, this "write" strategy requires less energy per bit of transmitted information than the "radiate" strategy does. Cone-shaped beams of radiation necessarily grow in size as they travel outwards, meaning that the great majority of the energy is wasted, even if some of it hits the intended target. A package, on the other hand, is not "diluted" as it travels across space, presuming that it's correctly aimed at its desired destination. For short messages, however, electromagnetic waves win out because of the overheads involved in launching, shielding and then decelerating a package, no matter how small it is. For a two-way conversation with extraterrestrials, the light-speed of electromagnetic waves is far superior. 4) As an example of a large message, consider all of the written and electronic information now existing on Earth: it's estimated(5) to amount to about one exabyte (10^(18) bytes). Rose and Wright(2) calculate that, using scanning tunnelling microscopy, these bits could be inscribed (in nanometer squares) within one gram of material! But this precious package would still require a cocoon of 10,000 kilograms to accelerate it from our planet to a speed of 0.1% of the speed of light, protect it from radiation damage along a 10,000-light-year route, and then decelerate it upon arrival. 
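The back-of-the-envelope numbers above can be checked with a few lines of arithmetic. The sketch below is illustrative only: the one-exabyte payload, nanometer-square bits, 10,000 kg cocoon, and 0.1% of light speed are the figures quoted in the text, but the resulting energy-per-bit value is a toy estimate of launch energy alone, not Rose and Wright's published calculation.

```python
# Toy check of the "message in a bottle" numbers quoted above.
# Assumptions (from the text): 1 exabyte payload, bits inscribed in
# 1-nm squares, a 10,000 kg cocoon accelerated to 0.1% of light speed.
c = 2.998e8              # speed of light, m/s
payload_bits = 1e18 * 8  # one exabyte = 10^18 bytes
bit_area_m2 = (1e-9)**2  # one square nanometer per bit

# Total inscribed area: about 8 square meters of nm-scale writing
total_area_m2 = payload_bits * bit_area_m2

# Kinetic energy of the 10,000 kg cocoon at 0.1% of c
# (the classical 1/2 m v^2 is adequate at this speed)
m_kg = 1e4
v = 1e-3 * c
kinetic_energy_j = 0.5 * m_kg * v**2

# Launch energy per bit of delivered information
energy_per_bit = kinetic_energy_j / payload_bits

print(f"{total_area_m2:.1f} m^2, {energy_per_bit:.1e} J/bit")
```

The launch cost works out to a few times 10^-5 joules per bit, a fixed overhead that is independent of message length; a radio beam, whose delivered energy per bit falls off with the square of the distance, beats this figure only for short messages or nearby targets, which is the crossover Rose and Wright describe.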
References: 1. Cocconi, G. & Morrison, P. Nature 184, 844-846 (1959) 2. Rose, C. & Wright, G. Nature 431, 47-49 (2004) 3. Bracewell, R. Nature 187, 670-671 (1960) 4. Papagiannis, M. Q. J. R. Astron. Soc. 19, 277-281 (1978) 5. Murphy, C. Atlantic 277, No. 5, 20-22 (1996) Nature http://www.nature.com/nature -------------------------------- Related Material: ASTROBIOLOGY: ON INTELLIGENT LIFE IN THE UNIVERSE The following points are made by J. Cohen and I. Stewart (Nature 22 Feb 01 409:1119): 1) The authors point out that it is possible to imagine the existence of forms of life very different from those found on Earth, occupying habitats that are unsuitable for our kind of life. Some of those aliens might be technological, because technology is an autocatalytic process, and it follows that some aliens might possess technology well in advance of our own, including interstellar transportation. So much is clear, but this train of logic begs the obvious question of where these intelligent non-humanoid aliens might be. 2) The authors point out that the subject area of this discussion is often called "astrobiology", although in science fiction circles (where the topic has arguably been thought through more carefully than it has been in academic circles) the term "xenobiology" is favored. The authors suggest the difference is significant: Astrobiology is a mixture of astronomy and biology, and the tendency is to assume that the field must be assembled from contemporary astronomy and biology; in contrast, xenobiology is the biology of the strange, and the name inevitably involves the idea of extending contemporary biology into new and alien realms. 3) The authors ask: Upon what science should xenobiology be based? The authors suggest that the history of science indicates that any discussion of alien life will be misleading if it is based on the presumption that contemporary science is the ultimate in human understanding. Consider the position of science a century ago. 
We believed then that we inhabited a newtonian clockwork Universe with absolute space and absolute time; that time was independent of space; that both were of infinite extent; and that the Universe had always existed, always would exist, and was essentially static. We knew about the biological cell, but we had a strong feeling that life possessed properties that could not be reduced to conventional physics; we had barely begun to appreciate the role of natural selection in evolution; and we had no idea about genetics beyond mendelian numerical patterns. Our technology was equally primitive: cars were inferior to the horse, and there was no radio, television, computers, biotechnology or mobile phones. Space travel was the stuff of fantasy. If the past is any guide, then almost everything we now think we know will be substantially qualified or proven wrong within the next 25 years, let alone another century. Biology, in particular, will not persist in its current primitive form. At present, biology is at a stage roughly analogous to physics when Newton (1642-1727) discovered his law of gravity. "There is an awfully long way to go." 4) The authors point out that evolution on Earth has been in progress for at least 3.8 billion years. "This is deep time --too deep for scenarios expressed in human terms to make much sense. A hundred years is the blink of an eye compared with the time that humans have existed on Earth. The lifespan of the human race is similarly short when compared with the time that life has existed on Earth. It is ridiculous to imagine that somehow, in a single century of human development, we have suddenly worked out the truth about life. After all, we do not really understand how a light switch works at a fundamental level, let alone a mitochondrion." 
Nature http://www.nature.com/nature -------------------------------- Related Material: PROSPECTS FOR THE SEARCH FOR EXTRATERRESTRIAL INTELLIGENCE Notes by ScienceWeek: The conjured image is poignant: intelligent life sprinkled throughout our Galaxy, each sprinkle separated from the others by 1000 light years, each sprinkle searching for the others with radio transmitters and receivers, small robotic spacecraft sent beeping into empty space between the stars, the beeping like a faint bleating in the dark as the sprinkles search for each other. Of course, the conjured image may be wrong: there may be intelligent life dense in the Galaxy; or we may be alone. It does not matter. For the human species on this planet Earth, the quest is part of our destiny, part of what we do as a species, and it will go on as long as we remain civilized. J.C. Tarter and C.F. Chyba (SETI Institute, US) present a review of current and future efforts in the search for extraterrestrial intelligence, the authors making the following points: 1) During the past 40 years, researchers have conducted searches for radio signals from an extraterrestrial technology, sent spacecraft to all but one of the planets in our Solar System, and expanded our knowledge of the conditions in which living systems can survive. The public perception is that we have looked extensively for signs of life elsewhere. But in reality, we have hardly begun to search. Assuming our current, comparatively robust space program continues, by 2050 we may finally know whether there is, or ever was, life elsewhere in our Solar System. At a minimum, we will have thoroughly explored the most likely candidates, a task not yet accomplished. We will have discovered whether life exists on Jupiter's moon Europa, or on Mars. And we will have undertaken the systematic exobiological exploration of planetary systems around other stars, seeking traces of life in the spectra of planetary atmospheres. 
These surveys will be complemented by expanded searches for intelligent signals. 2) The authors point out that although the current language is that of a "search for extraterrestrial intelligence" (SETI), what is being sought is evidence of extraterrestrial technologies. Until now, researchers have concentrated on only one specific technology -- radio transmissions at wavelengths with weak natural backgrounds and little absorption. No verified evidence of a distant technology has been found, but the null result may have more to do with limitations in range and sensitivity than with actual lack of civilization. The most distant star probed directly is still less than 1 percent of the distance across our Galaxy. 3) The authors conclude: "If by 2050 we have found no evidence of an extraterrestrial technology, it may be because technical intelligence almost never evolves, or because technical civilizations rapidly bring about their own destruction, or because we have not yet conducted an adequate search using the right strategy. If humankind is still here in 2050 and still capable of doing SETI searches, it will mean that our technology has not yet been our own undoing -- a hopeful sign for life generally. By then we may begin considering the active transmission of a signal for someone else to find, at which point we will have to tackle the difficult questions of who will speak for Earth and what they will say." Scientific American 1999 December From checker at panix.com Thu May 26 18:58:41 2005 From: checker at panix.com (Premise Checker) Date: Thu, 26 May 2005 14:58:41 -0400 (EDT) Subject: [Paleopsych] SW: Light, Matter, and Metamaterials Message-ID: Materials Science: Light, Matter, and Metamaterials http://scienceweek.com/2004/sa041001-2.htm The following points are made by W. Barnes and R. 
Sambles (Science 2004 305:785): 1) In recent years, there has been an explosion of interest in controlling the interaction between light and matter by introducing structure on length scales equal to or smaller than the wavelength of the light involved. Functionality is thus as much a property of geometry as of material parameters -- a concept sometimes referred to as "metamaterials". In the 1980s, Yablonovitch (1) and John (2) showed that by introducing three-dimensional periodicity on the scale of the wavelength of light, one can alter how light interacts with the material, specifically by blocking the propagation of light to make photonic band gap (PBG) materials. More recently, by introducing structure smaller than the wavelength of light involved, synthetic "left-handed" materials have been created that have the fascinating property of negative refraction (3). Pendry et al (4) have recently demonstrated how in theory we may be able to exploit another aspect of structure on the subwavelength scale, this time to create a new family of surface electromagnetic modes. 2) Wavelength-scale periodic structures with even a very small refractive index contrast may lead to the strong selective reflection of light and photonic stop-bands. However, producing a truly three-dimensional PBG material that reflects over a range of wavelengths for all directions is very challenging. Indeed, practical applications of the PBG idea have been most fruitfully pursued in restricted dimensions, notably in the photonic crystal fiber and in two-dimensional planar slabs. An interesting variant is to apply the same idea to surface waves. In 1996 Kitson et al. (5) demonstrated that a full PBG for surface plasmon-polariton (SPP) modes could be made by introducing periodic texture into the metallic surface that supports SPPs. In the latest development by Pendry et al (4), surface structure is used not just to control surface modes but also to create them. 
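The photonic stop-band idea in point (2) is easiest to see in one dimension: a periodic stack of two dielectrics reflects almost perfectly inside its stop band even though each interface alone reflects weakly. The sketch below uses the standard characteristic-matrix method at normal incidence; the refractive indices, layer count, and design wavelength are arbitrary illustrative choices, not values from the article.

```python
import numpy as np

def layer_matrix(n, d, lam):
    """Characteristic matrix of one dielectric layer at normal incidence."""
    delta = 2 * np.pi * n * d / lam  # phase thickness of the layer
    return np.array([[np.cos(delta), 1j * np.sin(delta) / n],
                     [1j * n * np.sin(delta), np.cos(delta)]])

def reflectance(lam, n_hi=2.5, n_lo=1.5, pairs=10, lam0=1.0, n_sub=1.5):
    """Reflectance of a quarter-wave (n_hi/n_lo) stack designed for lam0,
    with vacuum on the incidence side and an n_sub substrate behind."""
    m = np.eye(2, dtype=complex)
    for _ in range(pairs):  # light meets the high-index layer first
        m = m @ layer_matrix(n_hi, lam0 / (4 * n_hi), lam)
        m = m @ layer_matrix(n_lo, lam0 / (4 * n_lo), lam)
    b, c = m @ np.array([1.0, n_sub])  # field amplitudes at the front face
    r = (b - c) / (b + c)              # incidence medium has n = 1
    return abs(r)**2

# Inside the stop band the stack reflects almost perfectly
print(f"R at the design wavelength: {reflectance(1.0):.4f}")
```

Even with an index contrast of only 2.5 to 1.5, ten periods push the reflectance above 99.9% at the design wavelength, while wavelengths well outside the band mostly pass through; this is the one-dimensional ancestor of the full three-dimensional PBG materials discussed above.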
3) Surface plasmon-polaritons (SPPs) are surface modes that propagate at metal-dielectric interfaces and constitute an electromagnetic field coupled to oscillations of the conduction electrons at the metal surface. The fields associated with the SPP are enhanced at the surface and decay exponentially into the media on either side of the interface. In the visible domain, there is a very short penetration depth of the field into the metal and a relatively short penetration depth into the dielectric, thus allowing one to concentrate light on a scale much smaller than the wavelength involved. 4) In the microwave regime, metals have a very large complex refractive index (n+ik), where n and k are the real and imaginary parts, respectively, and -n ~ k ~ 10^(3). The SPP mode is then very nearly a plane wave that extends huge distances into the dielectric but only very short distances into the metal. Metals in the microwave domain are therefore frequently treated as ideal conductors, thus reflecting microwaves perfectly, which limits the usefulness of applying near-field concepts being developed in the visible domain to the microwave regime. What Pendry et al (4) have demonstrated is that by puncturing the metal surface with subwavelength holes, some of the field may penetrate the (effective) surface. This changes the field-matching situation at the bounding surface and leads to a new effective surface plasmon resonance frequency. References (abridged): 1. E. Yablonovitch, Phys. Rev. Lett. 58, 2059 (1987) 2. S. John, Phys. Rev. Lett. 58, 2486 (1987) 3. R. A. Shelby, D. R. Smith, S. Schultz, Science 292, 77 (2001) 4. J. B. Pendry, L. Martin-Moreno, F. J. Garcia-Vidal, Science 305, 847 (2004) 5. S. C. Kitson, W. L. Barnes, J. R. Sambles, Phys. Rev. Lett. 77, 2670 (1996) Science http://www.sciencemag.org -------------------------------- Related Material: OPTICS: ON NEGATIVE REFRACTION The following points are made by J.B. Pendry and D.R. 
Smith (Physics Today 2004 June): 1) Victor Veselago, in a paper(1) published in 1968, pondered the consequences for electromagnetic waves interacting with a hypothetical material for which both the electric permittivity (e) and the magnetic permeability (m) were simultaneously negative. Because no naturally occurring material or compound has ever been demonstrated with negative (e) and (m), Veselago wondered whether this apparent asymmetry in material properties was just happenstance or perhaps had a more fundamental origin. He concluded that not only should such materials be possible, but if ever found, they would exhibit remarkable properties unlike those of any known materials and would give a twist to virtually all electromagnetic phenomena. Foremost among these properties is a negative index of refraction. 2) Veselago always referred to the materials as "left handed", because the wave vector is antiparallel to the usual right-handed cross product of the electric and magnetic fields. The authors prefer the negative-index description. The names mean the same thing, but the authors suggest their description appeals more to everyday intuition and is less likely to be confused with chirality, an entirely different phenomenon. 3) Why are there no materials with negative (e) and (m)? One first needs to understand what it means to have a negative (e) or (m) and how negative values occur in materials. The Drude-Lorentz model of a material is a good starting point, because it conceptually replaces the atoms and molecules of a real material by a set of harmonically bound electron oscillators resonant at some frequency (F). At frequencies far below (F), an applied electric field displaces the electrons from the positive cores and induces a polarization in the same direction as the applied field. 
At frequencies near resonance, the induced polarization becomes very large, as is typical in resonance phenomena; the large response represents accumulation of energy over many cycles, such that a considerable amount of energy is stored in the resonator (in this case, the medium) relative to the driving field. 4) So large is this stored energy that even changing the sign of the applied electric field has little effect on the polarization near resonance. That is, as the frequency of the driving electric field is swept through the resonance, the polarization flips from in-phase to out-of-phase with the driving field, and the material exhibits a negative response. If instead of electrons the material response were due to harmonically bound magnetic moments, then a negative magnetic response would exist. 5) Although somewhat less common than positive materials, negative materials are nevertheless easy to find. Materials with negative (e) include metals (such as silver, gold, and aluminum) at optical frequencies; materials with negative (m) include resonant ferromagnetic or antiferromagnetic systems. 6) That negative material parameters occur near a resonance has two important consequences. First, negative material parameters will exhibit frequency dispersion: They will vary as a function of frequency. Second, the usable bandwidth of negative materials will be relatively narrow compared with positive materials. These consequences can help answer our initial question as to why materials with (e) and (m) both negative are not readily found. In existing materials, the resonances that give rise to electric polarizations typically occur at very high frequencies -- in the optical for metals, and at least in the terahertz-to-IR region for semiconductors and insulators. On the other hand, resonances in magnetic systems typically occur at much lower frequencies and usually tail off toward the THz and IR region. 
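The sign flip described above is easy to verify numerically. Below is a minimal sketch of the single-resonance Drude-Lorentz permittivity, eps(w) = 1 + wp^2 / (w0^2 - w^2 - i*gamma*w); the resonance frequency, oscillator strength, and damping values are arbitrary illustrative choices, not parameters from the article.

```python
def lorentz_permittivity(w, w0=1.0, wp=1.0, gamma=0.01):
    """Drude-Lorentz permittivity of a single resonance.

    w: driving frequency; w0: resonance frequency; wp: plasma
    frequency; gamma: damping rate (all in the same arbitrary units).
    """
    return 1.0 + wp**2 / (w0**2 - w**2 - 1j * gamma * w)

# Below resonance the induced polarization is in phase: Re(eps) > 1
below = lorentz_permittivity(0.5)
# Just above resonance the polarization flips out of phase: Re(eps) < 0
above = lorentz_permittivity(1.05)
print(f"Re eps(0.5) = {below.real:+.2f}, Re eps(1.05) = {above.real:+.2f}")
```

This narrow window just above a resonance is exactly what metamaterial designers exploit: engineer an electric resonance and a magnetic resonance so that their negative regions overlap in frequency, and the composite exhibits the simultaneously negative permittivity and permeability required for negative refraction.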
In short, the fundamental electronic and magnetic processes that give rise to resonant phenomena in materials simply do not occur at the same frequencies, although no physical law would preclude such overlap.(2-5) References (abridged): 1. V. G. Veselago, Sov. Phys. Usp. 10, 509 (1968) 2. J. B. Pendry, A. J. Holden, D. J. Robbins, W. J. Stewart, IEEE Trans. Microwave Theory Tech. 47, 2075 (1999) 3. R. A. Shelby, D. R. Smith, S. Schultz, Science 292, 77 (2001) 4. A. A. Houck, J. B. Brock, I. L. Chuang, Phys. Rev. Lett. 90, 137401 (2003) 5. C. G. Parazzoli, R. B. Greegor, K. Li, B. E. C. Koltenbah, M. Tanielian, Phys. Rev. Lett. 90, 107401 (2003) Physics Today http://www.physicstoday.org -------------------------------- Related Material: MATERIALS SCIENCE: ON TERAHERTZ MAGNETIC RESPONSE The following points are made by T.J. Yen et al (Science 2004 303:1494): 1) The range of electromagnetic material response found in nature represents only a small subset of that which is theoretically possible. This limited range can be extended by the use of artificially structured materials, or metamaterials, that exhibit electromagnetic properties not available in naturally occurring materials. For example, artificial electric response has been introduced in metallic wire grids or cell meshes, with the spacing on the order of wavelength (1); a diversity of these meshes are now used in THz optical systems (2). 2) More recently, metamaterials with subwavelength scattering elements have shown negative refraction at microwave frequencies (3), for which both the electric permittivity and the magnetic permeability are simultaneously negative. The negative-index metamaterial relied on an earlier theoretical prediction that an array of nonmagnetic conductive elements could exhibit a strong, resonant response to the magnetic component of an electromagnetic field (4). 
3) Conventional materials that exhibit magnetic response are far less common in nature than materials that exhibit electric response, and they are particularly rare at THz and optical frequencies. The reason for this imbalance is fundamental in origin: Magnetic polarization in materials follows indirectly either from the flow of orbital currents or from unpaired electron spins. In magnetic systems, resonant phenomena, analogous to the phonons or collective modes that lead to an enhanced electric response at infrared or higher frequencies, tend to occur at far lower frequencies, resulting in relatively little magnetic material response at THz and higher frequencies. 4) Magnetic response of materials at THz and optical frequencies is particularly important for the implementation of devices such as compact cavities, adaptive lenses, tunable mirrors, isolators, and converters. A few natural magnetic materials that respond above microwave frequencies have been reported. For example, certain ferromagnetic and antiferromagnetic systems exhibit a magnetic response over a frequency range of several hundred gigahertz (5) and even higher. However, the magnetic effects in these materials are typically weak and often exhibit narrow bands, which limits the scope of possible THz devices. The realization of magnetism at THz and higher frequencies will substantially affect THz optics and their applications. 5) In summary: The authors demonstrate that magnetic response at terahertz frequencies can be achieved in a planar structure composed of nonmagnetic conductive resonant elements. The effect is realized over a large bandwidth and can be tuned throughout the terahertz frequency regime by scaling the dimensions of the structure. The authors suggest that artificial magnetic structures, or hybrid structures that combine natural and artificial magnetic materials, can play a key role in terahertz devices. References (abridged): 1. R. Ulrich, Infrared Phys. 7, 37 (1967) 2. S. T. 
Chase, R. D. Joseph, Appl. Opt. 22, 1775 (1983) 3. R. A. Shelby, D. R. Smith, S. Schultz, Science 292, 79 (2001) 4. J. B. Pendry, A. J. Holden, D. J. Robbins, W. J. Stewart, IEEE Trans. Microw. Theory Tech. 47, 2075 (1999) 5. P. Grunberg, F. Metawe, Phys. Rev. Lett. 39, 1561 (1977) Science http://www.sciencemag.org From checker at panix.com Thu May 26 18:58:50 2005 From: checker at panix.com (Premise Checker) Date: Thu, 26 May 2005 14:58:50 -0400 (EDT) Subject: [Paleopsych] SW: On Female Development Message-ID: Medical Biology: On Female Development http://scienceweek.com/2004/sc040924-3.htm The following points are made by Ieuan A. Hughes (New Engl. J. Med. 2004 351:748): 1) It is an indisputable fact that the constitutive sex in mammalian fetal development is female. Furthermore, a functioning ovary is not required for the female phenotype, whereas a testis is mandatory for male development. More than 50 years after Jost performed experiments in rabbit embryos in which castration was followed by testis engraftment, his observations remain a beacon of clarity illuminating the mechanism of fetal sexual differentiation -- the physical phenotype consistent with male or female sex.(1) 2) Another indisputable fact of mammalian fetal sexual development is the morphogenesis of a bipotential gonad in the genital ridge, dual internal genital ducts, and a common anlagen for the external genitalia. So what factors determine sexual dimorphism? Clearly, the production of an XY zygote at fertilization is the simplest explanation for the development of a bipotential gonad into a testis. The production of müllerian inhibiting substance by Sertoli cells and androgens by Leydig cells in a critical-concentration-dependent and time-dependent manner induces male sexual differentiation by means of a hormone-dependent process. 
3) In contrast, a panoply of genes are involved in gonadal development and, hence, sex determination.(2) There is compelling evidence that a gene on the short arm of the Y chromosome close to the pseudo-autosomal region -- called SRY, for the sex-determining region of the Y chromosome -- is the key player in testis development. The translocation of SRY to the X chromosome during paternal meiosis explains testis development in 90 percent of XX males. Mutations in SRY are present in 10 to 15 percent of XY females who have complete gonadal dysgenesis, and the insertion of the corresponding mouse gene sry into XX mouse embryos results in male offspring. 4) A number of other genes are involved in testis determination, such as SOX9, which is induced by SRY; steroidogenic factor 1 (SF1); Wilms' tumor 1 (WT1); and DAX1. Perturbations in the structure or function of these genes or their products cause sex-disorder syndromes in humans. More testis-determining genes have been identified through studies in transgenic mice. However, their relevance to disorders such as XY gonadal dysgenesis in humans remains unknown. 5) Are there no key genes for human ovarian determination and differentiation of the female reproductive tract? Some syndromes associated with gene duplication cause XY sex reversal. For instance, the duplication of a 160-kb region on the short arm of the X chromosome (Xp21.3) leads to the development of an XY female. Since this dose-sensitive sex locus contains the DAX1 gene, one might ask whether DAX1 is an ovarian-determining gene. Apparently not, since female mice with a mutant dax1 gene have normal ovaries. Studies of dax1 in concert with sry in transgenic mice suggest that DAX1 normally complements the role of SRY in testis determination but can act in an antitestis manner when it is overexpressed. A factor whose role in female sexual development seems more critical is highlighted by the duplication of chromosome 1p31-p35. 
Within this region lies the WNT4 gene, which encodes one member of a large family of locally acting growth factors that are involved in intracellular signaling.(3) The overexpression of WNT4 up-regulates DAX1 and may cause sex reversal by means of the same mechanism to which Xp21 duplication is ascribed.(4) References (abridged): 1. Jost A. Problems of fetal endocrinology: the gonadal and hypophyseal hormones. Recent Prog Horm Res 1953;8:379-418 2. MacLaughlin DT, Donahoe PK. Sex determination and differentiation. N Engl J Med 2004;350:367-378 3. Dale TC. Signal transduction by the Wnt family of ligands. Biochem J 1998;329:209-223 4. Vainio S, Heikkila M, Kispert A, Chin N, McMahon AP. Female development in mammals is regulated by Wnt-4 signalling. Nature 1999;397:405-409 New Engl. J. Med. http://www.nejm.org -------------------------------- Related Material: PRENATAL HORMONE EXPOSURE AND SEXUAL VARIATION American Scientist 2003 91:218 The following points are made by John G. Vandenbergh: 1) X and Y chromosomes are only the beginning of sex determination. Biologists have recently taken a closer look at the events between fertilization and sexual maturity that establish an individual's sexual characteristics. These events help explain variability among individuals in sexual anatomy, physiology and behavior. For example, an XX individual can exhibit some masculine traits if hormone production or sensitivity is abnormal during early development. 2) In fact, there is almost a continuum of sexual traits between male and female. The ability of environmental influences during development to produce such a continuum demonstrates that another classical dichotomy, between nature and nurture, is in fact a synergy: Genes and environment work together to produce an organism. Specific genes are only turned on when the environment of the cell, tissue, organ or organism calls for them. 
In the case of sexual characteristics, one mechanism by which environmental variables exert their influence is through the activity of hormones. 3) Hormones are substances released by cells into the bloodstream, where they travel throughout the body and influence the function of other, distant cells. Hormone molecules themselves, or the enzymes that produce those molecules, are encoded by the genome, but hormone concentrations can be modulated by a wide array of environmental factors, including stress, food consumption, temperature, and time of year. In turn, hormone concentrations modulate the expression of genes in a variety of different cell and tissue types, producing anatomical, physiological and behavioral differences. -------------------------------- Related Material: GENOME BIOLOGY: ON THE Y CHROMOSOME The following points are made by Huntington F. Willard (Nature 2003 423:810): 1) Because of its distinctive role in sex determination, the Y chromosome has long attracted special attention from geneticists, evolutionary biologists and even the lay public. It is known to consist of regions of DNA that show quite distinctive genetic behavior and genomic characteristics. The two human sex chromosomes, X and Y, originated a few hundred million years ago from the same ancestral "autosome" -- a non-sex chromosome --during the evolution of sex determination. They then diverged in sequence over the succeeding aeons. Nowadays, there are relatively short regions at either end of the Y chromosome that are still identical to the corresponding regions of the X chromosome, reflecting the frequent exchange of DNA between these regions ("recombination") that occurs during sperm production. 
But more than 95% of the modern-day Y chromosome is male-specific, consisting of some 23 million base pairs (Mb) of euchromatin -- the part of our genome containing most of the genes -- and a variable amount of heterochromatin, consisting of highly repetitive DNA and often dismissed as non-functional. In an accomplishment that can only be described as heroic, Skaletsky et al (Nature 2003 423:825) have reported the complete sequence of the 23-Mb euchromatic segment, which they designate the MSY, for "male-specific region of the Y". 2) Prioritization in the Human Genome Project had led to the heterochromatic regions of the Y and other chromosomes being set aside to be dealt with later, if ever. But there was reason to hope that the euchromatin of the Y chromosome would present no more difficult a sequencing challenge than that found elsewhere in the genome. That supposition could not have been more wrong. As Skaletsky et al report, the MSY is a mosaic of complex and interrelated sequences that made this one of the most problematic regions of the human genome thus far to be successfully sequenced and assembled. 3) For instance, approximately 10 to 15% of the MSY consists of stretches of sequence that moved there from the X chromosome within only the past few million years. These stretches are still 99% identical to their X-chromosome counterparts and are dominated by a high proportion of interspersed repetitive sequences, with only two genes. A further 20% of the MSY consists of a class of sequences ("X-degenerate" sequences) that are more distantly related to the X chromosome, reflecting their more ancient common origin. And the remainder comprises a web of Y-specific repetitive sequences that make up a series of palindromes -- sequences that read the same on both strands of the DNA double helix, with two "arms" stretching out from a central point of mirrored symmetry. 
These palindromes come in a range of sizes, up to almost 3 Mb in length, with more than 99.9% identity between the two arms of each palindrome. Nature http://www.nature.com/nature -------------------------------- Related Material: ON X AND Y CHROMOSOMES The following points are made by Bob Beale (The Scientist 2001 23 Jul): 1) The understanding of sex is currently experiencing important changes due to new insights gained from sociology, biology, and medicine. The differences between females and males, once believed clearcut, now appear blurred. For example, there is now evidence that the Y chromosome has degenerated over evolutionary time. In mice, some genes involved in early stages of sperm production are on the female X chromosome. In addition, the gene that can produce ambiguous genitalia has been identified. 2) Genetic studies are revealing that men and women are more similar than distinct. Of the approximately 31,000 genes in the human genome, men and women differ only in the two sex chromosomes, X and Y, and only a few dozen genes are apparently involved. Moreover, it is now known that the Y chromosome has only approximately 30 genes, and many of those are involved in basic housekeeping duties or in regulating sperm production. In contrast, the X chromosome has hundreds of genes with a vast array of roles. 3) Strong evidence exists that the X and Y chromosomes were once a matching pair of X chromosomes. It is unclear why the male sex chromosome, the Y chromosome, shrunk and shed most of its genes over time. Humans are not alone in this phenomenon: the degeneration of the Y chromosome is well documented in fruit flies and is clearly an ongoing process in all animals. Detailed molecular and embryological studies are revealing how genes determine the anatomical sex of a fetus and how the process can be corrupted. 
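The palindromes described in the Y-chromosome summary above are defined with respect to both strands of the double helix: a DNA palindrome is a sequence that equals its own reverse complement. This is easy to check in a few lines (the example sequences below are illustrative and not from the papers; GAATTC is the well-known EcoRI recognition site):

```python
def reverse_complement(seq: str) -> str:
    """Return the reverse complement of a DNA sequence."""
    pairs = {"A": "T", "T": "A", "G": "C", "C": "G"}
    return "".join(pairs[base] for base in reversed(seq))

def is_palindrome(seq: str) -> bool:
    """A biological palindrome reads the same as its reverse complement."""
    return seq == reverse_complement(seq)

# GAATTC (the EcoRI site) is a classic biological palindrome.
print(is_palindrome("GAATTC"))   # True
print(is_palindrome("GATTACA"))  # False
```

The palindromic arms Skaletsky et al describe are this same structure scaled up to megabase lengths, with near-perfect (rather than exact) identity between the arms.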
The Scientist http://www.the-scientist.com From checker at panix.com Thu May 26 18:58:59 2005 From: checker at panix.com (Premise Checker) Date: Thu, 26 May 2005 14:58:59 -0400 (EDT) Subject: [Paleopsych] SW: On Computational and Systems Biology Message-ID: ---------- Forwarded message ---------- Date: Wed, 25 May 2005 11:08:12 -0400 (EDT) From: Premise Checker To: Premise Checker: ; Subject: SW: On Computational and Systems Biology Theoretical Biology: On Computational and Systems Biology http://scienceweek.com/2004/sc040924-1.htm The following points are made by Albert Goldbeter (Current Biology 2004 14:R601): 1) Systems biology, computational biology, integrative biology --many names are being used to describe an emerging field that is characterized by the application of quantitative theoretical methods and a tendency to take a global view of problems in biology. This field is not entirely novel, but what is clear and significant is that the life sciences community recognizes its increasing importance. This is the really new aspect: many experimentalists are beginning to accept the view that theoretical models and computer simulations can be useful to address the dynamic behavior of complex regulatory networks in biological systems. 2) Theoretical or mathematical biology has existed for many decades, as attested by the journals that carry these terms as part of their names. Until recently, however, these journals were outside of the mainstream and largely ignored by the majority of molecular and cell biologists. As the attitude to theoretical approaches in biology is shifting, it is not surprising to see their revival under new names, if only because a change in name is often needed to focus attention. After all, even at the cellular level, many sensory systems are built to respond to changes in stimulus intensity and adapt to constant signals. 
3) The hype that currently surrounds computational and systems biology has the beneficial consequences of triggering further interest and creating momentum for new opportunities, but it also carries some dangers [1], in particular that of making the field appear merely a fashion. The French stylist Coco Chanel once said "la mode, c'est ce qui se démode" -- fashion is what goes out of fashion. In the view of the author, this does not apply to computational approaches to biological dynamics, which are here to stay. 4) Regarding the surge of interest in theoretical approaches to biology it is natural to ask: why now? One triggering factor is undoubtedly the completion of genome projects for a number of species and the realization that the sequences alone cannot tell us how cells and organisms function. Understanding dynamic cellular behavior and making sense of the data that are accumulating at an ever increasing pace requires the study of protein and gene regulatory networks. This network approach naturally encourages one to take a more integrative view of the cell and, at an even higher level, of the whole organism. 5) Quantitative models show that certain types of biological behavior occur only in precise conditions, within a domain bounded by critical parameter values. This can contrast with the intuitive expectations from simple verbal descriptions. This is well illustrated by cellular rhythms [2,3]. Thus, cytosolic Ca2+ oscillations are triggered in various types of cell by treatment with a hormone or neurotransmitter. But repetitive Ca2+ spiking only occurs in a range of stimulation bounded by two critical values: below and above this range, the intracellular Ca2+ concentration reaches a low or a high steady-state level, respectively. Another example is the well-known generation of oscillations in models based on negative feedback.
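That negative-feedback example can be made concrete in a few lines. The sketch below is not taken from the cited papers; it uses the classic symmetric three-variable Goodwin loop with illustrative parameters. It shows exactly the threshold behavior described: merely wiring up negative feedback is not enough, and sustained oscillations appear only when end-product repression is sufficiently cooperative (for this symmetric model, a Hill coefficient above the classical threshold of 8):

```python
def simulate_goodwin(n, a=0.5, dt=0.01, t_end=400.0):
    """Euler integration of a three-variable Goodwin negative-feedback loop.

    dX/dt = 1/(1 + Z**n) - a*X   (end product Z represses production of X)
    dY/dt = X - a*Y
    dZ/dt = Y - a*Z

    Returns the late-time oscillation amplitude of Z (max - min after the
    transient), which is ~0 at a stable steady state and clearly nonzero
    on a limit cycle.
    """
    x = y = z = 0.0
    steps = int(t_end / dt)
    trace = []
    for _ in range(steps):
        dx = 1.0 / (1.0 + z ** n) - a * x
        dy = x - a * y
        dz = y - a * z
        x, y, z = x + dt * dx, y + dt * dy, z + dt * dz
        trace.append(z)
    tail = trace[3 * steps // 4:]   # discard the transient
    return max(tail) - min(tail)

# Low cooperativity: the loop settles to a steady state.
print(simulate_goodwin(n=2))    # amplitude ~0 (damped)
# High cooperativity: sustained oscillations (limit cycle).
print(simulate_goodwin(n=12))   # clearly nonzero amplitude
```

With n=2 the trajectory spirals into a stable steady state; with n=12 it settles onto a limit cycle. This is the point made verbally in the text: oscillation exists only inside a domain bounded by critical parameter values.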
It is straightforward to explain in words why oscillations can readily be generated by negative feedback; but this verbal explanation largely misses the point, as it fails to explain why oscillations only occur in precise conditions, which critically affect both the degree of cooperativity of repression and the delay in the negative feedback loop.(2-5) References (abridged): 1. North, G. (2003). Biophysics and the place of theory in biology. Curr. Biol. 13, R719-R720 2. Goldbeter, A. (1996). Biochemical Oscillations and Cellular Rhythms. (Cambridge, UK: Cambridge Univ. Press) 3. Goldbeter, A. (2002). Computational approaches to cellular rhythms. Nature 420, 238-245 4. Thomas, R. and d'Ari, R. (1990). Biological Feedback. (Boca Raton, FL: CRC Press) 5. Pomerening, J.R., Sontag, E.D., and Ferrell, J.E.Jr. (2003). Building a cell cycle oscillator: hysteresis and bistability in the activation of Cdc2. Nat. Cell Biol. 5, 346-351 Current Biology http://www.current-biology.com -------------------------------- Related Material: THEORETICAL BIOLOGY: ON BIOLOGICAL ANALYSIS The following points are made by Eors Szathmary (Current Biology 2004 14:R145): 1) Billions of years of evolution have produced organisms of stunning diversity. Some of these are relatively simple, like the bacteria; others show impressive complexity. For two decades, the author has worked on a theoretical reconstruction and understanding of the major transitions that generated the levels of biological organization that we see today. Although many in biology have an antipathy to mathematics, the author "simply cannot live without it." A large part of his research consists of making models of intermediate stages of organization and the evolutionary transitions between them. 2) Although theoretical biology is avoided by many biologists because of their formulae phobia, theoretical biology is not necessarily mathematical, at least not when important ideas and concepts are conceived for the first time. 
The theory of Charles Darwin (1809-1882), as he presented it, was not mathematical (although later he commented that his reluctance to embrace mathematics was foolish, as mathematically minded persons seem to have an "extra sense"). But neither was the conceptualization by Michael Faraday (1791-1867) of the electromagnetic field: the mathematical structure was built later by James Clerk Maxwell (1831-1879). The theoretical evolutionary embryologist August Weismann (1834-1914) was often more rigorous than Darwin, but still not mathematical. 3) The Golden Age of theoretical biology was the first half of the 20th century, when Ronald Fisher (1890-1962), John Burdon Sanderson Haldane (1892-1964) and Sewall Wright (1889-1988) founded population genetics and Alfred Lotka (1880-1949), Vito Volterra (1860-1940) and Vladimir Kostitzin (1883-1963) started to build up theoretical ecology. These seeds have borne many fruits since then. Take evolutionary biology, for example. A few decades after the Golden Age, evolutionary biologists started to tackle (ultimately with considerable success) questions where the Darwinian answer is far from obvious. Why do we age? Why are there sterile insect castes? At first it does not seem to make much sense to argue that your death or sterility increases your fitness. But evolutionary theory can provide satisfactory resolutions of these conundrums. In some cases even the question itself cannot be formulated well enough without some modeling: the problem of the evolutionary maintenance of sex is a case in point. Whole sub-disciplines, like evolutionary game theory, have been set up to meet such challenges. 4) The problems become a lot harder when we come to the large-scale dynamics of evolution. Imagine, say, a thousand Earth-like planets with exactly the same initial conditions of planetary development. After one, two, three billion years (and so on), how many of them would still have living creatures? And would they be like the eukaryotes?
We have simply no knowledge about the time evolution of this distribution, and "educated" guesses differ widely.(1-4) References: 1. Benner, S.A. (2003). Synthetic biology: Act natural. Nature 421, 118 2. Ganti, T. (1971). The Principle of Life (in Hungarian). (Budapest: Gondolat) 3. Ganti, T. (2003). The Principles of Life. (Oxford: Oxford University Press) 4. Maynard Smith, J. and Szathmary, E. (1995). The Major Transitions in Evolution. (Oxford: Freeman/Spektrum) Current Biology http://www.current-biology.com -------------------------------- Related Material: THEORETICAL BIOLOGY: ON THEORY IN CELL BIOLOGY The following points are made by John J. Tyson (Current Biology 2004 14:R262): 1) Many areas of modern science and engineering owe their strength and vitality to a rich interplay of experiment, theory, and computation. For example, quantum chemistry, aerodynamics, meteorology, and membrane electrophysiology are all firmly based on extensive quantitative observations, sound theoretical formalisms, and accurate predictive calculations. Molecular cell biology, on the other hand, is still, for the most part, proudly and precariously balanced on one leg -- experimental observations -- and its staunchest defenders believe that theoretical and computational approaches have little or nothing to contribute to our understanding of cell physiology(1). 2) This view is surely wrong. A living cell is an intrinsically dynamical system, ceaselessly adapting in space, time, and internal state to environmental challenges. Catalogs of genes and static diagrams of the structural and functional relationships of proteins, though necessary for full understanding, can never adequately account for the dynamism of organelles and cells. Take, for example, cilia: these beautiful tiny whips, attached to many cells, lash back and forth in wondrous synchrony, propelling cells through liquids or liquids past cells.
Without cilia you would not have been born (they transport eggs from ovary to uterus) and you could not breathe (they continually sweep mucus and debris from the lungs and airways). How do these elegant little machines accomplish their essential tasks? 3) Open any modern textbook of cell biology and you will find an attempt to answer this fundamental question. What you will see is a parts list of a typical cilium -- dynein, tubulin, nexin, and so on -- and a pseudo-color, artist's rendition of how the parts seem to be connected. Then a few words about how dynein molecules can pull on microtubules, causing them to slide past each other. End of story. 4) This explanation leaves one unsatisfied. How are we to understand the dynamic function of a cilium from this static textbook picture? The essence of a cilium is to move in space and time. What principles organize the tiny pulls of each dynein motor into the "power stroke" that sweeps along the cilium from base to tip? What forces drive the recovery stroke along a trajectory so different from the power stroke? What invisible choreographer synchronizes the movements of vast fields of cilia to carry the egg to its destination? 5) These sorts of questions cannot be answered by cataloging parts, defining their connections, and drawing schematic diagrams. The problem demands a movie. "Well then, if you want a movie, go to the electronic version of the textbook and click on the icon for the QuickTime movie of a beating cilium." What you will see is either a living cilium observed through a microscope or an animated cartoon of how the author imagines a cilium to move. But animation is not scientific explanation; it is likely to be as entertaining and as fundamentally mistaken as a Road Runner cartoon.
What we desire is a realistic computation of the coordinated motion of a field of cilia, based on solid principles of biochemistry and biophysics, including the forces exerted by motor proteins on the stiff and elastic components of the axoneme, and the forces exerted by cilia on the viscoelastic liquid in which they are immersed. Although much interesting work has been done on this problem [2-5], a full and satisfying solution remains for the future. References (abridged): 1 Lawrence, P. (2004). Theoretical embryology: a route to extinction?. Curr. Biol. 14, R7-R8 2 Dillon, R.H. and Fauci, L.J. (2000). An integrative model of internal axoneme mechanics and external fluid dynamics in ciliary beating. J. Theor. Biol. 207, 415-430 3 Gueron, S. and Levit-Gurevich, K. (2001). A three-dimensional model for ciliary motion based on the internal 9+2 structure. Proc. R. Soc. Lond. B. Biol. Sci. 268, 599-607 4 Brokaw, C.J. (2002). Computer simulation of flagellar movement VIII: Coordination of dynein by local curvature control can generate helical bending waves. Cell Motil. Cytoskeleton 53, 103-124 5 Lindemann, C.B. (2002). Geometric clutch model version 3: The role of the inner and outer arm dyneins in the ciliary beat. Cell Motil. Cytoskel. 52, 242-254 Current Biology http://www.current-biology.com From anonymous_animus at yahoo.com Fri May 27 01:54:01 2005 From: anonymous_animus at yahoo.com (Michael Christopher) Date: Thu, 26 May 2005 18:54:01 -0700 (PDT) Subject: [Paleopsych] health care In-Reply-To: <200505261800.j4QI0aR18571@tick.javien.com> Message-ID: <20050527015401.10384.qmail@web30811.mail.mud.yahoo.com> Steve says: >>65% of Americans now favor national health care.<< --Only if you actually call it "health care". But not when you call it "socialist tyranny". It's all in the framing : ) Michael __________________________________ Do you Yahoo!? Yahoo! 
Small Business - Try our new Resources site http://smallbusiness.yahoo.com/resources/ From shovland at mindspring.com Fri May 27 03:00:12 2005 From: shovland at mindspring.com (Steve Hovland) Date: Thu, 26 May 2005 20:00:12 -0700 Subject: [Paleopsych] health care Message-ID: <01C5622D.87C87980.shovland@mindspring.com> Before long, many Americans will be surprised to find themselves developing a considerable appetite for "Socialism" :-) Steve Hovland www.stevehovland.net -----Original Message----- From: Michael Christopher [SMTP:anonymous_animus at yahoo.com] Sent: Thursday, May 26, 2005 6:54 PM To: paleopsych at paleopsych.org Subject: [Paleopsych] health care Steve says: >>65% of Americans now favor national health care.<< --Only if you actually call it "health care". But not when you call it "socialist tyranny". It's all in the framing : ) Michael _______________________________________________ paleopsych mailing list paleopsych at paleopsych.org http://lists.paleopsych.org/mailman/listinfo/paleopsych From Euterpel66 at aol.com Fri May 27 03:20:26 2005 From: Euterpel66 at aol.com (Euterpel66 at aol.com) Date: Thu, 26 May 2005 23:20:26 EDT Subject: [Paleopsych] health care Message-ID: <208.1d69872.2fc7ebfa@aol.com> In a message dated 5/26/2005 11:01:07 P.M. Eastern Daylight Time, shovland at mindspring.com writes: --Only if you actually call it "health care". But not when you call it "socialist tyranny". It's all in the framing : ) Michael I imagine that you are one of the lucky ones whose employer provides yours. Mine costs me $468 a month, one third of my monthly income. My daughter has no healthcare insurance at all. Lorraine Rice Believe those who are seeking the truth. Doubt those who find it.
---Andre Gide http://hometown.aol.com/euterpel66/myhomepage/poetry.html From shovland at mindspring.com Fri May 27 05:27:59 2005 From: shovland at mindspring.com (Steve Hovland) Date: Thu, 26 May 2005 22:27:59 -0700 Subject: [Paleopsych] health care Message-ID: <01C56242.2F5B3FC0.shovland@mindspring.com> I pay about $4,200 per year for major medical with a $5,000 deductible... A little Socialism is a good thing. Steve Hovland www.stevehovland.net -----Original Message----- From: Euterpel66 at aol.com [SMTP:Euterpel66 at aol.com] Sent: Thursday, May 26, 2005 8:20 PM To: paleopsych at paleopsych.org Subject: Re: [Paleopsych] health care In a message dated 5/26/2005 11:01:07 P.M. Eastern Daylight Time, shovland at mindspring.com writes: --Only if you actually call it "health care". But not when you call it "socialist tyranny". It's all in the framing : ) Michael I imagine that you are one of the lucky ones whose employer provides yours. Mine costs me $468 a month, one third of my monthly income. My daughter has no healthcare insurance at all. Lorraine Rice Believe those who are seeking the truth. Doubt those who find it. ---Andre Gide http://hometown.aol.com/euterpel66/myhomepage/poetry.html From shovland at mindspring.com Fri May 27 13:25:57 2005 From: shovland at mindspring.com (Steve Hovland) Date: Fri, 27 May 2005 06:25:57 -0700 Subject: [Paleopsych] Rise of the Plagiosphere Message-ID: <01C56284.F2435030.shovland@mindspring.com> By Ed Tenner June 2005 The 1960s gave us, among other mind-altering ideas, a revolutionary new metaphor for our physical and chemical surroundings: the biosphere. But an even more momentous change is coming. Emerging technologies are causing a shift in our mental ecology, one that will turn our culture into the plagiosphere, a closing frontier of ideas.
The Apollo missions' photographs of Earth as a blue sphere helped win millions of people to the environmentalist view of the planet as a fragile and interdependent whole. The Russian geoscientist Vladimir Vernadsky had coined the word "biosphere" as early as 1926, and the Yale University biologist G. Evelyn Hutchinson had expanded on the theme of Earth as a system maintaining its own equilibrium. But as the German environmental scholar Wolfgang Sachs observed, our imaging systems also helped create a vision of the planet's surface as an object of rationalized control and management--a corporate and unromantic conclusion to humanity's voyages of discovery. What NASA did to our conception of the planet, Web-based technologies are beginning to do to our understanding of our written thoughts. We look at our ideas with less wonder, and with a greater sense that others have already noted what we're seeing for the first time. The plagiosphere is arising from three movements: Web indexing, text matching, and paraphrase detection. The first of these movements began with the invention of programs called Web crawlers, or spiders. Since the mid-1990s, they have been perusing the now billions of pages of Web content, indexing every significant word found, and making it possible for Web users to retrieve, free and in fractions of a second, pages with desired words and phrases. The spiders' reach makes searching more efficient than most of technology's wildest prophets imagined, but it can yield unwanted knowledge. The clever phrase a writer coins usually turns out to have been used for years, worldwide--used in good faith, because until recently the only way to investigate priority was in a few books of quotations. And in our accelerated age, even true uniqueness has been limited to 15 minutes. Bons mots that once could have enjoyed a half-life of a season can decay overnight into cliches. Still, the major search engines have their limits. 
Alone, they can check a phrase, perhaps a sentence, but not an extended document. And at least in their free versions, they generally do not produce results from proprietary databases like LexisNexis, Factiva, ProQuest, and other paid-subscription sites, or from free databases that dynamically generate pages only when a user submits a query. They also don't include most documents circulating as electronic manuscripts with no permanent Web address. Enter text-comparison software. A small handful of entrepreneurs have developed programs that search the open Web and proprietary databases, as well as e-books, for suspicious matches. One of the most popular of these is Turnitin; inspired by journalism scandals such as the New York Times' Jayson Blair case, its creators offer a version aimed at newspaper editors. Teachers can submit student papers electronically for comparison with these databases, including the retained texts of previously submitted papers. Those passages that bear resemblance to each other are noted with color highlighting in a double-pane view. Two years ago I heard a speech by a New Jersey electronic librarian who had become an antiplagiarism specialist and consultant. He observed that comparison programs were so thorough that they often flagged chance similarities between student papers and other documents. Consider, then, that Turnitin's spiders are adding 40 million pages from the public Web, plus 40,000 student papers, each day. Meanwhile Google plans to scan millions of library books--including many still under copyright--for its Print database. The number of coincidental parallelisms between the various things that people write is bound to rise steadily. A third technology will add yet more capacity to find similarities in writing. Artificial-intelligence researchers at MIT and other universities are developing techniques for identifying nonverbatim similarity between documents to make possible the detection of nonverbatim plagiarism. 
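Commercial text-comparison services keep their exact methods proprietary, but the core idea behind many of them can be sketched with word-level "shingling": slide a window of k consecutive words over each document and compare the resulting sets. The sketch below is a minimal illustration (the documents and the choice k=3 are made up for the example; real systems add hashing, normalization, and database-scale indexing):

```python
def shingles(text: str, k: int = 3) -> set:
    """Set of k-word shingles (word n-grams) from a document."""
    words = text.lower().split()
    return {" ".join(words[i:i + k]) for i in range(len(words) - k + 1)}

def jaccard(a: str, b: str, k: int = 3) -> float:
    """Jaccard similarity of the two documents' shingle sets."""
    sa, sb = shingles(a, k), shingles(b, k)
    if not (sa or sb):
        return 0.0
    return len(sa & sb) / len(sa | sb)

doc1 = "the quick brown fox jumps over the lazy dog"
doc2 = "a quick brown fox jumps over a sleeping dog"
# The shared run "quick brown fox jumps over" yields 3 common shingles
# out of 11 distinct ones, so the score is 3/11.
print(jaccard(doc1, doc2))  # ≈ 0.27
```

Two documents sharing even one longish run of words produce overlapping shingles; a checking tool would flag any pair whose score exceeds a chosen threshold. This also shows why such tools flag chance similarities, as the antiplagiarism consultant quoted below observes: short common phrases generate matching shingles whether or not any copying occurred.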
While the investigators may have in mind only cases of brazen paraphrase, a program of this kind can multiply the number of parallel passages severalfold. Some universities are encouraging students to precheck their papers and drafts against the emerging plagiosphere. Perhaps publications will soon routinely screen submissions. The problem here is that while such rigorous and robust policing will no doubt reduce cheating, it may also give writers a sense of futility. The concept of the biosphere exposed our environmental fragility; the emergence of the plagiosphere perhaps represents our textual impasse. Copernicus may have deprived us of our centrality in the cosmos, and Darwin of our uniqueness in the biosphere, but at least they left us the illusion of the originality of our words. Soon that, too, will be gone. From waluk at earthlink.net Fri May 27 18:21:45 2005 From: waluk at earthlink.net (G. Reinhart-Waller) Date: Fri, 27 May 2005 11:21:45 -0700 Subject: [Paleopsych] Re: Rise of the Plagiosphere Message-ID: <42976539.3080103@earthlink.net> Steve Hovland writes: >>Some universities are encouraging students to precheck their papers and drafts against the emerging plagiosphere. Perhaps publications will soon routinely screen submissions. The problem here is that while such rigorous and robust policing will no doubt reduce cheating, it may also give writers a sense of futility. The concept of the biosphere exposed our environmental fragility; the emergence of the plagiosphere perhaps represents our textual impasse. Copernicus may have deprived us of our centrality in the cosmos, and Darwin of our uniqueness in the biosphere, but at least they left us the illusion of the originality of our words. Soon that, too, will be gone.<< What you identify as a sense of futility should definitely be felt by those scholars who follow the flow and run the risk of being ID'd as plagiarists. However, there are a few of us who have chosen to take the road less traveled by.
Up to now our ideas have been pooh-poohed by the mainstream, but I sense a new world awakening that will be filled with the originality of our thoughts, even if only an illusion. Regards, Gerry Reinhart-Waller Independent Scholar author of: * Alekseev Manuscript: an archaeology of the former Soviet Union * Pharaoh is Dead: formation of the predynastic state in Egypt forthcoming: * Collapse of California: social and economic decay of the Golden State From Euterpel66 at aol.com Fri May 27 19:47:48 2005 From: Euterpel66 at aol.com (Euterpel66 at aol.com) Date: Fri, 27 May 2005 15:47:48 EDT Subject: [Paleopsych] NYT: China, New Land of Shoppers, Builds Malls on Gigantic S... Message-ID: <215.1adaffd.2fc8d364@aol.com> In a message dated 5/25/2005 2:48:24 P.M. Eastern Daylight Time, checker at panix.com writes: "Forget the idea that consumers in China don't have enough money to spend," said David Hand, a real estate and retailing expert at Jones Lang LaSalle in Beijing. "There are people with a lot of money here. And that's driving the development of these shopping malls." There are still millions of people in China who do not have enough money to spend, but many enterprising Chinese are starting private businesses. One young man I met had his own travel business and was doing quite well. Another young woman in Xi'an who was a travel guide invited me back to her Spartan but quite spacious apartment. I did, however, meet the poverty-stricken laborers and beggars and trinket sellers who will never enter the new malls. Lorraine Rice Believe those who are seeking the truth. Doubt those who find it. ---Andre Gide http://hometown.aol.com/euterpel66/myhomepage/poetry.html -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From jvkohl at bellsouth.net Sat May 28 03:21:28 2005 From: jvkohl at bellsouth.net (JV Kohl) Date: Fri, 27 May 2005 23:21:28 -0400 Subject: [Paleopsych] health care In-Reply-To: <208.1d69872.2fc7ebfa@aol.com> References: <208.1d69872.2fc7ebfa@aol.com> Message-ID: <4297E3B8.9000400@bellsouth.net> http://www.mercola.com/2005/feb/16/medical_costs.htm "About half of all bankruptcies in 2001 were the result of medical problems and, surprisingly, most of those (more than three-quarters) who went bankrupt were covered by health insurance at the start of the illness." "On average, out-of-pocket medical costs reached $13,460 for those with private insurance and $10,893 for those with no insurance. Ironically, those with the highest costs, on average about $18,000, were people who initially had private health insurance but lost it after becoming ill." Even those who don't go bankrupt tend to lose whatever savings they have. I can't imagine much worse than seeing everything I've worked for my entire life be spent on medical costs--and in many cases that's what happens. I've worked in the medical laboratory for many years and have seen test costs escalate far beyond most people's ability to pay. A co-worker recently was diagnosed with Hodgkin's lymphoma and has estimated she will be out of pocket more than $15,000 by the time she finishes chemotherapy. One of the worst things about this is that we work at a hospital, and have a group policy through Blue Cross/Blue Shield. Even though she has seen only participating providers, the number of charges not covered is enlightening. Most of us would be better off being illegal immigrants, who get their medical care for free. About 4 non-English-speaking patients with no insurance deliver babies here (in Northern Georgia) compared to each non-Hispanic White. Cancer and disability supplements with set payments are expensive jokes that will only stay the bankruptcy for a short time. 
Meanwhile, the Veterans Administration (the last place anyone wants to go for medical treatment) no longer accepts honorably discharged Veterans into its program, unless the Vet is earning less than $18,000 per year. Until two years ago, the program was open to all Vets. Now that there will be more injured vets coming home, I'd need to be half dead and completely broke before seeking help from the Veterans Administration. I wish you all good health; you need it. Jim Kohl Euterpel66 at aol.com wrote: > In a message dated 5/26/2005 11:01:07 P.M. Eastern Daylight Time, > shovland at mindspring.com writes: > > --Only if you actually call it "health care". But not > when you call it "socialist tyranny". It's all in the > framing : ) > > Michael > > I imagine that you are one of the lucky ones whose employer provides > yours. Mine costs me $468 a month, one third of my monthly income. My > daughter has no healthcare insurance at all. > > Lorraine Rice > > Believe those who are seeking the truth. Doubt those who find it. > ---Andre Gide > > http://hometown.aol.com/euterpel66/myhomepage/poetry.html > >------------------------------------------------------------------------ > >_______________________________________________ >paleopsych mailing list >paleopsych at paleopsych.org >http://lists.paleopsych.org/mailman/listinfo/paleopsych > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From kendulf at shaw.ca Sat May 28 04:48:44 2005 From: kendulf at shaw.ca (Val Geist) Date: Fri, 27 May 2005 21:48:44 -0700 Subject: [Paleopsych] health care References: <208.1d69872.2fc7ebfa@aol.com> <4297E3B8.9000400@bellsouth.net> Message-ID: <009901c56340$873d3780$873e4346@yourjqn2mvdn7x> Become Canadians! 
Sincerely, Val Geist ----- Original Message ----- From: JV Kohl To: The new improved paleopsych list Sent: Friday, May 27, 2005 8:21 PM Subject: Re: [Paleopsych] health care http://www.mercola.com/2005/feb/16/medical_costs.htm "About half of all bankruptcies in 2001 were the result of medical problems and, surprisingly, most of those (more than three-quarters) who went bankrupt were covered by health insurance at the start of the illness." "On average, out-of-pocket medical costs reached $13,460 for those with private insurance and $10,893 for those with no insurance. Ironically, those with the highest costs, on average about $18,000, were people who initially had private health insurance but lost it after becoming ill." Even those who don't go bankrupt tend to lose whatever savings they have. I can't imagine much worse than seeing everything I've worked for my entire life be spent on medical costs--and in many cases that's what happens. I've worked in the medical laboratory for many years and have seen test costs escalate far beyond most people's ability to pay. A co-worker recently was diagnosed with Hodgkin's lymphoma and has estimated she will be out of pocket more than $15,000 by the time she finishes chemotherapy. One of the worst things about this is that we work at a hospital, and have a group policy through Blue Cross/Blue Shield. Even though she has seen only participating providers, the number of charges not covered is enlightening. Most of us would be better off being illegal immigrants, who get their medical care for free. About 4 non-English speaking patients with no insurance deliver babies here (in Northern Georgia) compared to each non-Hispanic White. Cancer and disability supplements with set payments are expensive jokes, that will only stay the bankruptcy for a short time.. 
Meanwhile, the Veteran's administration (the last place anyone wants to go for medical treatment) no longer accepts honorably discharged Veterans into their program, unless the Vet is earning less than $18,000 per year. Until two years ago, the program was open to all Vets. Now that there will be more injured vets coming home, I'd need to be half dead and competely broke before seeking help from the Veterans administration. I wish you all good health, you need it. Jim Kohl Euterpel66 at aol.com wrote: In a message dated 5/26/2005 11:01:07 P.M. Eastern Daylight Time, shovland at mindspring.com writes: --Only if you actually call it "health care". But not when you call it "socialist tyranny". It's all in the framing : ) Michael I imagine that you are one of the lucky ones whose employer provides yours. Mine costs me $468 a month, one third of my monthly income. My daughter has no healthcare insurance at all. Lorraine Rice Believe those who are seeking the truth. Doubt those who find it. ---Andre Gide http://hometown.aol.com/euterpel66/myhomepage/poetry.html ---------------------------------------------------------------------------- _______________________________________________ paleopsych mailing list paleopsych at paleopsych.org http://lists.paleopsych.org/mailman/listinfo/paleopsych ------------------------------------------------------------------------------ -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From shovland at mindspring.com Sat May 28 15:43:34 2005 From: shovland at mindspring.com (Steve Hovland) Date: Sat, 28 May 2005 08:43:34 -0700 Subject: [Paleopsych] health care Message-ID: <01C56361.55A1C5D0.shovland@mindspring.com> These days the most important health care is what you do for yourself. Drinking filtered water, not drinking very much alcohol, eating good food, and getting the right amount of exercise are the core. The things you can do to keep from getting old too fast are also the things that keep you healthy. "The Anti-Aging Solution" by Giampapa, Pero, and Zimmerman is an excellent starting point. Steve Hovland www.stevehovland.net -----Original Message----- From: JV Kohl [SMTP:jvkohl at bellsouth.net] Sent: Friday, May 27, 2005 8:21 PM To: The new improved paleopsych list Subject: Re: [Paleopsych] health care http://www.mercola.com/2005/feb/16/medical_costs.htm "About half of all bankruptcies in 2001 were the result of medical problems and, surprisingly, most of those (more than three-quarters) who went bankrupt were covered by health insurance at the start of the illness." "On average, out-of-pocket medical costs reached $13,460 for those with private insurance and $10,893 for those with no insurance. Ironically, those with the highest costs, on average about $18,000, were people who initially had private health insurance but lost it after becoming ill." Even those who don't go bankrupt tend to lose whatever savings they have. I can't imagine much worse than seeing everything I've worked for my entire life be spent on medical costs--and in many cases that's what happens. I've worked in the medical laboratory for many years and have seen test costs escalate far beyond most people's ability to pay. A co-worker recently was diagnosed with Hodgkin's lymphoma and has estimated she will be out of pocket more than $15,000 by the time she finishes chemotherapy. 
One of the worst things about this is that we work at a hospital, and have a group policy through Blue Cross/Blue Shield. Even though she has seen only participating providers, the number of charges not covered is enlightening. Most of us would be better off being illegal immigrants, who get their medical care for free. About 4 non-English speaking patients with no insurance deliver babies here (in Northern Georgia) compared to each non-Hispanic White. Cancer and disability supplements with set payments are expensive jokes, that will only stay the bankruptcy for a short time.. Meanwhile, the Veteran's administration (the last place anyone wants to go for medical treatment) no longer accepts honorably discharged Veterans into their program, unless the Vet is earning less than $18,000 per year. Until two years ago, the program was open to all Vets. Now that there will be more injured vets coming home, I'd need to be half dead and competely broke before seeking help from the Veterans administration. I wish you all good health, you need it. Jim Kohl Euterpel66 at aol.com wrote: > In a message dated 5/26/2005 11:01:07 P.M. Eastern Daylight Time, > shovland at mindspring.com writes: > > --Only if you actually call it "health care". But not > when you call it "socialist tyranny". It's all in the > framing : ) > > Michael > > I imagine that you are one of the lucky ones whose employer provides > yours. Mine costs me $468 a month, one third of my monthly income. My > daughter has no healthcare insurance at all. > > Lorraine Rice > > Believe those who are seeking the truth. Doubt those who find it. 
> ---Andre Gide > > http://hometown.aol.com/euterpel66/myhomepage/poetry.html > >------------------------------------------------------------------------ > >_______________________________________________ >paleopsych mailing list >paleopsych at paleopsych.org >http://lists.paleopsych.org/mailman/listinfo/paleopsych > > << File: ATT00000.html >> << File: ATT00001.txt >> From waluk at earthlink.net Sat May 28 16:16:45 2005 From: waluk at earthlink.net (G. Reinhart-Waller) Date: Sat, 28 May 2005 09:16:45 -0700 Subject: [Paleopsych] health care In-Reply-To: <01C56361.55A1C5D0.shovland@mindspring.com> References: <01C56361.55A1C5D0.shovland@mindspring.com> Message-ID: <4298996D.5090903@earthlink.net> Medical costs without insurance become prohibitive for someone who has no formal job and works as an Independent Scholar. Your suggestions are right on target. Should filtered water also be used for pets? Gerry Reinhart-Waller Steve Hovland wrote: >These days the most important health care is >what you do for yourself. > >Drinking filtered water, not drinking very much >alcohol, eating good food, and getting the right >amount of exercise are the core. > >The things you can do to keep from getting >old too fast are also the things that keep you >healthy. > >"The Anti-Aging Solution" by Giampapa, Pero, >and Zimmerman is an excellent starting point. > >Steve Hovland >www.stevehovland.net > > >-----Original Message----- >From: JV Kohl [SMTP:jvkohl at bellsouth.net] >Sent: Friday, May 27, 2005 8:21 PM >To: The new improved paleopsych list >Subject: Re: [Paleopsych] health care > >http://www.mercola.com/2005/feb/16/medical_costs.htm > >"About half of all bankruptcies in 2001 were the result of medical >problems and, surprisingly, most of those (more than three-quarters) who >went bankrupt were covered by health insurance at the start of the illness." 
> >"On average, out-of-pocket medical costs reached $13,460 for those with >private insurance and $10,893 for those with no insurance. Ironically, >those with the highest costs, on average about $18,000, were people who >initially had private health insurance but lost it after becoming ill." > >Even those who don't go bankrupt tend to lose whatever savings they >have. I can't imagine much worse than seeing everything I've worked for >my entire life be spent on medical costs--and in many cases that's what >happens. > >I've worked in the medical laboratory for many years and have seen test >costs escalate far beyond most people's ability to pay. A co-worker >recently was diagnosed with Hodgkin's lymphoma and has estimated she >will be out of pocket more than $15,000 by the time she finishes >chemotherapy. One of the worst things about this is that we work at a >hospital, and have a group policy through Blue Cross/Blue Shield. Even >though she has seen only participating providers, the number of charges >not covered is enlightening. Most of us would be better off being >illegal immigrants, who get their medical care for free. About 4 >non-English speaking patients with no insurance deliver babies here (in >Northern Georgia) compared to each non-Hispanic White. Cancer and >disability supplements with set payments are expensive jokes, that will >only stay the bankruptcy for a short time.. > >Meanwhile, the Veteran's administration (the last place anyone wants to >go for medical treatment) no longer accepts honorably discharged >Veterans into their program, unless the Vet is earning less than $18,000 >per year. Until two years ago, the program was open to all Vets. Now >that there will be more injured vets coming home, I'd need to be half >dead and competely broke before seeking help from the Veterans >administration. > >I wish you all good health, you need it. > >Jim Kohl > > >Euterpel66 at aol.com wrote: > > > >>In a message dated 5/26/2005 11:01:07 P.M. 
Eastern Daylight Time, >>shovland at mindspring.com writes: >> >> --Only if you actually call it "health care". But not >> when you call it "socialist tyranny". It's all in the >> framing : ) >> >> Michael >> >>I imagine that you are one of the lucky ones whose employer provides >>yours. Mine costs me $468 a month, one third of my monthly income. My >>daughter has no healthcare insurance at all. >> >>Lorraine Rice >> >>Believe those who are seeking the truth. Doubt those who find it. >>---Andre Gide >> >>http://hometown.aol.com/euterpel66/myhomepage/poetry.html >> >>------------------------------------------------------------------------ >> >>_______________________________________________ >>paleopsych mailing list >>paleopsych at paleopsych.org >>http://lists.paleopsych.org/mailman/listinfo/paleopsych >> >> >> >> > << File: ATT00000.html >> << File: ATT00001.txt >> >_______________________________________________ >paleopsych mailing list >paleopsych at paleopsych.org >http://lists.paleopsych.org/mailman/listinfo/paleopsych > > > From checker at panix.com Sat May 28 19:24:11 2005 From: checker at panix.com (Premise Checker) Date: Sat, 28 May 2005 15:24:11 -0400 (EDT) Subject: [Paleopsych] Meme 042: Public Choice Analysis of the No Child Left Behind Act Message-ID: Meme 042: Public Choice Analysis of the No Child Left Behind Act sent 5.5.27 It won't be that hard to narrow the minority gap in educational achievement, the goal of the No Child Left Behind Act, though actually closing it is far beyond any educational technology the schools can adopt. The reason is that Democrats do not genuinely care about racial equality. Rather, they care about jobs for teachers, since a major part of their support is teachers' unions. So, teaching jobs are protected. But too many incompetent teachers in middle-class schools would provoke a revolt from the middle class. 
What happens is that the most incompetent teachers, as well as the most incompetent administrators, are never fired but safely sent to the underserved schools, whose parents go to the polls at far lower rates than middle-class parents. Every now and then, some oversocialized (the Unabomber's term) liberal observes the hypocrisy, but he is ignored. What counts for voter support is the promise of equality, not the results. The No Child Left Behind Act, in many ways, is a method of curbing the power of the teachers' unions. This power reflects the inability of the States to rein in a *nation-wide* lobbying effort of the teachers' unions. It is a structural *defect* in State constitutions, which we also see in the nation-wide lobbying of trial lawyers and medical malpractice lawyers, which also has been prompting Federal legislation, so far with very limited success. The really big success was reining in the pilot and other airline employee unions through the Airline Deregulation Act of 1978, but that was not a Federal/State issue. The reason there is public education in the first place is the negative externalities created by irresponsible parents who have children they are unable or unwilling to properly support, properly being seen in the eyes of the taxpayers, who are unwilling to just let children of irresponsible parents fend for themselves and, so far, unwilling to restrict the rights of irresponsible parents to have children in the first place. I am speaking, not in terms of some outside notion of social justice, but rather of perceived failures by parents and, higher up, defects in State constitutional design. Liberals often claim that the Federal government needs to step in where the States have "failed," but this notion of failure falls into the realm of transcendental (in this case, liberal) values. What is more objective is State constitutional failure to rein in nation-wide lobbying efforts. 
So the worst teachers and the worst administrators are sloughed off to underserved schools, with the result that a great many of the students there cannot even read, even though reading is far easier than picking up the grammar of a spoken language. NCLB is offering incentives to improve the worst schools. To a limited extent, this will work and politicians will rejoice! Liberal educrats get more money; parents of children at underserved schools get children who can read; conservatives are emboldened by conservative policies that better achieve liberal ends. The conservatives get votes by promising these liberal results using conservative means. What's happened is that conservatives have bought into the idea that reducing or closing the achievement gap is the first order of business. This new ranking came about because conservatives are themselves products of public education, which socialized them into this value structure. Conservative politicians play into this. The chief drawback of NCLB is that it mandates that the States develop State-wide curriculum standards and that schools have to show "adequate yearly progress" on tests designed to measure learning under these State-wide curriculum standards, or else face penalties and, sometime in 2013/14, actual withdrawal of Federal money. The result is that the revised curricula are directed toward those things upon which it is easy to show this "adequate yearly progress," which means drill, drill, drill, discipline, discipline in curricula that are irrelevant to the world of 2025, when kids will get out of school. There had been gradual evolution of State curriculum changes in the direction of critical thinking skills and other skills that will be more and more needed in the world of 2025, but no more. These skills are difficult to conceptualize, difficult to teach, difficult to measure. 
(As the current headmaster of my high school has said, the most critical skill in the future will be how to distinguish good from bogus information on the Internet. I think I can do a pretty good job of it; it would be very difficult to teach the intuitions I have developed, but I'd be willing to give it a try.) So, the curriculum is marching backwards, but the politicians will be able to claim that never before in history are students better able to enter the world of ... 1955! Maybe it is not surprising that we would be upgrading an Eisenhower-era curriculum. After all, most of the movers and shakers in the world went to school when Ike was President. It's the generals fighting the last war, all over again. But it's the gifted who will be most deprived of what might have been. I wonder if teachers' unions will be weakened, like unions of all sorts that have weakened during the last half century. Politics is shifting away from egalitarianism as the major left-right political axis toward universalism vs. pluralism. This means Big Ed will have to shift its promises from ending the minority gap (a universal standard based upon standardized *tests*) to providing race-based education for the genetic plurality that exists among different races. This cannot possibly happen while the Republicans are in charge, since their entire promise in education is to end the minority gap. It will take Democrats about five years to figure out that this justification is no longer needed and that it won't produce jobs like it used to. They will figure out that race-based education can double the number of educrats. Imagine all the research grants, new teacher training programs, assessment development! And the gifted may at last get programs that really help them. 
A pure free market in education would do better, of course, but that will not come about until irresponsible parents stop having children, which is considerably farther away than five years after the Republicans leave. [I am sending forth these memes, not because I agree wholeheartedly with all of them, but to impregnate females of both sexes. Ponder them and spread them.] From checker at panix.com Sat May 28 19:33:34 2005 From: checker at panix.com (Premise Checker) Date: Sat, 28 May 2005 15:33:34 -0400 (EDT) Subject: [Paleopsych] The Times: Richard Dawkins: Creationism: God's gift to the ignorant Message-ID: Richard Dawkins: Creationism: God's gift to the ignorant http://www.timesonline.co.uk/printFriendly/0,,1-196-1619264,00.html 5.5.21 [Dawkins is a devout atheist.] As the Religious Right tries to ban the teaching of evolution in Kansas, Richard Dawkins speaks up for scientific logic Science feeds on mystery. As my colleague Matt Ridley has put it: "Most scientists are bored by what they have already discovered. It is ignorance that drives them on." Science mines ignorance. Mystery - that which we don't yet know; that which we don't yet understand - is the mother lode that scientists seek out. Mystics exult in mystery and want it to stay mysterious. Scientists exult in mystery for a very different reason: it gives them something to do. Admissions of ignorance and mystification are vital to good science. It is therefore galling, to say the least, when enemies of science turn those constructive admissions around and abuse them for political advantage. Worse, it threatens the enterprise of science itself. This is exactly the effect that creationism or "intelligent design theory" (ID) is having, especially because its propagandists are slick, superficially plausible and, above all, well financed. ID, by the way, is not a new form of creationism. It simply is creationism disguised, for political reasons, under a new name. 
It isn't even safe for a scientist to express temporary doubt as a rhetorical device before going on to dispel it. "To suppose that the eye with all its inimitable contrivances for adjusting the focus to different distances, for admitting different amounts of light, and for the correction of spherical and chromatic aberration, could have been formed by natural selection, seems, I freely confess, absurd in the highest degree." You will find this sentence of Charles Darwin quoted again and again by creationists. They never quote what follows. Darwin immediately went on to confound his initial incredulity. Others have built on his foundation, and the eye is today a showpiece of the gradual, cumulative evolution of an almost perfect illusion of design. The relevant chapter of my Climbing Mount Improbable is called "The fortyfold Path to Enlightenment" in honour of the fact that, far from being difficult to evolve, the eye has evolved at least 40 times independently around the animal kingdom. The distinguished Harvard geneticist Richard Lewontin is widely quoted as saying that organisms "appear to have been carefully and artfully designed". Again, this was a rhetorical preliminary to explaining how the powerful illusion of design actually comes about by natural selection. The isolated quotation strips out the implied emphasis on "appear to", leaving exactly what a simple-mindedly pious audience - in Kansas, for instance - wants to hear. The deceitful misquoting of scientists to suit an anti-scientific agenda ranks among the many unchristian habits of fundamentalist authors. But such Telling Lies for God (the book title of the splendidly pugnacious Australian geologist Ian Plimer) is not the most serious problem. There is a more important point to be made, and it goes right to the philosophical heart of creationism. The standard methodology of creationists is to find some phenomenon in nature which Darwinism cannot readily explain. 
Darwin said: "If it could be demonstrated that any complex organ existed which could not possibly have been formed by numerous, successive, slight modifications, my theory would absolutely break down." Creationists mine ignorance and uncertainty in order to abuse his challenge. "Bet you can't tell me how the elbow joint of the lesser spotted weasel frog evolved by slow gradual degrees?" If the scientist fails to give an immediate and comprehensive answer, a default conclusion is drawn: "Right, then, the alternative theory; 'intelligent design' wins by default." Notice the biased logic: if theory A fails in some particular, theory B must be right! Notice, too, how the creationist ploy undermines the scientist's rejoicing in uncertainty. Today's scientist in America dare not say: "Hm, interesting point. I wonder how the weasel frog's ancestors did evolve their elbow joint. I'll have to go to the university library and take a look." No, the moment a scientist said something like that the default conclusion would become a headline in a creationist pamphlet: "Weasel frog could only have been designed by God." I once introduced a chapter on the so-called Cambrian Explosion with the words: "It is as though the fossils were planted there without any evolutionary history." Again, this was a rhetorical overture, intended to whet the reader's appetite for the explanation. Inevitably, my remark was gleefully quoted out of context. Creationists adore "gaps" in the fossil record. Many evolutionary transitions are elegantly documented by more or less continuous series of changing intermediate fossils. Some are not, and these are the famous "gaps". Michael Shermer has wittily pointed out that if a new fossil discovery neatly bisects a "gap", the creationist will declare that there are now two gaps! Note yet again the use of a default. 
If there are no fossils to document a postulated evolutionary transition, the assumption is that there was no evolutionary transition: God must have intervened. The creationists' fondness for "gaps" in the fossil record is a metaphor for their love of gaps in knowledge generally. Gaps, by default, are filled by God. You don't know how the nerve impulse works? Good! You don't understand how memories are laid down in the brain? Excellent! Is photosynthesis a bafflingly complex process? Wonderful! Please don't go to work on the problem, just give up, and appeal to God. Dear scientist, don't work on your mysteries. Bring us your mysteries for we can use them. Don't squander precious ignorance by researching it away. Ignorance is God's gift to Kansas. Richard Dawkins, FRS, is the Charles Simonyi Professor of the Public Understanding of Science, at Oxford University. His latest book is The Ancestor's Tale From checker at panix.com Sat May 28 19:33:53 2005 From: checker at panix.com (Premise Checker) Date: Sat, 28 May 2005 15:33:53 -0400 (EDT) Subject: [Paleopsych] New Scientist: 11 steps to a better brain Message-ID: New Scientist: 11 steps to a better brain http://www.newscientist.com/channel/being-human/mg18625011.900 11 steps to a better brain * 28 May 2005 You must remember this It doesn't matter how brainy you are or how much education you've had - you can still improve and expand your mind. Boosting your mental faculties doesn't have to mean studying hard or becoming a reclusive book worm. There are lots of tricks, techniques and habits, as well as changes to your lifestyle, diet and behaviour that can help you flex your grey matter and get the best out of your brain cells. And here are 11 of them. Smart drugs Does getting old have to mean worsening memory, slower reactions and fuzzy thinking? AROUND the age of 40, honest folks may already admit to noticing changes in their mental abilities. 
This is the beginning of a gradual decline that in all too many of us will culminate in full-blown dementia. If it were possible somehow to reverse it, slow it or mask it, wouldn't you? A few drugs that might do the job, known as "cognitive enhancement", are already on the market, and a few dozen others are on the way. Perhaps the best-known is modafinil. Licensed to treat narcolepsy, the condition that causes people to suddenly fall asleep, it has notable effects in healthy people too. Modafinil can keep a person awake and alert for 90 hours straight, with none of the jitteriness and bad concentration that amphetamines or even coffee seem to produce. In fact, with the help of modafinil, sleep-deprived people can perform even better than their well-rested, unmedicated selves. The forfeited rest doesn't even need to be made good. Military research is finding that people can stay awake for 40 hours, sleep the normal 8 hours, and then pull a few more all-nighters with no ill effects. It's an open secret that many, perhaps most, prescriptions for modafinil are written not for people who suffer from narcolepsy, but for those who simply want to stay awake. Similarly, many people are using Ritalin not because they suffer from attention deficit or any other disorder, but because they want superior concentration during exams or heavy-duty negotiations. The pharmaceutical pipeline is clogged with promising compounds - drugs that act on the nicotinic receptors that smokers have long exploited, drugs that work on the cannabinoid system to block pot-smoking-type effects. Some drugs have also been specially designed to augment memory. Many of these look genuinely plausible: they seem to work, and without any major side effects. So why aren't we all on cognitive enhancers already? "We need to be careful what we wish for," says Daniele Piomelli at the University of California at Irvine. 
He is studying the body's cannabinoid system with a view to making memories less emotionally charged in people suffering from post-traumatic stress disorder. Tinkering with memory may have unwanted effects, he warns. "Ultimately we may end up remembering things we don't want to." Gary Lynch, also at UC Irvine, voices a similar concern. He is the inventor of ampakines, a class of drugs that changes the rules about how a memory is encoded and how strong a memory trace is - the essence of learning (see New Scientist, 14 May, p 6). But maybe the rules have already been optimised by evolution, he suggests. What looks to be an improvement could have hidden downsides. Still, the opportunity may be too tempting to pass up. The drug acts only in the brain, claims Lynch. It has a short half-life of hours. Ampakines have been shown to restore function to severely sleep-deprived monkeys that would otherwise perform poorly. Preliminary studies in humans are just as exciting. You could make an elderly person perform like a much younger person, he says. And who doesn't wish for that? Food for thought You are what you eat, and that includes your brain. So what is the ultimate mastermind diet? YOUR brain is the greediest organ in your body, with some quite specific dietary requirements. So it is hardly surprising that what you eat can affect how you think. If you believe the dietary supplement industry, you could become the next Einstein just by popping the right combination of pills. Look closer, however, and it isn't that simple. The savvy consumer should take talk of brain-boosting diets with a pinch of low-sodium salt. But if it is possible to eat your way to genius, it must surely be worth a try. First, go to the top of the class by eating breakfast. The brain is best fuelled by a steady supply of glucose, and many studies have shown that skipping breakfast reduces people's performance at school and at work. But it isn't simply a matter of getting some calories down. 
According to research published in 2003, kids breakfasting on fizzy drinks and sugary snacks performed at the level of an average 70-year-old in tests of memory and attention. Beans on toast is a far better combination, as Barbara Stewart from the University of Ulster, UK, discovered. Toast alone boosted children's scores on a variety of cognitive tests, but when the tests got tougher, the breakfast with the high-protein beans worked best. Beans are also a good source of fibre, and other research has shown a link between a high-fibre diet and improved cognition. If you can't stomach beans before midday, wholemeal toast with Marmite makes a great alternative. The yeast extract is packed with B vitamins, whose brain-boosting powers have been demonstrated in many studies. A smart choice for lunch is omelette and salad. Eggs are rich in choline, which your body uses to produce the neurotransmitter acetylcholine. Researchers at Boston University found that when healthy young adults were given the drug scopolamine, which blocks acetylcholine receptors in the brain, it significantly reduced their ability to remember word pairs. Low levels of acetylcholine are also associated with Alzheimer's disease, and some studies suggest that boosting dietary intake may slow age-related memory loss. A salad packed full of antioxidants, including beta-carotene and vitamins C and E, should also help keep an ageing brain in tip-top condition by helping to mop up damaging free radicals. Dwight Tapp and colleagues from the University of California at Irvine found that a diet high in antioxidants improved the cognitive skills of 39 ageing beagles - proving that you can teach an old dog new tricks. Round off lunch with a yogurt dessert, and you should be alert and ready to face the stresses of the afternoon. That's because yogurt contains the amino acid tyrosine, needed for the production of the neurotransmitters dopamine and noradrenalin, among others. 
Studies by the US military indicate that tyrosine becomes depleted when we are under stress and that supplementing your intake can improve alertness and memory. Don't forget to snaffle a snack mid-afternoon, to maintain your glucose levels. Just make sure you avoid junk food, and especially highly processed goodies such as cakes, pastries and biscuits, which contain trans-fatty acids. These not only pile on the pounds, but are implicated in a slew of serious mental disorders, from dyslexia and ADHD (attention deficit hyperactivity disorder) to autism. Hard evidence for this is still thin on the ground, but last year researchers at the annual Society for Neuroscience meeting in San Diego, California, reported that rats and mice raised on the rodent equivalent of junk food struggled to find their way around a maze, and took longer to remember solutions to problems they had already solved. It seems that some of the damage may be mediated through triglyceride, a blood fat found at high levels in rodents fed on trans-fats. When the researchers gave these rats a drug to bring triglyceride levels down again, the animals' performance on the memory tasks improved. Brains are around 60 per cent fat, so if trans-fats clog up the system, what should you eat to keep it well oiled? Evidence is mounting in favour of omega-3 fatty acids, in particular docosahexaenoic acid or DHA. In other words, your granny was right: fish is the best brain food. Not only will it feed and lubricate a developing brain, DHA also seems to help stave off dementia. Studies published last year reveal that older mice from a strain genetically altered to develop Alzheimer's had 70 per cent less of the amyloid plaques associated with the disease when fed on a high-DHA diet. Finally, you could do worse than finish off your evening meal with strawberries and blueberries. Rats fed on these fruits have shown improved coordination, concentration and short-term memory.
And even if they don't work such wonders in people, they still taste fantastic. So what have you got to lose? The Mozart effect Music may tune up your thinking, but you can't just crank up the volume and expect to become a genius A DECADE ago Frances Rauscher, a psychologist now at the University of Wisconsin at Oshkosh, and her colleagues made waves with the discovery that listening to Mozart improved people's mathematical and spatial reasoning. Even rats ran mazes faster and more accurately after hearing Mozart than after white noise or music by the minimalist composer Philip Glass. Last year, Rauscher reported that, for rats at least, a Mozart piano sonata seems to stimulate activity in three genes involved in nerve-cell signalling in the brain. This sounds like the most harmonious way to tune up your mental faculties. But before you grab the CDs, hear this note of caution. Not everyone who has looked for the Mozart effect has found it. What's more, even its proponents tend to think that music boosts brain power simply because it makes listeners feel better - relaxed and stimulated at the same time - and that a comparable stimulus might do just as well. In fact, one study found that listening to a story gave a similar performance boost. There is, however, one way in which music really does make you smarter, though unfortunately it requires a bit more effort than just selecting something mellow on your iPod. Music lessons are the key. Six-year-old children who were given music lessons, as opposed to drama lessons or no extra instruction, got a 2 to 3-point boost in IQ scores compared with the others. Similarly, Rauscher found that after two years of music lessons, pre-school children scored better on spatial reasoning tests than those who took computer lessons. Maybe music lessons exercise a range of mental skills, with their requirement for delicate and precise finger movements, and listening for pitch and rhythm, all combined with an emotional dimension. 
Nobody knows for sure. Neither do they know whether adults can get the same mental boost as young children. But, surely, it can't hurt to try. Bionic brains If training and tricks seem too much like hard work, some technological short cuts can boost brain function Gainful employment Put your mind to work in the right way and it could repay you with an impressive bonus UNTIL recently, a person's IQ - a measure of all kinds of mental problem-solving abilities, including spatial skills, memory and verbal reasoning - was thought to be a fixed commodity largely determined by genetics. But recent hints suggest that a very basic brain function called working memory might underlie our general intelligence, opening up the intriguing possibility that if you improve your working memory, you could boost your IQ too. Working memory is the brain's short-term information storage system. It's a workbench for solving mental problems. For example, if you calculate 73 - 6 + 7, your working memory will store the intermediate steps necessary to work out the answer. And the amount of information that the working memory can hold is strongly related to general intelligence. A team led by Torkel Klingberg at the Karolinska Institute in Stockholm, Sweden, has found signs that the neural systems that underlie working memory may grow in response to training. Using functional magnetic resonance imaging (fMRI) brain scans, they measured the brain activity of adults before and after a working-memory training programme, which involved tasks such as memorising the positions of a series of dots on a grid. After five weeks of training, their brain activity had increased in the regions associated with this type of memory (Nature Neuroscience, vol 7, p 75).
Perhaps more significantly, when the group studied children who had completed these types of mental workouts, they saw improvement in a range of cognitive abilities not related to the training, and a leap in IQ test scores of 8 per cent (Journal of the American Academy of Child and Adolescent Psychiatry, vol 44, p 177). It's early days yet, but Klingberg thinks working-memory training could be a key to unlocking brain power. "Genetics determines a lot and so does the early gestation period," he says. "On top of that, there is a few per cent - we don't know how much - that can be improved by training." Memory marvels Mind like a sieve? Don't worry. The difference between mere mortals and memory champs is more method than mental capacity AN AUDITORIUM is filled with 600 people. As they file out, they each tell you their name. An hour later, you are asked to recall them all. Can you do it? Most of us would balk at the idea. But in truth we're probably all up to the task. It just needs a little technique and dedication. First, learn a trick from the "mnemonists" who routinely memorise strings of thousands of digits, entire epic poems, or hundreds of unrelated words. When Eleanor Maguire from University College London and her colleagues studied eight front runners in the annual World Memory Championships they did not find any evidence that these people have particularly high IQs or differently configured brains. But, while memorising, these people did show activity in three brain regions that become active during movements and navigation tasks but are not normally active during simple memory tests. This may be connected to the fact that seven of them used a strategy in which they place items to be remembered along a visualised route (Nature Neuroscience, vol 6, p 90). 
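The route strategy described above is, at bottom, an ordered pairing of items with stops along a familiar path: to store, you attach each item to the next stop; to recall, you walk the route again and let each stop cue its item. A minimal sketch of that pairing (the route, items and function names here are invented for illustration, not taken from the study):

```python
def memorise(route, items):
    """Pair each item, in order, with the next stop on a well-known route."""
    if len(items) > len(route):
        raise ValueError("The route needs at least as many stops as there are items")
    return list(zip(route, items))


def recall(memory_palace):
    """Walk the route again; each stop cues the item attached to it."""
    return [item for _stop, item in memory_palace]


# A familiar route through one's home, and a few cards to remember.
route = ["front door", "hallway mirror", "staircase", "kitchen table"]
items = ["ace of spades", "seven of hearts", "queen of clubs"]

palace = memorise(route, items)
print(recall(palace))  # the original items, back in their original order
```

In practice the "stops" are vivid mental images rather than strings, but the ordering trick is the same: because the route itself is already overlearned, revisiting it retrieves the attached items in sequence for free.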
To remember the sequence of an entire pack of playing cards for example, the champions assign each card an identity, perhaps an object or person, and as they flick through the cards they can make up a story based on a sequence of interactions between these characters and objects at sites along a well-trodden route. Actors use a related technique: they attach emotional meaning to what they say. We always remember highly emotional moments better than less emotionally loaded ones. Professional actors also seem to link words with movement, remembering action-accompanied lines significantly better than those delivered while static, even months after a show has closed. Helga Noice, a psychologist from Elmhurst College in Illinois, and Tony Noice, an actor, who together discovered this effect, found that non-thesps can benefit by adopting a similar technique. Students who paired their words with previously learned actions could reproduce 38 per cent of them after just 5 minutes, whereas rote learners only managed 14 per cent. The Noices believe that having two mental representations gives you a better shot at remembering what you are supposed to say. Strategy is important in everyday life too, says Barry Gordon from Johns Hopkins University in Baltimore, Maryland. Simple things like always putting your car keys in the same place, writing things down to get them off your mind, or just deciding to pay attention, can make a big difference to how much information you retain. And if names are your downfall, try making some mental associations. Just remember to keep the derogatory ones to yourself. Sleep on it Never underestimate the power of a good night's rest SKIMPING on sleep does awful things to your brain. Planning, problem-solving, learning, concentration,working memory and alertness all take a hit. IQ scores tumble. 
"If you have been awake for 21 hours straight, your abilities are equivalent to someone who is legally drunk," says Sean Drummond from the University of California, San Diego. And you don't need to pull an all-nighter to suffer the effects: two or three late nights and early mornings on the trot have the same effect. Luckily, it's reversible - and more. If you let someone who isn't sleep-deprived have an extra hour or two of shut-eye, they perform much better than normal on tasks requiring sustained attention, such as taking an exam. And being able to concentrate harder has knock-on benefits for overall mental performance. "Attention is the base of a mental pyramid," says Drummond. "If you boost that, you can't help boosting everything above it." These are not the only benefits of a decent night's sleep. Sleep is when your brain processes new memories, practises and hones new skills - and even solves problems. Say you're trying to master a new video game. Instead of grinding away into the small hours, you would be better off playing for a couple of hours, then going to bed. While you are asleep your brain will reactivate the circuits it was using as you learned the game, rehearse them, and then shunt the new memories into long-term storage. When you wake up, hey presto! You will be a better player. The same applies to other skills such as playing the piano, driving a car and, some researchers claim, memorising facts and figures. Even taking a nap after training can help, says Carlyle Smith of Trent University in Peterborough, Ontario. There is also some evidence that sleep can help produce moments of problem-solving insight. The famous story about the Russian chemist Dmitri Mendeleev suddenly "getting" the periodic table in a dream after a day spent struggling with the problem is probably true. It seems that sleep somehow allows the brain to juggle new memories to produce flashes of creative insight.
So if you want to have a eureka moment, stop racking your brains and get your head down. Body and mind Physical exercise can boost brain as well as brawn IT'S a dream come true for those who hate studying. Simply walking sedately for half an hour three times a week can improve abilities such as learning, concentration and abstract reasoning by 15 per cent. The effects are particularly noticeable in older people. Senior citizens who walk regularly perform better on memory tests than their sedentary peers. What's more, over several years their scores on a variety of cognitive tests show far less decline than those of non-walkers. Every extra mile a week has measurable benefits. It's not only oldies who benefit, however. Angela Balding from the University of Exeter, UK, has found that schoolchildren who exercise three or four times a week get higher than average exam grades at age 10 or 11. The effect is strongest in boys, and while Balding admits that the link may not be causal, she suggests that aerobic exercise may boost mental powers by getting extra oxygen to your energy-guzzling brain. There's another reason why your brain loves physical exercise: it promotes the growth of new brain cells. Until recently, received wisdom had it that we are born with a full complement of neurons and produce no new ones during our lifetime. Fred Gage from the Salk Institute in La Jolla, California, busted that myth in 2000 when he showed that even adults can grow new brain cells. He also found that exercise is one of the best ways to achieve this. In mice, at least, the brain-building effects of exercise are strongest in the hippocampus, which is involved with learning and memory. This also happens to be the brain region that is damaged by elevated levels of the stress hormone cortisol. So if you are feeling frazzled, do your brain a favour and go for a run. Even more gentle exercise, such as yoga, can do wonders for your brain. 
Last year, researchers at the University of California, Los Angeles, reported results from a pilot study in which they considered the mood-altering ability of different yoga poses. Comparing back bends, forward bends and standing poses, they concluded that the best way to get a mental lift is to bend over backwards. And the effect works both ways. Just as physical exercise can boost the brain, mental exercise can boost the body. In 2001, researchers at the Cleveland Clinic Foundation in Ohio asked volunteers to spend just 15 minutes a day thinking about exercising their biceps. After 12 weeks, their arms were 13 per cent stronger. Nuns on a run If you don't want senility to interfere with your old age, perhaps you should seek some sisterly guidance THE convent of the School Sisters of Notre Dame on Good Counsel Hill in Mankato, Minnesota, might seem an unusual place for a pioneering brain-science experiment. But a study of its 75 to 107-year-old inhabitants is revealing more about keeping the brain alive and healthy than perhaps any other to date. The "Nun study" is a unique collaboration between 678 Catholic sisters recruited in 1991 and Alzheimer's expert David Snowdon of the Sanders-Brown Center on Aging and the University of Kentucky in Lexington. The sisters' miraculous longevity - the group boasts seven centenarians and many others well on their way - is surely in no small part attributable to their impeccable lifestyle. They do not drink or smoke, they live quietly and communally, they are spiritual and calm and they eat healthily and in moderation. Nevertheless, small differences between individual nuns could reveal the key to a healthy mind in later life. Some of the nuns have suffered from Alzheimer's disease, but many have avoided any kind of dementia or senility. They include Sister Matthia, who was mentally fit and active from her birth in 1894 to the day she died peacefully in her sleep, aged 104. 
She was happy and productive, knitting mittens for the poor every day until the end of her life. A post-mortem of Sister Matthia's brain revealed no signs of excessive ageing. But in some other, remarkable cases, Snowdon has found sisters who showed no outward signs of senility in life, yet had brains that looked as if they were ravaged by dementia. How did Sister Matthia and the others cheat time? Snowdon's study, which includes an annual barrage of mental agility tests and detailed medical exams, has found several common denominators. The right amount of the B vitamin folate is one. Verbal ability early in life is another, as are positive emotions early in life, which were revealed by Snowdon's analysis of the personal autobiographical essays each woman wrote in her 20s as she took her vows. Activities, crosswords, knitting and exercising also helped to prevent senility, showing that the old adage "use it or lose it" is pertinent. And spirituality, or the positive attitude that comes from it, can't be overlooked. But individual differences also matter. To avoid dementia, your general health may be vital: metabolic problems, small strokes and head injuries seem to be common triggers of Alzheimer's dementia. Obviously, you don't have to become a nun to stay mentally agile. We can all aspire to these kinds of improvements. As one of the sisters put it, "Think no evil, do no evil, hear no evil, and you will never write a best-selling novel." Attention seeking You can be smart, well-read, creative and knowledgeable, but none of it is any use if your mind isn't on the job PAYING attention is a complex mental process, an interplay of zooming in on detail and stepping back to survey the big picture. So unfortunately there is no single remedy to enhance your concentration. But there are a few ways to improve it. The first is to raise your arousal levels. The brain's attentional state is controlled by the neurotransmitters dopamine and noradrenalin.
Dopamine encourages a persistent, goal-centred state of mind whereas noradrenalin produces an outward-looking, vigilant state. So not surprisingly, anything that raises dopamine levels can boost your powers of concentration. One way to do this is with drugs such as amphetamines and the ADHD drug methylphenidate, better known as Ritalin. Caffeine also works. But if you prefer the drug-free approach, the best strategy is to sleep well, eat foods packed with slow-release sugars, and take lots of exercise. It also helps if you are trying to focus on something that you find interesting. The second step is to cut down on distractions. Workplace studies have found that it takes up to 15 minutes to regain a deep state of concentration after a distraction such as a phone call. Just a few such interruptions and half the day is wasted. Music can help as long as you listen to something familiar and soothing that serves primarily to drown out background noise. Psychologists also recommend that you avoid working near potential diversions, such as the fridge. There are mental drills to deal with distractions. College counsellors routinely teach students to recognise when their thoughts are wandering, and catch themselves by saying "Stop! Be here now!" It sounds corny but can develop into a valuable habit. As any Zen meditator will tell you, concentration is as much a skill to be lovingly cultivated as it is a physiochemical state of the brain. Positive feedback Thought control is easier than you might imagine IT SOUNDS a bit New Age, but there is a mysterious method of thought control you can learn that seems to boost brain power. No one quite knows how it works, and it is hard to describe exactly how to do it: it's not relaxation or concentration as such, more a state of mind. It's called neurofeedback. And it is slowly gaining scientific credibility. Neurofeedback grew out of biofeedback therapy, popular in the 1960s. 
It works by showing people a real-time measure of some seemingly uncontrollable aspect of their physiology - heart rate, say - and encouraging them to try and change it. Astonishingly, many patients found that they could, though only rarely could they describe how they did it. More recently, this technique has been applied to the brain - specifically to brain wave activity measured by an electroencephalogram, or EEG. The first attempts were aimed at boosting the size of the alpha wave, which crescendos when we are calm and focused. In one experiment, researchers linked the speed of a car in a computer game to the size of the alpha wave. They then asked subjects to make the car go faster using only their minds. Many managed to do so, and seemed to become more alert and focused as a result. This early success encouraged others, and neurofeedback soon became a popular alternative therapy for ADHD. There is now good scientific evidence that it works, as well as some success in treating epilepsy, depression, tinnitus, anxiety, stroke and brain injuries. And to keep up with the times, some experimenters have used brain scanners in place of EEGs. Scanners can allow people to see and control activity of specific parts of the brain. A team at Stanford University in California showed that people could learn to control pain by watching the activity of their pain centres (New Scientist, 1 May 2004, p 9). But what about outside the clinic? Will neurofeedback ever allow ordinary people to boost their brain function? Possibly. John Gruzelier of Imperial College London has shown that it can improve medical students' memory and make them feel calmer before exams. He has also shown that it can improve musicians' and dancers' technique, and is testing it out on opera singers and surgeons. Niels Birbaumer from the University of Tübingen in Germany wants to see whether neurofeedback can help psychopathic criminals control their impulsiveness.
And there are hints that the method could boost creativity, enhance our orgasms, give shy people more confidence, lift low moods, alter the balance between left and right brain activity, and alter personality traits. All this by the power of thought. From checker at panix.com Sat May 28 19:34:07 2005 From: checker at panix.com (Premise Checker) Date: Sat, 28 May 2005 15:34:07 -0400 (EDT) Subject: [Paleopsych] Financial Times: The most dangerous idea on earth? Message-ID: The most dangerous idea on earth? http://news.ft.com/cms/s/c7eb8502-cda3-11d9-9a8a-00000e2511c8.html 5.5.27 By Stephen Cave and Friederike von Tiesenhausen Cave It is easy to see how you could be tempted. It might start with genetically screening your children for a lower risk of a hereditary cancer. Or perhaps with a pill that promised to keep your memory fresh and clear into old age. But what if, while you were having your future children engineered to be cancer-free, you were offered the chance to make them musically gifted? Or, if instead of taking a memory-enhancing pill, you were offered a neural implant that would instantly make you fluent in all the world's languages? Or cleverer by half? Wouldn't it be difficult to say no? And what if you were offered a whole new body - one that would never decay or grow old? A growing number of people believe these will be the fruits of the revolutions in biotechnology expected this century. And they consider it every individual's right to take advantage of these changes. They think it will soon be within our reach to become something more than human - healthier, stronger, cleverer. All we have to do is live long enough to be around when science makes these advances. If we are, then we may just live forever. This idea, known as transhumanism, is steadily spreading from a handful of cranks and Star Trek fans into the mainstream and across the Atlantic.
But it is an idea that Francis Fukuyama, famed for proclaiming the end of history when US-style liberal democracy triumphed in the cold war, has described as the most dangerous in the world. In a world at war with terrorism, divided by religious fundamentalism and haunted by racism, sexism and countless other prejudices, how is it that transhumanism has earned the hotly contested title of the most dangerous idea on earth? According to Nick Bostrom's "The Transhumanist FAQ", transhumanists believe "that the human species in its current form does not represent the end of our development but rather a comparatively early phase". With the help of technology, we will be able to enhance our capacities far beyond their present state. It will be within our reach not only to live longer, but to live better. Bostrom, a lecturer at the University of Oxford and the intellectual spearhead of the transhumanist movement in the UK, sees it as the natural extension of humanism - the belief that we can improve our lot through the application of reason. In the past, humanism has relied on education and democratic institutions to improve the human condition. But in the future, Bostrom claims, "we can also use technological means that will eventually enable us to move beyond what some would think of as 'human'". Transhumanists are utopians. They foresee a world in which our intellects will be as far above those of our current selves as we are now above chimpanzees. They dream of being impervious to disease and eternally youthful, of controlling their moods, never feeling tired or irritated, and of being able to experience pleasure, love and serenity beyond anything the human mind can currently imagine. But dreams of eternal youth are as old as mankind and no dreamer has yet escaped the grave. Why transhumanists believe they are different - and why Fukuyama considers them so dangerous - is because their hopes are based on technologies that are already being developed. 
Around the world, there is a growing number of patients who are being helped through the insertion of electrodes and microchips into their brains. These "brain-computer interfaces" are returning sight to the blind and hearing to the deaf. They are even enabling the completely paralysed to control computers using only their thoughts. According to computer scientist and writer Ramez Naam, it is only a matter of time before we can plug these interfaces into the higher brain functions. We will then be able to use them not only to heal but to enhance our mental abilities. Naam foresees a world in which we can do away with paraphernalia such as keyboards, accessing the enormous power of computers using our thoughts alone. It is the stuff of comic books: he predicts super-normal senses, X-ray vision, and sending e-mails just by thinking about it. We could lie in bed surfing the internet in our heads. In his new book, More Than Human, Naam pins down the defining belief of transhumanism: that there is no distinction between treatment and enhancement. Practically and morally, they are a continuum. In a breathless account, he details the astonishing advances in medicine over the past 20 years. And he shows how the same technologies that could cure Parkinson's or give sight to the blind could also transform the able-bodied. An ultra-liberal technophile, Naam gushes that "we are the prospective parents of new and unimaginable creatures". He is at his best when indulging his futurological visions, skipping through some of the trickier moral and social questions. He prophesies a revolution in human interaction whereby we can send pictures or even feelings direct into each other's brains and can read the thoughts of those too young, stubborn or sulky to communicate. Extrapolating from technologies that are already being developed, he argues that there will come a time when we are all linked together through a single worldwide mind. 
In the self-consciously sober prose of the Transhumanist FAQ, a free online publication found on the World Transhumanist Association's website (http://transhumanism.org/index.php/WTA/faq), Bostrom describes a yet more radical dream: that the integration of brains and computers will one day enable us to leave the confines of our grey matter altogether. The ultimate escape from the deterioration that flesh is prone to would be to have our minds "uploaded" on to new bodies made of silicon. Our new metal brains would be composed of supercomputers that would run our thought processes many times faster than their fleshy equivalents. We could even make back-ups of our minds and have ourselves reloaded in the event of emergencies. The FAQ also pins the hopes of transhumanists on areas of research which are now only in their infancy, such as nanotechnology. Theorists believe that one day nanotechnology will enable us to build complex objects atom by atom. These nanotech "assemblers" would work like computer printers but in three dimensions. Just as a machine now will print out whatever we ask it to in two dimensions, in the future, these assemblers will, like a magic lamp, instantly create whatever we ask - anything from diamond rings to three-course dinners. The holy grail of nanotechnology is to use it to help us live longer and healthier lives. With the ability to move atoms and molecules around, it will be possible to destroy tumours and rebuild cell walls and membranes. Ultimately, all diseases can be seen as the result of certain atoms being in the wrong place and therefore could be curable by nanotech intervention. Transhumanists also foresee nanotechnology contributing to a second scientific revolution this century - the development of superintelligence. We will one day be able to build computers that can radically outperform the human brain.
These superintelligent systems will not only be able to do sums faster than we can, but could be wiser, funnier and more creative. As the FAQ puts it, they "may be the last invention that humans will ever need to make, since superintelligences could themselves take care of further scientific and technological development". But even the most optimistic of transhumanists recognises that not all of these breakthroughs will happen tomorrow. So in order to be around to see this new dawn, many of them are investing in expensive insurance policies. For a few thousand pounds, you can ensure that as soon as you are declared dead, your body will be flown to one of the US's growing number of cryonics institutes. There your cadaver will be frozen in liquid nitrogen and thawed only when medical technology is capable of undoing the ravages of whichever disease caused your demise. Needless to say, cryonics may not work - currently, the technology does not exist to reverse the damage caused by freezing, let alone lethal cancers. But there is no question that it will improve the odds of a comeback compared with the conventional alternative: rotting in a grave. As Bostrom puts it, "cryonics is the second worst thing that can happen to you." The more laborious approach to sticking around long enough to become transhuman involves changing to a radically healthier lifestyle. In Fantastic Voyage: Live Long Enough to Live Forever, published in the UK this month, inventor and futurist Ray Kurzweil and physician Terry Grossman offer a 450-page step-by-step guide to achieving immortality. Like Bostrom and Naam, Kurzweil and Grossman are wowed by the potential of new technologies such as genetic engineering and artificial intelligence, and they sketch the ways in which they might add to the human life span. But for the ageing baby boomer generation to which they belong, keeping going long enough to reap these benefits is a real and pressing concern. 
The bulk of their book is therefore dedicated to a detailed compilation of cutting-edge health advice. Although many of their recommendations - such as to eat more veg and take more exercise - are the stuff of all our New Year's resolutions, others are not for the half-hearted. They prescribe a regime of "aggressive supplementation" which would transform any kitchen into a pharmacy. For some vitamins they advocate between ten and 100 times the current recommended daily allowance. But despite its extraordinary ambitions, Fantastic Voyage is serious and extensively researched. Combined with the boldness of its prescriptions, this puts it in a league above most other health books on the shelf. There is a long and colourful history of those who have striven for physical immortality, from the advocates of ingesting precious metals to the supporters of pickling oneself in wine. The one thing these advocates have in common is that they are now all 6ft under. To many, transhumanism will seem a continuation of this age-old and egoistic quest, updated with the modish language of science fiction. But to transhumanists it is a mission to save the world. Every week, one million people die on this planet. So instead of bans and moratoria, transhumanists want to see greater investment in the kind of research that could make death through disease and old age entirely avoidable. In Kurzweil and Grossman's words, "even minor delays will result in the suffering and death of millions of people." For them, this makes it a moral imperative. Fukuyama disagrees. He counsels humility before meddling with human nature. In last September's Foreign Policy magazine article, when he labelled transhumanism the world's most dangerous idea, he argued that "the seeming reasonableness of the project, particularly when considered in increments, is part of its danger." 
We might not all buy the fruits of transhumanism wholesale, but "it is very possible that we will nibble at biotechnology's tempting offerings without realising that they come at a frightful moral cost." In his sophisticated and deeply researched book Our Posthuman Future, Fukuyama expands his case, arguing for caution on two main grounds. First, he believes the transhumanist ideal is a threat to equality of rights. Underlying the idea of universal human rights, he argues, is the belief in a universal human essence. The aim of transhumanism is to change that essence. What rights may superintelligent immortals claim for themselves? "What will happen to political rights once we are able to, in effect, breed some people with saddles on their backs, and others with boots and spurs?" Fukuyama's second argument is based on what he calls the miraculous complexity of human beings. After hundreds of thousands of years of evolution, we cannot so easily be unpicked into good qualities and bad. "If we weren't violent and aggressive," he argues, "we wouldn't be able to defend ourselves; if we didn't have feelings of exclusivity, we wouldn't be loyal to those close to us; if we never felt jealousy, we would also never feel love." Fukuyama's answer to the threat of transhumanism is straightforward: stringent regulation. Despite the current deregulatory mood in America, his views chime with those of the anti-abortion right, a core constituency of the Bush administration. When President George W. Bush first came to power, he set up his Council on Bioethics to, as he put it, "help people like me understand what the terms mean and how to come to grips with how medicine and science interface with the dignity of the issue of life and the dignity of life, and the notion that life is - you know, that there is a Creator". 
Members of the president's Council on Bioethics, on which Fukuyama sits, are widely credited with crafting Bush's stem cell policy, which saw a ban on federal funding for research on new stem cell lines. This propelled the question of regulating biotechnology to the top of the political agenda. During the Democratic Party Convention last year, presidential candidate John Kerry mentioned stem cell research more often than unemployment. Much of the transhumanist literature has been written in response to Fukuyama's book and the edicts of the president's Council. Permeating their work is the sense that technologically they are advancing steadily, but politically the bio-conservatives are holding the centre ground. They therefore oscillate between proselytising the good news that technology is soon to free us from the bonds of mortality and plaintively arguing for the right to use this technology as they see fit. In Citizen Cyborg, James Hughes maps what he sees as these emerging parties in bio-politics and their relationship to the ideologies and isms of the 20th century. A transhumanist, he nonetheless believes it is possible to find a middle way between the libertarians who advocate a technological free-for-all and the bio-conservatives who want the lot banned. He places himself within the traditions of both liberal and social democracy, arguing that "transhumanist technologies can radically improve our quality of life, and that we have a fundamental right to use them to control our bodies and minds. But to ensure these benefits we need to democratically regulate these technologies and make them equally available in free societies." Contrary to Fukuyama, Hughes does not believe that the biotech wonders of the transhumanist era will create new elites. 
He argues that they could even strengthen equality by empowering those who are currently downtrodden: "a lot of social inequality is built on a biological foundation and enhancement technology makes it possible to redress that." But despite his support for some regulation of transhumanist inventions, Hughes, like Naam, is unrelentingly technophile. At times this becomes a naive utopianism, such as when he claims that "technology is about to make possible the elimination of pain and lives filled with unimaginable pleasure and contentment." He rightly argues that in Our Posthuman Future, Fukuyama "treats every hypothetically negative consequence from the use of technology with great gravity, while dismissing as hype all the possible benefits". Unfortunately, he does not always recognise when he is mirroring that very mistake. The biotechnology revolution has caused Fukuyama to revise his contention that we have reached the end of history - history rolls on, but driven by scientists instead of kings. What all these writers have in common is the firm belief that the biotech era will shake up the old political allegiances and create new dividing lines. On one side will be those who believe such meddling unnatural and unwise. On the other, those who want to take the offerings of the biotech revolution and become something more than human. Won't you be tempted? 
OUR POSTHUMAN FUTURE by Francis Fukuyama Profile Books £8.99, 256 pages MORE THAN HUMAN by Ramez Naam Broadway Books £24.95, 288 pages FANTASTIC VOYAGE by Ray Kurzweil and Terry Grossman Rodale £17.99, 452 pages CITIZEN CYBORG by James Hughes Westview Press $26.95, 294 pages From checker at panix.com Sat May 28 19:34:47 2005 From: checker at panix.com (Premise Checker) Date: Sat, 28 May 2005 15:34:47 -0400 (EDT) Subject: [Paleopsych] NYT: More Sex, Less 'Joy' Message-ID: More Sex, Less 'Joy' http://www.nytimes.com/2005/05/29/fashion/sundaystyles/29SEX.html By RUTH LA FERLA AT the Barnes & Noble on Union Square in Manhattan, just a few steps across the aisle from Self-Improvement and Relationships, the bookshelves groan with that venerable publishing genre, the sex manual. But to pull a recent example from its perch is to enter a world of steamy provocation that readers of a previous generation could not have imagined. There is, for instance, "The Lowdown on Going Down," with a sharp-focus photograph of a naked woman on the cover, thighs raised suggestively. Between the covers are 144 pages of explicit instructions for oral gymnastics. "Lowdown" is a title from Broadway Books, a subsidiary of the publishing giant Random House. The book, by Marcy Michaels and Marie DeSalle, is one of dozens of new entries published in the last year in the growing and increasingly racy genre of how-to sex books, which employ provocative titles and slang - sometimes vulgar - to capture new readers. Vying for space on the same shelves are "Hot Monogamy," "The Wild Guide to Sex" and "Mind-Blowing Sex." At least since 1972, when "The Joy of Sex" by Dr. Alex Comfort was published, with its self-consciously literary tone and section headings like "Mouth Music" and "Playtime," sex books - or marriage manuals, as they were once euphemistically called - have spiced up their contents to keep pace with the times. 
Now the old textbookish tomes like "Joy of Sex," which invited readers to expand their horizons beyond the face-to-face missionary position, have been replaced by shiny paperbacks extolling the excitement that could come from oral sex, anal sex, fetishism and S&M. Couples who were formerly portrayed in a modest embrace are now shown to reveal full penetration. Careful, scholarly, sometimes clinical language has been replaced by chatty girlfriend-speak that might have been ghostwritten by Samantha Jones, the outspoken and sexually ravenous publicist of "Sex and the City." Those in the business of publishing such books say the evolution has accelerated, fueled by the need to seem relevant in an increasingly sexualized culture. "The generation we're publishing for today is much more open about terminology and much more forthright," said Bryce Willett, the sales marketing manager of Ulysses Press in Berkeley, Calif., which publishes "The Little Bit Naughty Book of Sex Positions" and the "Wild Guide to Sex and Loving." "They're used to hearing 'Sex and the City' dialogue and aren't scared or squeamish about language and topics that in an earlier era would have caused them to drop their voices or switch to a really careful tone," Mr. Willett said. Even "The Joy of Sex," an indisputable franchise, which spent years on the New York Times best-seller list after it was published and was so racy for its time that it was banned in libraries in some cities, has had to adapt. While the current edition, fully revised in 2002 by Crown Publishers, still retains the allusions to Darwin and Freud originally written by Dr. Comfort (a trained biologist), some references to the female anatomy are now rendered as slang. In addition the charcoal drawings of intertwined couples are more erotically charged. 
"People are a lot more accepting of a broader range of sexual vernacular now," said Steve Ross, the publisher of Crown, about the updated version, which he said was edited to be more colloquial and direct than the original. The revival and boomlet of sex guides owes a debt in part to Judith Regan of ReganBooks, the publisher of "How to Have a XXX Sex Life," "How to Make Love Like a Porn Star" and "She Comes First" (2004), a sprightly treatise on cunnilingus, which has been successful enough to spawn a sequel, "He Comes Next," due out in February. "She's gone out and found edgy people and had them write more mainstream stuff," said Charlotte Abbott, the book news editor of Publishers Weekly. "She opened the door to a more explicit kind of sex book." Ms. Regan, describing an earlier generation of sex manuals as "tame and antiseptic," decided to do better. The latest books, while still providing much the same information as their forebears, she said, are "more outrageous and candid and at the same time more fun and friendly, like Las Vegas." Thanks to the anonymous nature of Internet shopping, publishers say, the latest sex how-to books have found an expanding readership. "Sex guides are the subject of perennial and reliable interest," Mr. Ross noted. "But now that consumers can buy them without the traditional embarrassment, their growth has been explosive." He said that "203 Ways to Drive a Man Wild in Bed," for instance, has sold 325,000 copies. Women are the primary consumers of the new manuals, which, like "She Comes First," emphasize their enjoyment. "A lot of these books are about evening the score," Ms. Regan said. "They're saying, 'Hey guys, we need pleasure too.' " Publishers say there is no specific target demographic for the books, although feedback suggests that readers range from their 20's to their 60's. And though the books are written by both men and women, the women who write them tend to see a cause in what they are doing. 
Debra McLeod, co-author with her husband, Don, of "The French Maid: And 21 More Naughty Sex Fantasies to Surprise and Arouse Your Man" (Broadway), a collection of erotic fantasies published this year, said she wrote it mainly for women "because sex is now the domain of women," adding, "It is a woman's role to ensure a couple's sex life remains satisfying." Despite contents that seem to be ever pushing taboos - even including bestiality, in some volumes - publishers maintain that these are service books at heart, maybe even beneficial. "We're not publishing to shock," said Kristine Poupolo, a senior editor at Doubleday Broadway, whose current hits include "The Many Joys of Sex Toys" by Anne Semans. "I like to think we're improving people's lives." Some experts are skeptical. "You can promise the greatest sex in the history of the world, but that is not what most people want," said Dr. Marty Klein, a marriage and family counselor and a sex therapist in Palo Alto, Calif. Most couples, Dr. Klein continued, would happily settle for the simpler pleasures of closeness and affection. "A book called 'How to Get Your Wife to Hug You a Little Bit More' or 'How to Get Your Husband to Slow Down and Caress Your Hair and Love Doing It,' now those are books that would change people's lives." But the new sex manuals give relatively short shrift to intimacy and lasting connection. While "The Joy of Sex" includes an introduction asserting that it is above all about love, and also has a section on tenderness, its descendants stress experimentation and proficiency. "Try going through each other's wardrobes; why not see what you'd look like in each other's clothes," suggests Paul Scott, the author of "Mind-Blowing Sex." Then there are certain calisthenics for the mouth that seem to require as much practice as learning to play the oboe. 
One manual from Ulysses Press, whose title itself is vulgar, inducts readers into the arcana of sadomasochistic games, complete with props like paddles, handcuffs and video cameras. "If you want to make a Victorian porn film, simply turn the dial to sepia," the author, Flic Everett, suggests. Little is known about whether the new sex books have altered attitudes and approaches to human sexuality. "With the earlier manuals there was some research," said Dr. Julia Heiman, the director of the Kinsey Institute for Research in Sex, Gender and Reproduction. "We had some evidence at least that they effected changes in sexual functioning." Dr. Heiman added that no similar studies have recently appeared. "But that's what deserves to happen if we are to figure out whether these things have a positive impact on sexual health," she said. To some readers sexual health may be beside the point. Mr. Willett of Ulysses Press said that titles like "The Wild Guide to Sex and Loving" sold better in the Bible Belt than in markets like New York. The books are "explicit but not pornographic," he said. "In areas where people have a limited access to pornography these books satisfy a need." As the sex books become ever more steamy, some publishers, even the more venturesome, are already thinking of backing away. "There are still places you can go with these books," Ms. Regan suggested, "but I don't want to go there." "Social regulation, courtship, flowers, romance, those are things that seem newer right now," she added. "Maybe the only place to go is to get prudish again." 
From checker at panix.com Sat May 28 19:35:14 2005 From: checker at panix.com (Premise Checker) Date: Sat, 28 May 2005 15:35:14 -0400 (EDT) Subject: [Paleopsych] Telegraph: (Eco) Heavyweight champion Message-ID: Heavyweight champion http://www.telegraph.co.uk/arts/main.jhtml;sessionid=QORZVBZX5JRKZQFIQMGSM54AVCBQWJVC?xml=/arts/2005/05/24/boeco22.xml&sSheet=/arts/2005/05/24/ixartleft.html&secureRefresh=true&_requestid=33594 (Filed: 24/05/2005) Umberto Eco has made a name - and fortune - for himself in the role of thinking man to the masses. Not that we understand what he is going on about most of the time. Nigel Farndale asks him to explain himself 'Mooo! Mooo!' Umberto Eco says by way of opening when I meet him in his high-ceilinged apartment overlooking the piazza Castello in Milan. 'I'm supposed to do this exercise for my throat,' the 73-year-old Italian philosopher and novelist explains. 'Mooo! Mooo! I had an operation on my vocal chords and am still recovering.' I tell him I will understand if he needs to rest his voice during our interview, or indeed if he needs to moo from time to time. Though he has a paunch and unexpectedly small, geisha-like feet, Eco has an energetic stride - as I discover when he leads the way along a winding corridor and I try to keep up with him. We pass through a labyrinthine library containing 30,000 books - he has a further 20,000 at his 17th-century palazzo near Urbino - and into a drawing-room full of curiosities: a glass cabinet containing seashells, rare comics and illustrated children's books, a classical sculpture of a nude man with his arms missing, a jar containing a pair of dog's testicles, a lute, a banjo, a collection of recorders, and a collage of paintbrushes by his friend the Pop artist Arman. Although Eco is still best known for his first novel, The Name of the Rose (1980), a medieval murder mystery that sold ten million copies, it is as an academic that he would like to be remembered. 
He has been a professor at Bologna, the oldest university in Europe, for more than 30 years. He has also lectured at Harvard, Yale, Cambridge and numerous other famous universities and, to fill in the rest of his time, writes cerebral essays on uncerebral subjects ranging from football to pornography and coffee pots. He is one of the fathers of postmodern literary criticism - the general gist of his approach being that it doesn't matter what an author intends to say, readers are entitled to interpret works of literature in any way they choose. He was also a pioneer of semiotics, the study of culture as a web of signs and messages to be decoded for hidden meaning. Doesn't it drive him mad, always seeing meaning where others just see things? 'It does become a habit, but you are not obliged to be on duty at every moment,' he says in his heavy Italian accent. 'If I drink a glass of scotch I am thinking only of the scotch; I am not thinking about what the brand of scotch I am drinking says about my personality. I know what you mean, though, and I suppose the answer is that I am driven no more mad than a pianist who always has melodies in his head.' He strokes his beard as he says this, and I notice he wears his watch over his shirt cuff, with the face on the inside of his wrist. Is this meaningful? 'There are two practical reasons for it - one is that in my job I am obliged to attend a lot of symposia, which are frequently very boring. If I do this to check the time [he bends his arm], everybody notices. If I do it this way [he looks down at his watch without moving his wrist], I can check surreptitiously without showing it. 'As for the sleeve, that is because my watch-strap gives me eczema. So,' he says with a laugh, 'there is a meaning there, but not a terribly interesting one.' I see he is also chewing on a dummy cigarette. 'Yes, I gave up smoking five months ago. I find it helps to have something in my mouth. 
I like nicotine because it excites my brain and helps me work. In the first two months after quitting I couldn't work. I felt lazy. Then I tried nicotine patches.' He has, he says, smoked 60 a day for most of his adult life. Hasn't he left it a little late to start worrying about his health? 'Perhaps I am not as wise as I like to think I am.' His second novel, Foucault's Pendulum, took eight years to write. It was about three editors at a Milan publishing house trying to link every conspiracy theory in history, including that now famous one about the medieval Knights Templar and the secret of the Holy Grail. 'I know, I know,' he says with a laugh. 'My book included the plot for The Da Vinci Code. But I was not being a prophet. It was old occult material. It was already all there. I treated it in a more sceptical way than Dan Brown did. He had the excellent idea of treating it as if it were true. Millions of people believed him. They took it seriously, but it was all a hoax.' The Da Vinci Code is one of the few novels to have sold more than The Name of the Rose, I point out. Must be quite galling, that. He shrugs. Has he read it? 'Yes.' Did he like it? He shrugs again. 'It's a page-turner.' The Vatican was not keen on Foucault's Pendulum, by all accounts. Its official newspaper described it as being full of 'profanations, blasphemies, buffooneries and filth, held together by the mortar of arrogance and cynicism'. Even the late Pope condemned Eco personally as 'the mystifier deluxe'. Is it true he was all but excommunicated? 'No. The whole affair was nothing but an invention of the newspapers that needed to have an Italian Salman Rushdie.' Salman Rushdie, interestingly enough, described Foucault's Pendulum as 'humourless, devoid of character, entirely free of anything resembling a credible spoken word and mind-numbingly full of gobbledygook of all sorts'. 
Other writers, academics and critics, perhaps envious of the success of Eco's first novel, also put the boot in, accusing the author of wearing his learning too heavily. Was it all just professional jealousy, does he think? 'When I went from being an academic to being a member of the community of writers some of my former colleagues did look on me with a certain resentment. But not all, and it is only after my work as a novelist that I received 33 honorary degrees from universities around the world.' Many academics, I suggest, seem to have felt that Eco's main intellectual interest was in showing off. Is that fair? Is he an exhibitionist? 'I think every professor and writer is in some way an exhibitionist because his or her normal activity is a theatrical one. When you give a lesson the situation is the same as writing a book. You have to capture the attention, the complicity of your audience.' Even though Eco makes subjects such as metaphysics and semiotics relatively accessible through his playful prose, he must suspect that many of his ideas go over the heads of his millions of readers. I mean, if a clever chap like Salman Rushdie struggles with it, what hope do the rest of us have? He shrugs again. 'I write what I write.' Does he worry, though, that some people buy his books in order to impress their friends, but never actually read them? 'If some people are so weak that they buy my books because they are piled high in bookshops, and then do not understand them, that is not my fault. If people buy my books for vanity, I consider it a tax on idiocy.' Is he a vain man himself - intellectually, I mean? 'Obviously there is a pleasure in teaching because it is a way to keep you young. But I think a poet or philosopher writing a paper who doesn't hope that his work will last for 1,000 years is a fool. Anyway, intellectual vanity does not exclude humility. 
If you write a poem, you hope to be as good as Shakespeare, but you accept you probably won't be and that you will have much to learn. 'I would describe myself as an insecure optimist who is sensitive to criticism. I always fear to be wrong. Those who are always certain of their own work risk being idiots. Insecurity is a great force, apropos of teaching. The moment I start a new class I feel panic. If you don't feel panic, you cannot succeed.' It seems remarkable, given his success as a novelist, that he still teaches. 'My success obliged me to seek greater privacy, but that is the only real difference it has made to my life. It is difficult going to [film] premières, for example, because people want to interview me or hand me their manuscript. I continued with my life as a scholar, publishing academic books. There was a continual osmosis between my academic research and my novels.' His latest novel, The Mysterious Flame of Queen Loana, is about a rare-book dealer who loses his 'autobiographical' memory - he doesn't know his own name or recognise his wife - but still has his 'semantic' memory and so is able to quote from every book he has ever read. The hero is the same age as Eco and has had similar life experiences. There is, then, I presume, much of his own autobiography in this book. 'It is difficult for me to recognise it as autobiography because it is more the biography of a generation. But it is obvious I gave to the character a lot of my personal memories. The "historic" or "public" memories are from my private collection of memorabilia, from the Flash Gordon or Mickey Mouse cartoons of my youth. The illustrations I use in the book are all from my own collection, as displayed in that cabinet back there.' He directs a thumb over his shoulder. 'The character lived his childhood through books and cartoons, as did I. They dominated my life.' So cartoons are to him what the madeleine was to Proust, a trigger to memory? 'No. 
I had to fight against Proust in this book. If you write a novel about memory, you have to. So I did the contrary of the great Proust. He went inside himself to retrieve senses, smells and memories. My hero does the opposite because he is only confronted with the external memories, public memories which a whole generation shared.' At one point in the book the hero remembers fighting with the Resistance during the war. Although he was only a teenager, Eco did something akin to this, having first been a committed Fascist. 'In 1942, at the age of ten, I received the First Provincial Award of Ludi Juveniles - a compulsory competition for young Italian Fascists, that is for every young Italian. I elaborated with rhetorical skill on the subject, "Should we die for the glory of Mussolini and the immortal destiny of Italy?" My answer was positive. I was a smart boy.' He recalls being proud of his Fascist youth uniform. 'I spent the following two years among the Germans - Fascists and partisans shooting at one another - and I learnt how to dodge a bullet. It was good exercise.' Can he recall exactly when he became disillusioned with Mussolini? He gives the question a contemplative nod before answering. 'There were two letters I wrote nine months apart. I found them when I was doing research for this book. In the first, which I wrote when I was ten, I was, rhetorically at least, a fanatical Fascist. You see, as a child I was exposed every day to the propaganda. It was like a religion. Saying I didn't believe in Mussolini would have been as shocking as saying I didn't believe in God. I was born under him - I never knew anything else. I loved him. It would have been perverse if I hadn't. In the second letter nine months later I had become sceptical and disillusioned. I tried to work out what had happened in between. 
It might have been that I was no longer optimistic about the outcome of the war, but more likely it was to do with the radio and with reading American cartoon books. I did research and remembered that at the same time as we were hearing official Fascist songs on Italian radio we also began listening to silly humorous songs on Radio Free London - we were learning about everyday life elsewhere. I began to fall in love with the idea of Englishness. I began to read about Jeeves and Bertie.' Umberto Eco was born in Alessandria, a medieval fortress city in the Po valley in northern Italy. His grandfather was a typographer and a committed socialist who organised strikes. His father was an office clerk for a manufacturer of iron bathtubs. He describes his family as being 'petit bourgeois'. Did his father have aspirations to be an intellectual? 'He never had the chance. He was the first child of a family of 13. They were poor. My father left school early and went to work. But he was a voracious reader and went to the book kiosks and read books there so he didn't have to pay for them. When they chased him off he would simply go to another kiosk.' His father died of a heart attack in 1962, and his mother died ten years later. 'My father didn't want me to be a philosopher, he wanted me to be a lawyer,' Eco says. 'But he accepted my decision when I enrolled at Turin university. It was important for me to show him it could be a fruitful experience, and I think he was pleased when I became a lecturer at 24. I think he was proud, too, when I published my doctoral dissertation on medieval aesthetics. I know he secretly read it entirely, even though he couldn't understand all the Latin in it.' Eco clears his throat. He does another 'Mooo!' Clearly, after an hour and a half of talking, his vocal chords are feeling the strain. 
Promising that this will be my last question, I ask whether the success he had with The Name of the Rose was diminished because his father was not around to see it. 'Yes,' he says, 'absolutely. I was 50. As a consequence, the pleasure of that success for me was diminished. To this day, every day, I silently tell my father about what I am doing. He could be sceptical, and every time I was too enthusiastic he was there to provide me with a cold shower. 'We are always children, I think, even when we are old. We always need parental approval. I never needed it as much from my mother, though, because I knew she was convinced I was a genius from the age of five! With my own children I tried to strike more of a balance between my mother's approach and my father's.' He married his German-born wife, Renate, the year his father died. She too is an academic, teaching architecture at Milan university. The couple have two grown-up children: Stefano, a television producer, and Carlotta, an architect. 'I honed my storytelling skills by telling my children complicated bedtime stories,' Eco recalls. 'When they left home I didn't have anyone to tell the stories to, so I began to write.' Now he has grandchildren to tell stories to, when his voice is strong enough. They reward him by painting portraits of him. One, pinned to the wall, is by a four-year-old. It shows a round, jolly face with glasses, a scruffy beard and a big grin. Oddly enough, the likeness is uncanny. 'The Mysterious Flame of Queen Loana' (Secker & Warburg, £17.99) by Umberto Eco, published on 6 June, is available from Telegraph Books Direct (0870 155 7222) at £15.99 plus £2.25 p&p From checker at panix.com Sat May 28 19:35:53 2005 From: checker at panix.com (Premise Checker) Date: Sat, 28 May 2005 15:35:53 -0400 (EDT) Subject: [Paleopsych] Gary North: Mainstream Media vs. Upstream Media Message-ID: Gary North: Mainstream Media vs. 
Upstream Media http://www.lewrockwell.com/north/north375.html 5.5.25 Recently, I was flipping through the local TV channels. I get four stations clearly, but none is worth watching more than once a week. I stopped briefly at an interview. Talking head #1 was a nationally known TV news teleprompter reader, also known as an anchorman. The other one was unfamiliar to me. He was a print media journalist - a reporter. The anchorman began his questioning of the journalist with this observation. "We're both representatives of the MSM: mainstream media." It hit me. The MSM is at long last visibly on the defensive. The moment you acknowledge that you are part of the mainstream media, you are necessarily also acknowledging the existence of another media, which I like to call the Upstream Media. It swims against the mainstream, which is flowing downstream. It's easy to flow downstream. You just let nature take its course. The trouble with downstream rafting is that eventually you either hit the rapids or go over the falls. In any movie about going over the falls, someone in the raft asks: "What's that noise?" The optimists say that the river will carry them to the ocean. Fine. But if you don't climb off the raft, you will drift out to sea and disappear. The point is, at some point you had better get off the raft. The mainstream will eventually kill you. Today, because of the Internet, hundreds of millions of people are getting off the raft, all over the world. They grab a motorboat and head back upstream. There are lots of tributaries heading back upstream. No single tributary that is feeding into the river is getting all of the traffic. But hundreds of millions of people are now headed in the opposite direction, at least with respect to some important issues. Other issues will follow. Issue by issue, readers are concluding, "We've been lied to." They are correct. The Establishment at some point will face the implications of widespread disbelief in everything it says. 
At some point, people will not voluntarily do what they are told when they perceive their leaders as liars. When that day comes, political consensus will disintegrate. So will the mainstream Establishment's control systems. Upstream media were not readily accessible to most people as recently as a decade ago. The cost of locating alternative news sources was too high. The economists' rule held firm: "When price rises, less of the item is demanded." Now the same rule is being applied against the mainstream: "When price falls, more of the item will be demanded." The Internet has changed the relative pricing of media. This is a revolutionary turn of events. The price of obtaining alternative views is falling fast. In fact, the main expense today is the value of our time. We have less and less time for the boring, superficial, and lying mainstream media. They know it. There is nothing they can do about it. The monopoly that they have enjoyed for about 5,000 years is coming to an end. So is the free ride of political parties that rely on the mainstream media to keep the masses in line. NEWSPAPERS ARE DYING How much time do you spend each day reading newspapers? An hour? Probably not. How much time do you read on-line? More than you spend with a newspaper. Day by day, there are more people just like you. A decade ago, I subscribed to three daily newspapers and about two dozen magazines. I also subscribed to a dozen investment newsletters. Since 2000, I have subscribed only to half a dozen paper-based newsletters. More and more newsletters are digital. Instead of reading newspapers, I visit Websites. We all do. We are in news-overload mode. This is getting worse. Even with Google and similar search tools, we have too much on our plates. The allocation of our reading time has replaced the allocation of our subscription money as our most pressing reading problem. 
Year after year, the network news departments of the three main TV networks are watching the Nielsen numbers fall. The same thing is happening to newspapers. They are declining in circulation. This is especially true of local newspapers. Readers are interested in national news, and they go to the Web for it. This is creating a major problem for certain retail industries, most notably automobiles and furniture. Newspapers rely heavily on pages of full-page ads for local cars and furniture. As newspaper readers switch to on-line versions of local newspapers, as they are doing by the millions, the full-page ad's pull per dollar is fading. There are little ads on-screen, which we have learned to ignore. There are even PRINT THIS buttons that strip out most of the ads. When I post a link to an article, I always link to the print-screen version. Local car companies and furniture stores ought to have on-line sites that are kept up to date hourly. A car is sold. Its photo should immediately be taken off the Website. But retailers in these industries have not yet made the transition to the Web. They do not understand it. Web marketing is still in its shake-out period. Yet the Web is replacing newspapers today. The Web shake-out will not be over before hundreds of paper-based newspapers die. [9]Subscriptions since 1990 have been steadily falling at 1% per year. This has now risen to [10]over 5% in some cases. Beginning in the late nineteenth century, large-circulation urban newspapers shaped local public opinion in America. There were many papers, morning and evening, and each one represented one of the two major political parties. Then came radio and television, both regulated by the Federal Communications Commission, which controlled frequencies and station broadcasting power. Now the Web and for-pay satellite TV and radio have unplugged power from the FCC. The FCC legally regulates the content of only the no-pay airwaves. It does not regulate the Web at all. 
Politicians in the two parties have built their power base on the basis of controlling local media. Today, local media are dying: newspapers and local TV stations. Broadcasting is dying; narrowcasting is replacing it. This will force a re-structuring of American politics. LOCALISM IS DYING Two competing social forces are now moving in opposite directions. Retail outlets along the main drag in every city are going national: Wal-Mart, Home Depot, Office Depot, etc. Locally owned retailers of physical stuff are disappearing. Price competition is killing them. At the same time, information is decentralizing. Choices are decentralizing. These are aspects of the same trend: anti-localism. The decentralization of information is virtual, not geographical. It is the radical decentralization of the Drudge Report: straight into a guy's apartment in Hollywood. It bypasses regions, states, and townships. His apartment could be located anywhere. When information can come from anywhere and be delivered to anywhere at the same monetary price - zero - geography ceases to matter. We live in an information-centered age. So, we no longer live in a geography-centered age. This has never happened before. We are entering uncharted social waters. "What's that noise?" Localism is fading: local loyalties, local politics, local schools. Higher levels of government absorb our tax money. Textbooks are produced nationally. Local school boards are impotent. Local politics gets the leftovers. When I want to buy a new product, I go onto the Web and read reviews. Then I use the Web to find the cheapest seller. For electronics, the seller is usually located in the northeast, probably in New York City, and is not open on Saturdays. (The seller is not a Seventh-Day Adventist.) What do I care? To save 20%, I'll buy on Thursday. Besides, I can order on-line 24x7. The phrase "24x7" is a sign of the times. Locally owned stores are not open 24x7. Web-based digital shopping carts are. 
Regionalism is also fading. People are mobile. They move every five years. They do not establish local loyalties, which are costly to break. The ties that bind no longer bind very efficiently - in housing, occupations, regions, or marriage. Regional mobility has been going on in the United States ever since the earliest days. Free land meant the move west. The kids got in a wagon and moved away . . . forever. Opportunity in America has always trumped regional loyalty in the long run. Regional loyalties have faded with every reduction in transportation costs. U-Haul and Ryder have done their work well. The South will not rise again. Similarly, the Yankees of New England have not visibly run the country since Jack Kennedy died. They do it indirectly. After Lyndon Johnson, the Texas presidents have been ersatz: both Bushes are of Connecticut stock, by way of Yale University and Brown Brothers, Harriman, the investment banking firm. George W. Bush bought his Crawford, Texas ranch in 1999 in preparation for the 2000 Presidential campaign. I call it "Potemkin Ranch." Here is a man who could afford to buy 2.5 square miles of land 25 miles from Waco - not exactly prairie dog country. The mainstream media never bothered to point out these incongruities. That is why they are mainstream. When I say that the South will not rise again, I don't mean the old commitment of the South: resistance to centralized government. That idea is spreading as never before by means of the Web. It just isn't associated with a region any longer. This is not just an American phenomenon. It is becoming universal. The gatekeepers of every national government are on the defensive. The gates cannot easily keep out electronic digits. The gatekeepers have lost power ever since the invention of the printing press. They could exercise some control over printing presses, ink, and paper. They cannot control electronic digits. Our political world will change, even as our retailing world has changed. 
When it becomes obvious to voters that Washington, without robbing us blind, can no longer supply the stolen money with which it has bribed us for 90 years, mainstream politics will suffer a blow comparable to what the mainstream media are suffering. NEWSLETTERS ARE MORPHING INTO WEBSITES I mentioned that by 2000, I had cancelled all paper-based communications except for newsletters. They, too, are changing. They are dying off along with their editors. My favorite newsletter, Otto Scott's Compass, ceased publication in January, 2005. Mr. Scott, now in his mid-eighties, could no longer write it. His daughters placed him in a rest home. My second favorite newsletter, Hilaire du Barrier's report on European affairs, ceased publication a year ago when the editor, age 94, died. Neither man was famous. Both were lifelong journalists. Hilaire du Barrier was not a well-known journalist. Yet I honestly believe that any historian who tries to write about European affairs, 1945 to 2004, who does not have a set of Hilaire's newsletters will not get the story right. Hilaire was an upstream media man. He had been captured by the Japanese in 1941 in French Indo-China. He had been tortured for two weeks. He did not reveal anything about the network of French spies he knew about. After the war, they reciprocated. He had a network of informants like no other journalist I ever met. Yet he was always in the upstream. Almost no one knew about him. A couple of years before he died, I persuaded a friend of his to put all of his reports on a CD-ROM. At some point, this CD-ROM will go on-line. Of this, I am sure. Then his life's work will get the readers it always deserved. The story of the insiders' creation of the New Europe will then get the distribution it deserves. The gatekeepers have a problem. The insiders have a problem. The story is getting out. As it gets out, political loyalties fade. 
The European Union was sold to the voters by Jean Monnet and his successors on the basis of greater economic opportunity, not the benefits of a new political loyalty. There is still little political grass-roots loyalty to the European Union. France will probably vote against the new 230-page EU constitution. Anyway, I hope so. Websites are replacing paper-based newsletters. The flow of non-approved information is becoming a torrent. This undermines consensus. This process includes political consensus. Think of what home schooling means for the intellectual consensus. Think of the threat to the Powers That Be. The cost of textbook production has kept upstream interpretations away from most students. But now home school curriculum developers can get new views to millions of students by way of CD-ROM and the Internet. Parents who are sufficiently upstream to have pulled their children out of America's only established church - the public school system - are ready to consider new interpretations. This is driving the academic gatekeepers crazy. Their monopoly over the media is fading. Now their near-monopoly over tax-funded education is slipping. CAMPUS FOLLIES Today, American higher education absorbs something in the range of a third of a trillion dollars a year, and this is rising by about 7% a year - the sign of government-enforced monopoly. The government-supervised college accrediting system keeps out price competition. It also keeps upstream opinions out of most colleges. But this monopoly is producing the familiar result: falling standards and falling output. The young wife of a college professor (engineering) I know told me that at the college, where she is finishing her bachelor's degree in June, several of her professors in the social sciences will not accept as valid any citation from a Web site that does not end in .gov. These people are crazy leftists. 
I mean really crazy - over the top Democrats and statists who honestly believe that their students are being corrupted by non-.gov political Websites. They are trying to keep students away from non-government-approved digits. They really are crazy. They have lost touch with reality. They are tax-subsidized nut cases. In January, I visited an old friend who teaches history at an obscure state university. He and I were teaching assistants in the Western civilization program at the University of California, Riverside, in the late 1960s. That was back when all college grads had to take a class in Western civilization: dreary, long-dead days indeed. For 35 years, I have recalled that when he could not decide what grade to give a student exam, he would have me read it. This was always an A/B decision. Invariably, I could not help him. I always graded it the same way: right on the dividing line. Yet he was a New Deal Democrat, and I thought Reagan was a sell-out. (I voted for William Penn Patrick in the 1966 Republican gubernatorial primary.) We had the same sense of what constituted student competence. That world of semi-objective standards is gone - buried in waves of political correctness. He told me that his students today are extremely well-versed in digital research. They have grown up with the Internet. But, he said, there are two major problems: (1) they cannot evaluate the truth of what they read; (2) they are prone to submitting term papers that they have bought on-line. So, we are seeing the result of the triumph of official relativism in academia: "There is no objective truth." The students have bought the academic party line. They respond accordingly: (1) "One opinion is as good as another." (2) "A purchased term paper may be worth the money and risk." The Web is filled with conflicting opinions and cheap term papers. Problem: in engineering and architecture, this outlook can lead to collapsing structures. 
THE POOL OF TALENT Year by year, a third of the labor pool emerges with a college degree. Most of these degrees are in the humanities and social sciences. Meanwhile, [11]China produces over 450,000 college graduates a year in science and engineering - as many scientists and engineers as the United States has, total. Then, next year, China will do it again. There are teamwork issues here. There are also cultural mindsets. If I were an American manufacturer, I would rather employ a team of scientists and engineers that individually graduated from American colleges and whose members are entrepreneurial. Progress in commercial product development is not just a matter of individual competence in surviving formal education, based mainly on skill in mathematics. But as the comparative supply of such graduates shrinks in the United States, and as the American tradition of entrepreneurship invades Asia - as it is invading - there will come a time when wage competition from Asia will undermine the competitive advantage enjoyed today by teams of scientists in the United States. Even if companies develop products here, they will have them produced offshore. Only the most creative science grads will be amply rewarded here for product development. Civil engineers - road-builders - will have an advantage based on geography. Electrical engineers won't. Until the year 2001, Asia sent its best graduate students to study in the United States. The post 9/11 tightening of immigration standards (not on the border with Mexico, of course), coupled with the new prestige of Asian technical training, [12]has reduced the percentage of foreign graduate students in American universities. This has never happened before in the post World War II era. CONCLUSION Mainstream media are losing to upstream media. This is eroding consensus among readers and TV viewers. Cable and satellite TV are undermining the networks. The Web is undermining the newspapers. Narrowcasting is undermining broadcasting. 
Home schools are undermining the tax-funded schools, though only at the fringes. Only the colleges seem immune, where government control is greatest. But they are becoming a laughingstock, even though parents still shell out far more than they need to (at least three times more) by sending their children off to college. Parents who know the system can get their kids through school for under $15,000 - maybe as little as $10,000 - which means that the kids can pay for their college educations by working part-time. The Establishment is on the defensive even in the halls of ivy. This is becoming clear: price competition is now unstoppable. If you are not in a position to sell something cheaper, you are in big trouble. This fact is killing the mainstream media, which lost its ability to compete after 80 years of government regulation and protection. It is going to kill every other cozy little arrangement with the state. Sell services, not stuff. Sell services locally, where Chinese college graduates cannot compete. Sell information, where Chinese college graduates cannot compete . . . and not many American college graduates can, either. This is the era in which everything mainstream is hitting the rapids. The mainstreamers thought they were cruising up a lazy river. They weren't. "What's that noise?" Gary North is the author of [14]Mises on Money. Visit [15]http://www.freebooks.com. He is also the author of a free multi-volume series, [16]An Economic Commentary on the Bible. [17]Gary North Archives References 9. http://www.stateofthenewsmedia.org/narrative_newspapers_audience.asp?cat=3&media=2 10. http://www.editorandpublisher.com/eandp/news/article_display.jsp?vnu_content_id=1000904195 11. http://shurl.org/chinadegrees 12. http://news.com.com/2102-1008_3-5447691.html?tag=st.util.print 13. mailto:gary at kbot.com 14. http://www.lewrockwell.com/north/mom.html 15. http://www.freebooks.com/ 16. http://www.demischools.org/economic-bible-commentary.html 17. 
http://www.lewrockwell.com/north/north-arch.html From checker at panix.com Sat May 28 19:36:37 2005 From: checker at panix.com (Premise Checker) Date: Sat, 28 May 2005 15:36:37 -0400 (EDT) Subject: [Paleopsych] BBC: Wormhole 'no use' for time travel Message-ID: Wormhole 'no use' for time travel http://news.bbc.co.uk/1/hi/sci/tech/4564477.stm By Paul Rincon BBC News science reporter For budding time travellers, the future (or should that be the past?) is starting to look bleak. Hypothetical tunnels called wormholes once looked like the best bet for constructing a real time machine. These cosmic shortcuts, which link one point in the Universe to another, are favoured by science fiction writers as a means both of explaining time travel and of circumventing the limitations imposed by the speed of light. The concept of wormholes will be familiar to anyone who has watched the TV programmes Farscape, Stargate SG1 and Star Trek: Deep Space Nine. The opening sequence of the BBC's new Doctor Who series shows the Tardis hurtling through a "vortex" that suspiciously resembles a wormhole - although the Doctor's preferred method of travel is not explained in detail. But the idea of building these so-called traversable wormholes is looking increasingly shaky, according to two new scientific analyses. Remote connection A common analogy used to visualise these phenomena involves marking two holes at opposite ends of a sheet of paper, to represent distant points in the Universe. One can then bend the paper over so that the two remote points are positioned on top of each other. If it were possible to contort space-time in this way, a person might step through a wormhole and emerge at a remote time or distant location. 
The person would pass through a region of the wormhole called the throat, which flares out on either side. According to one idea, a wormhole could be kept open by filling its throat, or the region around it, with an ingredient called exotic matter. This is strange stuff indeed, and explaining it requires scientists to look beyond the laws of classical physics to the world of quantum mechanics. Exotic matter is repelled, rather than attracted, by gravity and is said to have negative energy - meaning it has even less than empty space. Law breaker But according to a new study by Stephen Hsu and Roman Buniy, of the University of Oregon, US, this method of building a traversable wormhole may be fatally flawed. In a paper published on the arXiv pre-print server, the authors looked at a kind of wormhole in which the space-time "tube" shows only weak deviations from the laws of classical physics. These "semi-classical" wormholes are the most desirable type for time travel because they potentially allow travellers to predict where and when they would emerge. Wormholes entirely governed by the laws of quantum mechanics, on the other hand, would likely transport their payloads to an undesired time and place. Calculations by the Oregon researchers show a wormhole that combines exotic matter with semi-classical space-time would be fundamentally unstable. This result relies in part on a previous paper in which Hsu and Buniy argued that systems which violate a physical principle known as the null energy condition become unstable. "We aren't saying you can't build a wormhole. But the ones you would like to build - the predictable ones where you can say Mr Spock will land in New York at 2pm on this day - those look like they will fall apart," Dr Hsu said. Tight squeeze A separate study by Chris Fewster, of the University of York, UK, and Thomas Roman, of Central Connecticut State University, US, takes a different approach to tackling the question of wormholes. 
Amongst other things, their analysis deals with the proposal that wormhole throats could be kept open using arbitrarily small amounts of exotic matter. Fewster and Roman calculated that, even if it were possible to build such a wormhole, its throat would probably be too small for time travel. It might - in theory - be possible to carefully fine-tune the geometry of the wormhole so that the wormhole throat became big enough for a person to fit through, says Fewster. But building a wormhole with a throat radius big enough to just fit a proton would require fine-tuning to within one part in 10 to the power of 30. A human-sized wormhole would require fine-tuning to within one part in 10 to the power of 60. "Frankly no engineer is going to be able to do that," said the York researcher. The authors are currently preparing a manuscript for publication. Supporting view However, there is still support for the idea of traversable wormholes in the scientific community. One physicist told BBC News they could see problems with Hsu's and Buniy's conclusions. "Violations of the null energy condition are known to occur in a number of situations. And their argument would prohibit any violation of it," they commented. "If that's true, then don't worry about Hawking radiation from a black hole; the entire black hole vacuum becomes unstable." The underlying physics was not in doubt, the researcher argued. The real challenge was in explaining how to engineer wormholes big enough to be of practical use. Cambridge astrophysicist Stephen Hawking is amongst those researchers who have pondered the question of wormholes. In the 1980s, he argued that something fundamental in the laws of physics would prevent wormholes being used for time travel. This idea forms the basis of Hawking's Chronology Protection Conjecture. 
From shovland at mindspring.com Sat May 28 21:19:26 2005 From: shovland at mindspring.com (Steve Hovland) Date: Sat, 28 May 2005 14:19:26 -0700 Subject: [Paleopsych] health care Message-ID: <01C56390.41EE64B0.shovland@mindspring.com> Most of the time the bowl shared by our animals is filled with filtered water. Spoiled brats :-) Steve Hovland www.stevehovland.net -----Original Message----- From: G. Reinhart-Waller [SMTP:waluk at earthlink.net] Sent: Saturday, May 28, 2005 9:17 AM To: The new improved paleopsych list Subject: Re: [Paleopsych] health care Medical costs without insurance become prohibitive for someone who has no formal job and works as an Independent Scholar. Your suggestions are right on target. Should filtered water also be used for pets? Gerry Reinhart-Waller Steve Hovland wrote: >These days the most important health care is >what you do for yourself. > >Drinking filtered water, not drinking very much >alcohol, eating good food, and getting the right >amount of exercise are the core. > >The things you can do to keep from getting >old too fast are also the things that keep you >healthy. > >"The Anti-Aging Solution" by Giampapa, Pero, >and Zimmerman is an excellent starting point. > >Steve Hovland >www.stevehovland.net > > >-----Original Message----- >From: JV Kohl [SMTP:jvkohl at bellsouth.net] >Sent: Friday, May 27, 2005 8:21 PM >To: The new improved paleopsych list >Subject: Re: [Paleopsych] health care > >http://www.mercola.com/2005/feb/16/medical_costs.htm > >"About half of all bankruptcies in 2001 were the result of medical >problems and, surprisingly, most of those (more than three-quarters) who >went bankrupt were covered by health insurance at the start of the illness." > >"On average, out-of-pocket medical costs reached $13,460 for those with >private insurance and $10,893 for those with no insurance. 
Ironically, >those with the highest costs, on average about $18,000, were people who >initially had private health insurance but lost it after becoming ill." > >Even those who don't go bankrupt tend to lose whatever savings they >have. I can't imagine much worse than seeing everything I've worked for >my entire life be spent on medical costs--and in many cases that's what >happens. > >I've worked in the medical laboratory for many years and have seen test >costs escalate far beyond most people's ability to pay. A co-worker >recently was diagnosed with Hodgkin's lymphoma and has estimated she >will be out of pocket more than $15,000 by the time she finishes >chemotherapy. One of the worst things about this is that we work at a >hospital, and have a group policy through Blue Cross/Blue Shield. Even >though she has seen only participating providers, the number of charges >not covered is enlightening. Most of us would be better off being >illegal immigrants, who get their medical care for free. About 4 >non-English speaking patients with no insurance deliver babies here (in >Northern Georgia) compared to each non-Hispanic White. Cancer and >disability supplements with set payments are expensive jokes, that will >only stay the bankruptcy for a short time.. > >Meanwhile, the Veteran's administration (the last place anyone wants to >go for medical treatment) no longer accepts honorably discharged >Veterans into their program, unless the Vet is earning less than $18,000 >per year. Until two years ago, the program was open to all Vets. Now >that there will be more injured vets coming home, I'd need to be half >dead and competely broke before seeking help from the Veterans >administration. > >I wish you all good health, you need it. > >Jim Kohl > > >Euterpel66 at aol.com wrote: > > > >>In a message dated 5/26/2005 11:01:07 P.M. Eastern Daylight Time, >>shovland at mindspring.com writes: >> >> --Only if you actually call it "health care". 
But not >> when you call it "socialist tyranny". It's all in the >> framing : ) >> >> Michael >> >>I imagine that you are one of the lucky ones whose employer provides >>yours. Mine costs me $468 a month, one third of my monthly income. My >>daughter has no healthcare insurance at all. >> >>Lorraine Rice >> >>Believe those who are seeking the truth. Doubt those who find it. >>---Andre Gide >> >>http://hometown.aol.com/euterpel66/myhomepage/poetry.html >> >>------------------------------------------------------------------------ >> >>_______________________________________________ >>paleopsych mailing list >>paleopsych at paleopsych.org >>http://lists.paleopsych.org/mailman/listinfo/paleopsych >> >> >> >> > << File: ATT00000.html >> << File: ATT00001.txt >> >_______________________________________________ >paleopsych mailing list >paleopsych at paleopsych.org >http://lists.paleopsych.org/mailman/listinfo/paleopsych > > > _______________________________________________ paleopsych mailing list paleopsych at paleopsych.org http://lists.paleopsych.org/mailman/listinfo/paleopsych From waluk at earthlink.net Sun May 29 02:01:05 2005 From: waluk at earthlink.net (G. Reinhart-Waller) Date: Sat, 28 May 2005 19:01:05 -0700 Subject: [Paleopsych] health care In-Reply-To: <01C56390.41EE64B0.shovland@mindspring.com> References: <01C56390.41EE64B0.shovland@mindspring.com> Message-ID: <42992261.20500@earthlink.net> Without funding for a filtering system, our pets suffer. Will they die because I cannot spoil them? :-D Gerry Steve Hovland wrote: >Most of the time the bowl shared by our >animals is filled with filtered water. > >Spoiled brats :-) > >Steve Hovland >www.stevehovland.net > > >-----Original Message----- >From: G. 
Reinhart-Waller [SMTP:waluk at earthlink.net] >Sent: Saturday, May 28, 2005 9:17 AM >To: The new improved paleopsych list >Subject: Re: [Paleopsych] health care > >Medical costs without insurance become prohibitive for someone who has >no formal job and works as an Independent Scholar. Your suggestions are >right on target. Should filtered water also be used for pets? > >Gerry Reinhart-Waller > > > >Steve Hovland wrote: > > > >>These days the most important health care is >>what you do for yourself. >> >>Drinking filtered water, not drinking very much >>alcohol, eating good food, and getting the right >>amount of exercise are the core. >> >>The things you can do to keep from getting >>old too fast are also the things that keep you >>healthy. >> >>"The Anti-Aging Solution" by Giampapa, Pero, >>and Zimmerman is an excellent starting point. >> >>Steve Hovland >>www.stevehovland.net >> >> >>-----Original Message----- >>From: JV Kohl [SMTP:jvkohl at bellsouth.net] >>Sent: Friday, May 27, 2005 8:21 PM >>To: The new improved paleopsych list >>Subject: Re: [Paleopsych] health care >> >>http://www.mercola.com/2005/feb/16/medical_costs.htm >> >>"About half of all bankruptcies in 2001 were the result of medical >>problems and, surprisingly, most of those (more than three-quarters) who >>went bankrupt were covered by health insurance at the start of the illness." >> >>"On average, out-of-pocket medical costs reached $13,460 for those with >>private insurance and $10,893 for those with no insurance. Ironically, >>those with the highest costs, on average about $18,000, were people who >>initially had private health insurance but lost it after becoming ill." >> >>Even those who don't go bankrupt tend to lose whatever savings they >>have. I can't imagine much worse than seeing everything I've worked for >>my entire life be spent on medical costs--and in many cases that's what >>happens. 
>> >>I've worked in the medical laboratory for many years and have seen test >>costs escalate far beyond most people's ability to pay. A co-worker >>recently was diagnosed with Hodgkin's lymphoma and has estimated she >>will be out of pocket more than $15,000 by the time she finishes >>chemotherapy. One of the worst things about this is that we work at a >>hospital, and have a group policy through Blue Cross/Blue Shield. Even >>though she has seen only participating providers, the number of charges >>not covered is enlightening. Most of us would be better off being >>illegal immigrants, who get their medical care for free. About 4 >>non-English speaking patients with no insurance deliver babies here (in >>Northern Georgia) compared to each non-Hispanic White. Cancer and >>disability supplements with set payments are expensive jokes that will >>only stay the bankruptcy for a short time. >> >>Meanwhile, the Veterans Administration (the last place anyone wants to >>go for medical treatment) no longer accepts honorably discharged >>Veterans into their program, unless the Vet is earning less than $18,000 >>per year. Until two years ago, the program was open to all Vets. Now >>that there will be more injured vets coming home, I'd need to be half >>dead and completely broke before seeking help from the Veterans >>Administration. >> >>I wish you all good health; you need it. >> >>Jim Kohl >> >> >>Euterpel66 at aol.com wrote: >> >> >> >> >> >>>In a message dated 5/26/2005 11:01:07 P.M. Eastern Daylight Time, >>>shovland at mindspring.com writes: >>> >>> --Only if you actually call it "health care". But not >>> when you call it "socialist tyranny". It's all in the >>> framing : ) >>> >>> Michael >>> >>>I imagine that you are one of the lucky ones whose employer provides >>>yours. Mine costs me $468 a month, one third of my monthly income. My >>>daughter has no healthcare insurance at all. >>> >>>Lorraine Rice >>> >>>Believe those who are seeking the truth. 
Doubt those who find it. >>>---Andre Gide >>> >>>http://hometown.aol.com/euterpel66/myhomepage/poetry.html >>> >>>------------------------------------------------------------------------ >>> >>>_______________________________________________ >>>paleopsych mailing list >>>paleopsych at paleopsych.org >>>http://lists.paleopsych.org/mailman/listinfo/paleopsych >>> >>> >>> >>> >>> >>> >><< File: ATT00000.html >> << File: ATT00001.txt >> >>_______________________________________________ >>paleopsych mailing list >>paleopsych at paleopsych.org >>http://lists.paleopsych.org/mailman/listinfo/paleopsych >> >> >> >> >> > >_______________________________________________ >paleopsych mailing list >paleopsych at paleopsych.org >http://lists.paleopsych.org/mailman/listinfo/paleopsych >_______________________________________________ >paleopsych mailing list >paleopsych at paleopsych.org >http://lists.paleopsych.org/mailman/listinfo/paleopsych > > > From shovland at mindspring.com Sun May 29 14:15:30 2005 From: shovland at mindspring.com (Steve Hovland) Date: Sun, 29 May 2005 07:15:30 -0700 Subject: [Paleopsych] health care Message-ID: <01C5641E.33484D20.shovland@mindspring.com> No, but all of you will be healthier with some kind of filter. The PUR filters that fit on the faucet are about $30. Steve Hovland www.stevehovland.net -----Original Message----- From: G. Reinhart-Waller [SMTP:waluk at earthlink.net] Sent: Saturday, May 28, 2005 7:01 PM To: The new improved paleopsych list Subject: Re: [Paleopsych] health care Without funding for a filtering system, our pets suffer. Will they die because I cannot spoil them? :-D Gerry Steve Hovland wrote: >Most of the time the bowl shared by our >animals is filled with filtered water. > >Spoiled brats :-) > >Steve Hovland >www.stevehovland.net > > >-----Original Message----- >From: G. 
Reinhart-Waller [SMTP:waluk at earthlink.net] >Sent: Saturday, May 28, 2005 9:17 AM >To: The new improved paleopsych list >Subject: Re: [Paleopsych] health care > >Medical costs without insurance become prohibitive for someone who has >no formal job and works as an Independent Scholar. Your suggestions are >right on target. Should filtered water also be used for pets? > >Gerry Reinhart-Waller > > > >Steve Hovland wrote: > > > >>These days the most important health care is >>what you do for yourself. >> >>Drinking filtered water, not drinking very much >>alcohol, eating good food, and getting the right >>amount of exercise are the core. >> >>The things you can do to keep from getting >>old too fast are also the things that keep you >>healthy. >> >>"The Anti-Aging Solution" by Giampapa, Pero, >>and Zimmerman is an excellent starting point. >> >>Steve Hovland >>www.stevehovland.net >> >> >>-----Original Message----- >>From: JV Kohl [SMTP:jvkohl at bellsouth.net] >>Sent: Friday, May 27, 2005 8:21 PM >>To: The new improved paleopsych list >>Subject: Re: [Paleopsych] health care >> >>http://www.mercola.com/2005/feb/16/medical_costs.htm >> >>"About half of all bankruptcies in 2001 were the result of medical >>problems and, surprisingly, most of those (more than three-quarters) who >>went bankrupt were covered by health insurance at the start of the illness." >> >>"On average, out-of-pocket medical costs reached $13,460 for those with >>private insurance and $10,893 for those with no insurance. Ironically, >>those with the highest costs, on average about $18,000, were people who >>initially had private health insurance but lost it after becoming ill." >> >>Even those who don't go bankrupt tend to lose whatever savings they >>have. I can't imagine much worse than seeing everything I've worked for >>my entire life be spent on medical costs--and in many cases that's what >>happens. 
>> >>I've worked in the medical laboratory for many years and have seen test >>costs escalate far beyond most people's ability to pay. A co-worker >>recently was diagnosed with Hodgkin's lymphoma and has estimated she >>will be out of pocket more than $15,000 by the time she finishes >>chemotherapy. One of the worst things about this is that we work at a >>hospital, and have a group policy through Blue Cross/Blue Shield. Even >>though she has seen only participating providers, the number of charges >>not covered is enlightening. Most of us would be better off being >>illegal immigrants, who get their medical care for free. About 4 >>non-English speaking patients with no insurance deliver babies here (in >>Northern Georgia) compared to each non-Hispanic White. Cancer and >>disability supplements with set payments are expensive jokes that will >>only stay the bankruptcy for a short time. >> >>Meanwhile, the Veterans Administration (the last place anyone wants to >>go for medical treatment) no longer accepts honorably discharged >>Veterans into their program, unless the Vet is earning less than $18,000 >>per year. Until two years ago, the program was open to all Vets. Now >>that there will be more injured vets coming home, I'd need to be half >>dead and completely broke before seeking help from the Veterans >>Administration. >> >>I wish you all good health; you need it. >> >>Jim Kohl >> >> >>Euterpel66 at aol.com wrote: >> >> >> >> >> >>>In a message dated 5/26/2005 11:01:07 P.M. Eastern Daylight Time, >>>shovland at mindspring.com writes: >>> >>> --Only if you actually call it "health care". But not >>> when you call it "socialist tyranny". It's all in the >>> framing : ) >>> >>> Michael >>> >>>I imagine that you are one of the lucky ones whose employer provides >>>yours. Mine costs me $468 a month, one third of my monthly income. My >>>daughter has no healthcare insurance at all. >>> >>>Lorraine Rice >>> >>>Believe those who are seeking the truth. 
Doubt those who find it. >>>---Andre Gide >>> >>>http://hometown.aol.com/euterpel66/myhomepage/poetry.html >>> >>>------------------------------------------------------------------------ >>> >>>_______________________________________________ >>>paleopsych mailing list >>>paleopsych at paleopsych.org >>>http://lists.paleopsych.org/mailman/listinfo/paleopsych >>> >>> >>> >>> >>> >>> >><< File: ATT00000.html >> << File: ATT00001.txt >> >>_______________________________________________ >>paleopsych mailing list >>paleopsych at paleopsych.org >>http://lists.paleopsych.org/mailman/listinfo/paleopsych >> >> >> >> >> > >_______________________________________________ >paleopsych mailing list >paleopsych at paleopsych.org >http://lists.paleopsych.org/mailman/listinfo/paleopsych >_______________________________________________ >paleopsych mailing list >paleopsych at paleopsych.org >http://lists.paleopsych.org/mailman/listinfo/paleopsych > > > _______________________________________________ paleopsych mailing list paleopsych at paleopsych.org http://lists.paleopsych.org/mailman/listinfo/paleopsych From checker at panix.com Mon May 30 01:40:23 2005 From: checker at panix.com (Premise Checker) Date: Sun, 29 May 2005 21:40:23 -0400 (EDT) Subject: [Paleopsych] NYT: (Class) When the Joneses Wear Jeans Message-ID: When the Joneses Wear Jeans Class Matters - Social Class and Status Markers in the United States of America - The New York Times - New York Times http://www.nytimes.com/2005/05/29/national/class/CONSUMPTION-FINAL.html? By [2]JENNIFER STEINHAUER BEACHWOOD, Ohio - It was 4:30 p.m., sweet hour of opportunity at the Beachwood Place Mall. Shoppers were drifting into stores in the rush before dinner, and the sales help, as if on cue, began a retail ritual: trying to tell the buyers from the lookers, the platinum-card holders from those who could barely pay their monthly minimum balance. It is not always easy. 
Ellyn Lebby, a sales clerk at Saks Fifth Avenue, said she had a customer who regularly bought $3,000 suits but "who looks like he should be standing outside shaking a cup." At Oh How Cute, a children's boutique, the owner, Kira Alexander, checks out shoppers' fingernails. A good manicure usually signals money. "But then again," Ms. Alexander conceded, "I don't have nice nails and I can buy whatever I want." Down the mall at the Godiva chocolate store, Mark Fiorilli, the manager, does not even bother trying to figure out who has money. Over the course of a few hours, his shoppers included a young woman with a giant diamond ring and a former airplane parts inspector living off her disability checks. "You can't make assumptions," Mr. Fiorilli said. Social class, once so easily assessed by the car in the driveway or the purse on the arm, has become harder to see in the things Americans buy. Rising incomes, flattening prices and easily available credit have given so many Americans access to such a wide array of high-end goods that traditional markers of status have lost much of their meaning. A family squarely in the middle class may own a flat-screen television, drive a BMW and indulge a taste for expensive chocolate. A wealthy family may only further blur the picture by shopping for wine at Costco and bath towels at Target, which for years has stocked its shelves with high-quality goods. Everyone, meanwhile, appears to be blending into a classless crowd, shedding the showiest kinds of high-status clothes in favor of a jeans-and-sweatsuit informality. When Vice President Dick Cheney, a wealthy man in his own right, attended a January ceremony in Poland to commemorate the liberation of Nazi death camps, he wore a parka. But status symbols have not disappeared. As luxury has gone down-market, the marketplace has simply gone one better, rolling out ever-pricier goods and pitching them to the ever-loftier rich. 
This is an America of $130,000 Hummers and $12,000 mother-baby diamond tennis bracelet sets, of $600 jeans, $800 haircuts and slick new magazines advertising $400 bottles of wine. Then there are the new badges of high-end consumption that may be less readily conspicuous but no less potent. Increasingly, the nation's richest are spending their money on personal services or exclusive experiences and isolating themselves from the masses in ways that go beyond building gated walls. These Americans employ about 9,000 personal chefs, up from about 400 just 10 years ago, according to the [3]American Personal Chef Association. They are taking ever more exotic vacations, often in private planes. They visit plastic surgeons and dermatologists for costly and frequent cosmetic procedures. And they are sending their children to $400-an-hour math tutors, summer camps at French chateaus and crash courses on managing money. "Whether or not someone has a flat-screen TV is going to tell you less than if you look at the services they use, where they live and the control they have over other people's labor, those who are serving them," said [4]Dalton Conley, an author and a sociologist at [5]New York University. Goods and services have always been means to measure social station. [6]Thorstein Veblen, the political economist who coined the phrase "conspicuous consumption" at the beginning of the last century, observed that it was the wealthy "leisure class," in its "manner of life and its standards of worth," that set the bar for everyone else. "The observance of these standards," Veblen wrote, "in some degree of approximation, becomes incumbent upon all classes lower in the scale." So it is today. In a [7]recent poll by The New York Times, fully 81 percent of Americans said they had felt social pressure to buy high-priced goods. But what Veblen could not have foreseen is where some of that pressure is coming from, says [8]Juliet B. 
Schor, a professor of sociology at [9]Boston College who has written widely on consumer culture. While the rich may have always set the standards, Professor Schor said, the actual social competition used to be played out largely at the neighborhood level, among people in roughly the same class. In the last 30 years or so, however, she said, as people have become increasingly isolated from their neighbors, a barrage of magazines and television shows celebrating the toys and totems of the rich has fostered a whole new level of desire across class groups. A "horizontal desire," coveting a neighbor's goods, has been replaced by a "vertical desire," coveting the goods of the rich and the powerful seen on television, Professor Schor said. "The old system was keeping up with the Joneses," she said. "The new system is keeping up with the Gateses." Of course only other billionaires actually can. Most Americans are staring across a widening income gap between them and the very rich, making such vertical desire all the more unrealistic. "There is a bigger gap between the average person and what they are aspiring to," Professor Schor said. But others who study consumer behavior say that the wanting and getting of material goods is not just a competitive exercise. In this view, Americans care less about emulating the top tier than about simply having a fair share of the bounty and a chance to carve out a place for themselves in society. "People like having stuff, and stuff is good for people," said [10]Thomas O'Guinn, a professor of advertising at the [11]University of Illinois who has written textbooks on marketing and consumption. "One thing modernity brought with it was all kinds of identities, the ability for people to choose who you want to be, how you want to decorate yourself, what kind of lifestyle you want. And what you consume cannot be separated from that." 
Falling Prices, Rising Debt Throughout the mall in this upscale suburb of Cleveland, high-priced merchandise was moving: $80 cotton rompers at Oh How Cute, $40 scented candles at Bigelow Pharmacy. And everywhere, it seemed, was the sound of cellphones, one ringing out with a salsa tune, another with bars from Brahms. Few consumer items better illustrate the democratization of luxury than the cellphone, once immortalized as the ultimate toy of exclusivity by Michael Douglas as he tromped around the 1987 movie "Wall Street" screaming into one roughly the size of a throw pillow. Now, about one of every two Americans uses a cellphone; last year, there were 176 million subscribers, almost eight times the number a decade ago, according to the [12]market research firm IDC. The number has soared because prices have correspondingly plummeted, to about an eighth of what they were 10 years ago. The pattern is a familiar one in consumer electronics. What begins as a high-end product - a laptop computer, a DVD player - gradually goes mass market as prices fall and production rises, largely because of the cheap labor costs in developing countries that are making more and more of the goods. That sort of "global sourcing" has had a similar impact across the American marketplace. The prices of clothing, for example, have barely risen in the last decade, while department store prices in general fell 10 percent from 1994 to 2004, the federal government says. Even where luxury-good prices have remained forbiddingly high, some manufacturers have come up with strategies to cast more widely for customers, looking to middle-class consumers, whose incomes have generally risen in recent years; the median family income in the United States grew 17.6 percent from 1983 to 2003, when adjusted for inflation. 
One way makers of luxury cars have tapped into this market is by introducing cheaper versions of their cars, trying to lure younger, less-affluent buyers in the hope that they may upgrade to more prestigious models as their incomes grow. Mercedes-Benz, BMW and Audi already offer cars costing about $30,000 and now plan to introduce models that will sell for about $25,000. Entry-level luxury cars are the fastest growing segment of that industry. "The big new trend that is coming to the U.S. is 'subluxury' cars," said David Thomas, editor of [13]Autoblog, an online automotive guide. "The real push now is to go a step lower, but the car makers won't say 'lower.' " The luxury car industry is just one that has made its products more accessible to the middle class. The cruise industry, once associated with the upper crust, is another. "The cruise business has totally evolved," said [14]Oivind Mathisen, editor of the newsletter [15]Cruise Industry News, "and become a business that caters to moderate incomes." The luxury end makes up only 10 percent of the cruise line market now, Mr. Mathisen said. Yet today's cruise ships continue to trade on the vestiges of their upper-class mystique, even while offering new amenities like on-board ice skating and wall-climbing. Though dinner with the captain may be a thing of the past, the ships still pamper guests with spas, boutiques and sophisticated restaurants. All that can be had for an average of $1,500 a week per person, a price that has gone almost unchanged in 15 years, Mr. Mathisen said. The industry has kept prices down in part by buying bigger ships, the better to accommodate a broader clientele. But affordable prices are only one reason the marketplace has blurred. Americans have loaded up on expensive toys largely by borrowing and charging. They now owe about $750 billion in revolving debt, according to the Federal Reserve, a six-fold increase from two decades ago. 
That huge jump can be traced in part to the credit industry's explosive growth. Over the last 20 years, the industry became increasingly lenient about whom it was willing to extend credit to, more sophisticated about assessing credit risks and increasingly generous in how much it would let people borrow, as long as those customers were willing to pay high fees and risk living in debt. As a result, to take one example, millions of Americans who could not have dreamed of buying their own homes two decades ago are now doing so in record numbers because of a sharp drop in mortgage interest rates, a surge in the number of mortgages granted and the creation of the sub-prime lending industry, which gives low-income people access to credit at high cost. "Creditors love the term the 'democratization of credit,' " said Travis B. Plunkett, the legislative director of the [16]Consumer Federation of America, a consumer lobbying group. "Over all, it has certainly had a positive effect. Many families that never had access to credit now do. The problem is that a flood of credit is now available to many financially vulnerable families and extended in a reckless and aggressive manner in many cases without thought to implications. The creditors say it has driven the economy forward and helped many families improve their financial lives, but they omit talking about the other half of the equation." The Marketers' Response Marketers have had to adjust their strategies in this fluid world of consumerism. Where once they pitched advertisements primarily to a core group of customers - men earning $35,000 to $50,000 a year, say - now they are increasingly fine-tuning their efforts, trying to identify potential customers by interests and tastes as well as by income level. "The market dynamics have changed," said [17]Idris Mootee, a marketing expert based in Boston. "It used to be clearly defined by how much you can afford. 
Before, if you belonged to a certain group, you shopped at Wal-Mart and bought the cheapest coffee and bought the cheapest sneakers. Now, people may buy the cheapest brand of consumer goods but still want Starbucks coffee and an iPod." Merchandisers, for example, might look at two golfers, one lower middle class, the other wealthy, and know that they read the same golf magazine, see the same advertisements and possibly buy the same quality driver. The difference is that one will be splurging and then play on a public course while the other will not blink at the price and tee off at a private country club. Similarly, a middle-income office manager may save her money to buy a single luxury item, like a Chanel jacket, the same one worn by a wealthy homemaker who has a dozen others like it in her $2.5 million house. Marketers also know that today's shoppers have unpredictable priorities. Robert Gross, who was wandering the Beachwood mall with his son David, said he couldn't live without his annual cruise. Mr. Gross, 65, also prizes his two diamond pinkie rings, his racks of cashmere sweaters and his Mercedes CLK 430. "My license plate reads BENZ4BOB," he said. "Does that tell you what kind of person I am?" But a taste for luxury goods did not stop Mr. Gross, an accountant, from scoffing as David paid $30 for a box of Godiva chocolates for his wife. The elder Mr. Gross had been to a local chocolate maker. "I went to Malley's," he said, "and bought my chocolate half price." Yet virtually no company that has built a reputation as a purveyor of luxury goods will want to lose its foothold in that territory, even as it lowers prices on some items and sells them to a wider audience. If one high-end product has slipped into the mass market, then a new one will have to take its place at the top. Until the early 1990's, Godiva sold only in Neiman Marcus and a few other upscale stores. 
Today it is one of those companies whose customers drift in from all points along the economic spectrum. Its candy can now be found in 2,500 outlets, including Hallmark card stores and middle-market department stores like Dillard's. "People want to participate in our brand because we are an affordable luxury," said Gene Dunkin, president of Godiva North America, a unit of the Campbell Soup Company. "For under $1 to $350, with an incredible luxury package, we give the perception of a very expensive product." But the company is also trying simultaneously to hold on to the true luxury market, which has increasingly been seduced away by small, expensive artisan chocolate makers, many from Europe, that are opening around the country. Two years ago, Godiva introduced its most expensive line ever, "G," handmade chocolates selling for $100 a pound. Today it is available only in holiday seasons and only at selected stores. The New Status Symbols While the rest of the United States may appear to be catching up with the Joneses, the richest Joneses have already moved on. Some have slipped out of sight, buying bigger and more lavish homes in neighborhoods increasingly insulated from the rest of Americans. But the true measure of upper class today is in the personal services indulged in. Professor Conley, the New York University sociologist, refers to these less tangible badges of status as "positional goods." Consider a couple who hire a baby sitter to pick up their children from school while they both work, he said. Their status would generally be lower than the couple who could pick up their children themselves, because the second couple would have enough earning power to allow one parent to stay at home while the other worked. But the second couple would actually occupy the second rung in this after-school hierarchy. "In the highest group of all is the parent who has a nanny along," Professor Conley said. 
Status among people in the top tier, he said, "is the time spent being waited on, being taken care of in nail salons, and how many people who work for them." From 1997 to 2002, revenues from hair, nail and skin care services jumped by 42 percent nationwide, Census Bureau data shows. Revenues from what the bureau described as "other personal services" increased 74 percent. Indeed, in some cases, services and experiences have replaced objects as the true symbols of high status. "Anyone can buy a one-off expensive car," said Paul Nunes, who with Brian Johnson wrote [18]"Mass Affluence," a book on marketing strategies. "But it is lifestyle that people are competing on more now. It is which sports camps do your kids go to and how often, which vacations do you take, even how often do you do things like go work for Habitat for Humanity, which is a charitable expense people can compete with." In the country's largest cities, otherwise prosaic services have been transformed into status symbols simply because of the price tag. In New York last year, one salon introduced an $800 haircut, and a Japanese restaurant, Masa, opened with a $350 prix fixe dinner (excluding tax, tips and beverages). The experience is not just about a good meal, or even an exquisite one; it is about a transformative encounter in a Zen-like setting with a chef who decides what will be eaten and at what pace. And it is finally about exclusivity: there are only 26 seats. Today, one of the most sought-after status symbols in New York is a [19]Masa reservation. And that is how the marketplace works, Professor Conley says. For every object of desire, another will soon come along to trump it, fueling aspirations even more. "Class now is really like three-card monte," he said. "The moment the lower-status aspirant thinks he has located the nut under the shell, it has actually shifted, and he is too late. " References 2. 
http://query.nytimes.com/search/query?ppds=bylL&v1=JENNIFER%20STEINHAUER&fdq=19960101&td=sysdate&sort=newest&ac=JENNIFER%20STEINHAUER&inline=nyt-per
3. http://www.personalchef.com/association.htm
4. http://www.nyu.edu/fas/Faculty/ConleyDalton.html
5. http://www.nyu.edu/
6. http://en.wikipedia.org/wiki/Thorstein_Veblen
7. http://www.nytimes.com/packages/html/national/20050515_CLASS_GRAPHIC/index_04.html
8. http://www2.bc.edu/~schorj/
9. http://www.bc.edu/
10. http://www.comm.uiuc.edu/faculty/OGuinn.html
11. http://www.uillinois.edu/
12. http://www.idc.com/
13. http://www.autoblog.com/
14. http://www.cruiseindustrynews.com/index.php?option=com_contact&task=view&contact_id=3&Itemid=3
15. http://www.cruiseindustrynews.com/
16. http://www.consumerfed.org/
17. http://www.highintensitymarketing.com/
18. http://harvardbusinessonline.hbsp.harvard.edu/b02/en/common/item_detail.jhtml;jsessionid=LF0ATY5RLB5JKAKRGWDR5VQBKE0YIIPS?id=8584SL
19. http://www.nytimes.com/2004/12/29/dining/29REST.html

From checker at panix.com Mon May 30 01:40:33 2005 From: checker at panix.com (Premise Checker) Date: Sun, 29 May 2005 21:40:33 -0400 (EDT) Subject: [Paleopsych] NYT: (Class) God and Man in the Ivy League (4 Letters) Message-ID: God and Man in the Ivy League (4 Letters) http://www.nytimes.com/2005/05/27/opinion/l27class.html?pagewanted=print To the Editor: Re "On a Christian Mission to the Top: Evangelicals Set Their Sights on the Ivy League" ("Class Matters" series, front page, May 22): As a Columbia student, I was amused to read this article. Although the Christian Union may intend to "reclaim the Ivy League for Christ," I and the overwhelming majority of my friends are increasingly skeptical of organized religion and its minions. Considering the Bush administration's perverse manipulation of Christianity to invade Iraq, and the increasing blurring of church and state, I am ever wary of those who proselytize on my secular campus. 
Deena Guzder Sugar Land, Tex., May 22, 2005 To the Editor: The Christian Union wants to reclaim the Ivy League for Christ, and evangelical Republicans are using the legislature and the judiciary to create a United States of Christ. It's infuriating that evangelicals are going to such lengths to assert their power. College provides a forum for expression of different opinions and varying religious views. It is spiritually disrespectful and a violation of the premises of a liberal arts education to impose any one religion upon the rest of the student body. As Brown University parents, we are appalled that these students and their mentors view the campus as a place to proselytize and recruit. Colleges are meant to open people's minds, not close them. Students may attend programs such as Hillel, Newman and Christian Houses, but these are not a replacement for other fascinating and expansive opportunities to meet and learn from people very different from themselves. Beryl Minkle Haakon Chevalier Cambridge, Mass., May 23, 2005 To the Editor: It is understandable that it makes some of us uneasy when evangelicals set out to reclaim the Ivy League for Christ. Not so long ago these universities had mandatory Christian prayer and a quota on admissions of Jews. Brown University, under the leadership of its president, Ruth Simmons, just instituted need-blind admissions, which will open doors for evangelicals and others of modest means. I hope that the evangelicals on campus will recognize that tolerance benefits all of us, and I invite them to walk down the hill and visit First Unitarian Church, where I am a member and all are welcome. Nancy Green Providence, R.I., May 23, 2005 To the Editor: Lest readers think that the Christian Union's effort to reclaim the Ivy League for Christ represents the only recent religious activity at Brown University, I would like to point out the rise of Interfaith House, one of several interfaith initiatives here. 
Interfaith House, which I helped found, is a growing residential community of about 30 students from a variety of faiths, including students active in evangelical groups. As it begins its third year next fall, it will seek to enrich the hearts and minds of its members and the Brown community through discussions on topics including compassion, conscience and leading a religious or spiritual life. Julian Leichty Providence, R.I., May 23, 2005 From checker at panix.com Mon May 30 01:40:43 2005 From: checker at panix.com (Premise Checker) Date: Sun, 29 May 2005 21:40:43 -0400 (EDT) Subject: [Paleopsych] NYT: (Class) Life Without a College Degree (6 Letters) Message-ID: Life Without a College Degree (6 Letters) http://www.nytimes.com/2005/05/29/opinion/l29class.html To the Editor: Re "The College Dropout Boom" ("Class Matters" series, front page, May 24): You comment that New York and some other states are linking higher education dollars to graduation rates rather than simply to admissions. This is yet another recipe for a collapse of standards because of pressures to pass students without requiring proof of learning. Such policies affect the functioning within institutions where departments give high grades to entice enrollments and thus enhance their financing and faculty lines (students vote with their feet). Departments where the average course grades approach 3.5 are dishonest and illustrate the failure of faculty to distinguish between those students who are capable and those who are not. At CUNY senior colleges, we often see community college graduates and transfers with high averages, yet fundamental knowledge is lacking. These 3-plus cumulative averages are meaningless and do little but provide the student with a false sense of accomplishment. Yes, the institutions may receive their financing, but at what cost? Peter C. Chabora Flushing, Queens, May 24, 2005 The writer is a biology professor at Queens College, CUNY. 
To the Editor: Here's an idea to create an additional incentive for elite colleges to recruit working-class students: Why doesn't U.S. News & World Report revise its highly influential rankings to give greater weight to economic diversity in the student body as a measure of the academic quality of the institution? Most people who use the rankings just jump to the list. They won't notice that methodological change, but they and certainly the colleges will notice if their ranking drops. It's a simple change that could help lead to a stronger meritocracy in this country by giving more Americans the chance to pursue that crucial college degree. Izzat Jarudi Washington, May 24, 2005 To the Editor: As a financial aid student at Yale and a leader in the recent successful campaign for financial aid reform there, I believe that universities' recent moves to provide increased financial support to low-income students are admirable. At Yale, pressure from students, alumni, workers and community members impelled administrators to act, just as similar pressure led to the admission of women, blacks and other underrepresented groups to many schools in the past. But if universities are truly to take on what Lawrence H. Summers, the president of Harvard, calls "the most serious domestic problem in the United States today," then changes to financial aid are not enough. Until the children of university employees can afford the education their parents make possible, universities will be not a "powerful weapon" of change but preservers of class inequality. Phoebe Rounds New Haven, May 24, 2005 To the Editor: As a working-class girl who was able to graduate from an elite liberal arts college and to complete a Ph.D. at an Ivy League university, I know both the extraordinary blessings and some deep personal costs of a class-changing college education. 
Nevertheless, your article reflects another class bias in seeming to suggest that the only good education is one gained at an elite institution. I have taught at CUNY for more than 20 years, and the best students here match up with those I have taught and met at elite institutions around the country. Almost all of our students are first-generation college attendees, though. Ironically, they need even more the support that permeates middle-class kids' lives and gets reinforced in college dormitories. The class change for our students has to be negotiated every day as they go from their families, who may not fully understand their college experience, to our campuses, where we are bringing them to another way of looking at the world. I admire our students' accomplishments every day; I just wish that we did not try to measure their success by the same standards that apply to more privileged institutions. Public support for urban higher education is an investment in preserving a humane and diverse public. Joan C. Tronto New York, May 24, 2005 The writer is a political science professor at Hunter College, CUNY. To the Editor: You suggest that the outlook for so-called college dropouts is gloomy, explaining that less than half of low-income students graduate within five years and saying that while many who leave college plan to return, few actually do. But conventional time intervals for tracking students are a big part of the problem. Twenty-year follow-up data analyzed by me and Paul Attewell on low-income students in a national survey demonstrate that graduation rates calculated after even six years would classify as dropouts many students who eventually earn degrees. More than a quarter of B.A. earners needed more than six years to cross the academic finish line. Forty percent of these delayed-degree recipients took 11 or more years. Tracking periods are increasingly out of touch with the realities of college-going today. 
Many low-income students leave college so that they can work to defray tuition costs and/or support families. But they often return and in time complete their degrees. David E. Lavin New York, May 25, 2005 The writer is a sociology professor at the City University Graduate Center. To the Editor: As an adjunct professor at a four-year liberal arts college that is 10 minutes from Chilhowie, Va., the focus of your article, I would emphasize the role of local culture that instills in many students in southwest Virginia just one kind of knowledge: the knowledge that they cannot succeed. As they struggle with tuition, academics and new ideas, these students are not buoyed by the determination to become the first in their families to graduate from college. Rather, with regard to higher education, they often embrace a fatalism that is part of family tradition and local custom. Claudia Keenan Abingdon, Va., May 25, 2005 From checker at panix.com Mon May 30 01:41:05 2005 From: checker at panix.com (Premise Checker) Date: Sun, 29 May 2005 21:41:05 -0400 (EDT) Subject: [Paleopsych] SW: On the Scheme of Animal Phyla Message-ID: Evolutionary Biology: On the Scheme of Animal Phyla http://scienceweek.com/2005/sw050603-3.htm The following points are made by M. Jones and M. Blaxter (Nature 2005 434:1076): 1) Despite the comforting certainty of textbooks and 150 years of argument, the true relationships of the major groups (phyla) of animals remain contentious. In the late 1990s, a series of controversial papers used molecular evidence to propose a radical rearrangement of animal phyla [1-3]. Subsequently, analyses of whole-genome sequences from a few species showed strong, apparently conclusive, support for an older view[4-6]. New work [7] now provides evidence from expanded data sets that supports the newer evolutionary tree, and also shows why whole-genome data sets can lead phylogeneticists seriously astray. 
2) Traditional trees group together phyla of bilaterally symmetrical animals that possess a body cavity lined with mesodermal tissue, the "coelom" (for example, the human pleural cavity), as Coelomata. Those without a true coelom are classified as Acoelomata (no coelom) and Pseudocoelomata (a body cavity not lined by mesoderm). We call this tree the A-P-C hypothesis. Under A-P-C, humans are more closely related to the fruitfly Drosophila melanogaster than either is to the nematode roundworm Caenorhabditis elegans[5,6]. 3) In contrast, the new trees [1-3,7] suggest that the basic division in animals is between the Protostomia and Deuterostomia (a distinction based on the origin of the mouth during embryo formation). Humans are deuterostomes, but because flies and nematodes are both protostomes they are more closely related to each other than either is to humans. The Protostomia can be divided into two "superphyla": Ecdysozoa (animals that undergo ecdysis or moulting, including flies and nematodes) and Lophotrochozoa (animals with a feeding structure called the lophophore, including snails and earthworms). We call this tree the L-E-D hypothesis. In this new tree, the coelom must have arisen more than once, or have been lost from some phyla. 4) Molecular analyses have been divided in their support for these competing hypotheses. Trees built using single genes from many species tend to support L-E-D, but analyses using many genes from a few complete genomes support A-P-C [5,6]. The number of species represented in a phylogenetic study can have two effects on tree reconstruction. First, without genomes to represent most animal phyla, genome-based trees provide no information on the placement of the missing taxonomic groups. Current genome studies do not include any members of the Lophotrochozoa. More notably, if a species' genome is evolving rapidly, tree reconstruction programs can be misled by a phenomenon known as long-branch attraction. 
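The long-branch artefact lends itself to a small simulation. The sketch below is illustrative only: four taxa, binary characters, and made-up branch lengths that place two taxa in the classic "Felsenstein zone"; none of the numbers are taken from the studies discussed here.

```python
import random

random.seed(0)

# Minimal sketch of long-branch attraction. True unrooted tree: (A,B)|(C,D),
# with A and C on long branches (high probability of character change) and
# B, D and the internal branch short. Branch lengths are assumptions for
# illustration, not estimates from any real data set.
LONG, SHORT = 0.40, 0.05   # per-branch probability that a binary character flips

def evolve(state, p):
    """Return the descendant state after a branch with flip probability p."""
    return 1 - state if random.random() < p else state

def simulate_site():
    u = 0                    # state at the internal node joining A and B
    a = evolve(u, LONG)      # A: long branch
    b = evolve(u, SHORT)     # B: short branch
    v = evolve(u, SHORT)     # internal branch to the node joining C and D
    c = evolve(v, LONG)      # C: long branch
    d = evolve(v, SHORT)     # D: short branch
    return a, b, c, d

# For four taxa, parsimony draws support only from sites where two taxa share
# one state and the other two share the other; tally which split each supports.
support = {"AB|CD": 0, "AC|BD": 0, "AD|BC": 0}
for _ in range(20000):
    a, b, c, d = simulate_site()
    if a == b and c == d and a != c:
        support["AB|CD"] += 1        # the true grouping
    elif a == c and b == d and a != b:
        support["AC|BD"] += 1        # the two long branches grouped together
    elif a == d and b == c and a != b:
        support["AD|BC"] += 1

print(support)
```

In this toy setting the spurious AC|BD split collects several times more parsimony-informative sites than the true AB|CD split, and adding more sites only hardens the error -- the signature of a systematic artefact rather than sampling noise.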
5) In long-branch attraction, independent but convergent changes (homoplasies) on long branches are misconstrued as "shared derived" changes, causing artefactual clustering of species with long branches. Because these artefacts are systematic, confidence in them grows as more data are included, and thus genome-scale analyses are especially sensitive to long-branch attraction. Long branches can arise in two ways. One is when a distantly related organism is used as an "outgroup" to root the tree of the organisms of interest. The other is when one organism of interest has a very different, accelerated pattern of evolution compared with the rest. References (abridged): 1. Aguinaldo, A. M. A. et al. Nature 387, 489-493 (1997) 2. Winnepenninckx, B. et al. Mol. Biol. Evol. 12, 1132-1137 (1995) 3. Adoutte, A., Balavoine, G., Lartillot, N. & de Rosa, R. Trends Genet. 15, 104-108 (1999) 4. Mushegian, A. R., Garey, J. R., Martin, J. & Liu, L. X. Genome Res. 8, 590-598 (1998) 5. Blair, J. E., Ikeo, K., Gojobori, T. & Hedges, S. B. BMC Evol. Biol. 2, 7 (2002) 6. Wolf, Y. I., Rogozin, I. B. & Koonin, E. V. Genome Res. 14, 29-36 (2004) 7. Philippe, H., Lartillot, N. & Brinkmann, H. Mol. Biol. Evol. 22, 1246-1253 (2005) Nature http://www.nature.com/nature -------------------------------- Related Material: EVOLUTION: GENOMES AND THE TREE OF LIFE The following points are made by K.A. Crandall and J.E. Buhay (Science 2004 306:1144): 1) Although we have not yet counted the total number of species on our planet, biologists in the field of systematics are assembling the "Tree of Life" (1,2). The Tree of Life aims to define the phylogenetic relationships of all organisms on Earth. Driskell et al (3) recently proposed a computational method for assembling this phylogenetic tree. These investigators probed the phylogenetic potential of ~300,000 protein sequences sampled from the GenBank and Swiss-Prot genetic databases. From these data, they generated "supermatrices" and then super-trees. 
2) Supermatrices are extremely large data sets of amino acid or nucleotide sequences (columns in the matrix) for many different taxa (rows in the matrix). Driskell et al (3) constructed a supermatrix of 185,000 protein sequences for more than 16,000 green plant taxa and one of 120,000 sequences for nearly 7500 metazoan taxa. This compares with a typical systematics study of, on a good day, four to six partial gene sequences for 100 or so taxa. Thus, the potential data enrichment that comes with carefully mining genetic databases is large. However, this enrichment comes at a cost. Traditional phylogenetic studies sequence the same gene regions for all the taxa of interest while minimizing the overall amount of missing data. With the database supermatrix method, the data overlap is sparse, resulting in many empty cells in the supermatrix, but the total data set is massive. 3) To solve the problem of sparseness, the authors built a "super-tree" (4). The supertree approach estimates phylogenies for subsets of data with good overlap, then combines these subtree estimates into a supertree. Driskell et al (3) took individual gene clusters and assembled them into subtrees, and then looked for sufficient taxonomic overlap to allow construction of a supertree. For example, using 254 genes (2777 sequences and 96,584 sites), the authors reduced the green plant supermatrix to 69 taxa from 16,000 taxa, with an average of 40 genes per taxon and 84% missing sequences! This represents one of the largest data sets for phylogeny estimation in terms of total nucleotide information; but it is the sparsest in terms of the percentage of overlapping data. 4) Yet even with such sparseness, the authors are still able to estimate robust phylogenetic relationships that are congruent with those reported using more traditional methods. 
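The sparseness figures quoted for the green plant supermatrix can be checked with back-of-the-envelope arithmetic. A quick sketch, using only the numbers given in the text (69 taxa, 254 genes, 2777 sequences):

```python
# Back-of-the-envelope check of the green plant supermatrix figures:
# 2777 sequences spread over a matrix of 69 taxa by 254 genes.
taxa = 69
genes = 254
sequences = 2777

cells = taxa * genes                  # taxon-by-gene cells in the supermatrix
genes_per_taxon = sequences / taxa    # average genes sampled per taxon
missing = 1 - sequences / cells       # fraction of empty cells

print(round(genes_per_taxon))         # ~40 genes per taxon
print(round(missing * 100))           # ~84% of cells empty
```

The two derived figures match those reported in the text, which is just to say that "an average of 40 genes per taxon" and "84% missing sequences" are the same fact viewed two ways.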
Computer simulation studies (5) recently showed that, contrary to the prevailing view, phylogenetic accuracy depends more on having sufficient characters (such as amino acids) than on whether data are missing. Clearly, building a super-tree allows for an abundance of characters even though there are many missing entries in the resulting matrix. References (abridged): 1. M. Pagel, Nature 401, 877 (1999) 2. A new NSF program funds computational approaches for "assembling the Tree of Life" (AToL). Total AToL program funding is $13 million for fiscal year 2004. NSF, Assembling the Tree of Life: Program Solicitation NSF 04-526 (www.nsf.gov/pubs/2004/nsf04526/nsf04526.pdf) 3. A. C. Driskell et al., Science 306, 1172 (2004) 4. M. J. Sanderson et al., Trends Ecol. Evol. 13, 105 (1998) 5. J. Wiens, Syst. Biol. 52, 528 (2003) Science http://www.sciencemag.org -------------------------------- Related Material: EVOLUTIONARY BIOLOGY: PHYLOGENETIC TREES AND MICROBES The following points are made by W. Martin and T. M. Embley (Nature 2004 431:134): 1) Charles Darwin (1809-1882) described the evolutionary process in terms of trees, with natural variation producing diversity among progeny and natural selection shaping that diversity along a series of branches over time. But in the microbial world things are different, and various schemes have been devised to take both traditional and molecular approaches to microbial evolution into account. For example, Rivera and Lake(1), based on analysis of whole-genome sequences, call for a radical departure from conventional thinking. 2) Unknown to Darwin, microbes use two mechanisms of natural variation that disobey the rules of tree-like evolution: lateral gene transfer and endosymbiosis. Lateral gene transfer involves the passage of genes among distantly related groups, causing branches in the tree of life to exchange bits of their fabric. 
Endosymbiosis -- one cell living within another -- gave rise to the double-membrane-bounded organelles of eukaryotic cells: mitochondria (the powerhouses of the cell) and chloroplasts. At the endosymbiotic origin of mitochondria, a free-living proteobacterium came to reside within an archaebacterially related host. This event involved the genetic union of two highly divergent cell lineages, causing two deep branches in the tree of life to merge outright. To this day, biologists cannot agree on how often lateral gene transfer and endosymbiosis have occurred in life's history; how significant either is for genome evolution; or how to deal with them mathematically in the process of reconstructing evolutionary trees. The report by Rivera and Lake(1) bears on all three issues: Instead of a tree linking life's three deepest branches (eubacteria, archaebacteria and eukaryotes), they uncover a ring. 3) The ring comes to rest on evolution's sorest spot -- the origin of eukaryotes. Biologists fiercely debate the relationships between eukaryotes (complex cells that have a nucleus and organelles) and prokaryotes (cells that lack both). For a decade, the dominant approach has involved another intracellular structure called the ribosome, which consists of complexes of RNA and protein, and is present in all living organisms. The genes encoding an organism's ribosomal RNA (rRNA) are sequenced, and the results compared with those for rRNAs from other organisms. The ensuing tree(2) divides life into three groups called "domains". The usefulness of rRNA in exploring biodiversity within the three domains is unparalleled, but the proposal for a natural system of all life based on rRNA alone has come increasingly under fire. 4) Ernst Mayr(3), for example, argued forcefully that the rRNA tree errs by showing eukaryotes as sisters to archaebacteria, thereby obscuring the obvious natural division between eukaryotes and prokaryotes at the level of cell organization. 
A central concept here is that of a tree's "root", which defines its most ancient branch and hence the relationships among the deepest-diverging lineages. The eukaryote-archaebacteria sister-grouping in the rRNA tree hinges on the position of the root. The root was placed on the eubacterial branch of the rRNA tree based on phylogenetic studies of genes that were duplicated in the common ancestor of all life(2). But the studies that advocated this placement of the root on the rRNA tree used, by today's standards, overly simple mathematical models and lacked rigorous tests for alternative positions(4). 5) One discrepancy is already apparent in analyses of a key data set used to place the root, an ancient pair of related proteins, called elongation factors, that are essential for protein synthesis(5). Although this data set places the root on the eubacterial branch, it also places eukaryotes within the archaebacteria, not as their sisters(5). Given the uncertainties of deep phylogenetic trees based on single genes(4), a more realistic view is that we still don't know where the root on the rRNA tree lies and how its deeper branches should be connected. References (abridged): 1. Rivera, M. C. & Lake, J. A. Nature 431, 152-155 (2004) 2. Woese, C., Kandler, O. & Wheelis, M. L. Proc. Natl Acad. Sci. USA 87, 4576-4579 (1990) 3. Mayr, E. Proc. Natl Acad. Sci. USA 95, 9720-9723 (1998) 4. Penny, D., Hendy, M. D. & Steel, M. A. in Phylogenetic Analysis of DNA Sequences (eds Miyamoto, M. M. & Cracraft, J.) 155-183 (Oxford Univ. Press, 1991) 5. Baldauf, S., Palmer, J. D. & Doolittle, W. F. Proc. Natl Acad. Sci. 
USA 93, 7749-7754 (1996) Nature http://www.nature.com/nature From checker at panix.com Mon May 30 01:41:24 2005 From: checker at panix.com (Premise Checker) Date: Sun, 29 May 2005 21:41:24 -0400 (EDT) Subject: [Paleopsych] SW: Selection and the Origin of Species Message-ID: Evolutionary Biology: Selection and the Origin of Species http://scienceweek.com/2005/sw050603-4.htm The following points are made by A.Y. Albert and D. Schluter (Current Biology 2005 15:R283): 1) Why are there so many species on earth? Answering this question requires an understanding of how species form. An obvious place to start looking for answers is Darwin's ON THE ORIGIN OF SPECIES BY MEANS OF NATURAL SELECTION (1859). But his title is deceptive: Darwin's book is about adaptation and the origin of varieties, and it has surprisingly little to say about selection and the origin of species -- that "mystery of mysteries". 2) To be fair to Darwin, it was not for another 80 years or so that the modern view of the species was developed. The "biological species concept" defines a species as one or more populations of potentially interbreeding organisms that are reproductively isolated from other such groups. Humans and chimps are today separate species not only because we are genetically and phenotypically distinct, but because we are reproductively isolated. Neither finds the other attractive when choosing a mate ("premating isolation") and, very likely, hybrids are inviable or sterile ("postmating isolation"). Reproductive isolation is therefore the most salient evolved feature of a species, at least in sexual organisms. Even "good" species may hybridize once in a while, but to meet the species criterion the flow of genes between them must be negligible. The study of speciation is therefore the study of how reproductive isolation evolves, premating or postmating, between populations. 
3) Natural selection is the differential survival or reproductive success of individuals differing in phenotype within a population. Sexual selection, by contrast, is the differential mating success of phenotypically different individuals. These two processes are the most potent drivers of evolutionary change within populations. 4) Our understanding of the process of speciation has increased greatly since Darwin first proposed a central role for natural selection. Much of what we now know has come from research conducted over the past two decades. The emerging picture is that speciation results from the same forces responsible for most change within species: natural and sexual selection. Nonetheless, there are still many areas that require investigation. The 'top-down' or phenotypic approach to studying speciation has found evidence for selection on ordinary phenotypic characters shown also to underlie premating and postmating isolation. This approach has yielded little, however, about the genetic basis of reproductive isolation. For example, we do not yet know whether species differences rest on many genes of small phenotypic effect or on a few genes of large effect in causing divergence and reproductive isolation. This has made it difficult to pinpoint exactly how natural selection has led to divergence in most cases. Recent studies of speciation in monkeyflowers and other taxa are helping to overcome this gap. References (abridged): 1. Coyne, J.A. and Orr, H.A. (2004). Speciation. Sinauer, Sunderland, MA 2. Orr, H.A., Masly, J.P., and Presgraves, D.C. (2004). Speciation genes. Curr. Opin. Genet. Dev. 14, 675-679 3. Panhuis, T.M., Butlin, R., Zuk, M., and Tregenza, T. (2001). Sexual selection and speciation. Trends Ecol. Evol. 16, 364-371 4. Rieseberg, L.H. (1997). Hybrid origins of plant species. Annu. Rev. Ecol. Syst. 28, 359-389 5. Schluter, D. (2000). The Ecology of Adaptive Radiation. 
Oxford University Press, Oxford Current Biology http://www.current-biology.com -------------------------------- Related Material: ECOLOGY: ON SPECIES DIVERSITY IN THE TROPICS The following points are made by S.L. Pimm and J.H. Brown (Science 2004 304:831): 1) Pointed disagreements persist about why the tropics have more species than other latitudes(1). The many hypotheses (2-4) reflect three deeply different approaches. Two date back to the time of Alfred Russel Wallace (1823-1913) (2): One stresses ecological processes, such as a location's temperature and rainfall, the other historical factors, such as whether a region was covered in ice during recent glaciations. The third approach is a newcomer that explains species richness as a simple, statistical consequence of the observation that some species have large geographical ranges, whereas others have small ranges. This explanation echoes a classic debate about the patterns of communities (5). By "pattern", ecologists mean such features as how many species a community houses, or how similar those species are morphologically. 2) Most species live in the tropics and, in particular, within moist forests. Why do warm, wet places generate diversity? "There are more niches," goes one argument, "as demonstrated by there being more species to fill them," goes its circular conclusion. Warm, wet places are proposed to be more productive and to support more individuals, which in turn permit more species to coexist. Unfortunately, tropical richness increases much faster than expected with the increase in individuals. 3) Perhaps species are preserved longer or are born more frequently in the tropics. Wallace (2) suggested that the tropics avoided the devastation of periodic ice ages. Higher birthrates would fill the tropics with "young species" -- those with many "siblings" in the same genus. The tropics, however, simply have more of every taxonomic level. More than half of all bird families are tropical. 
4) Based on the excellent fossil record for marine bivalves, the tropics are both the primary source of diversity and the accumulator of species. Species appear in the tropics, then expand outwards. High latitudes are a "sink" -- that is, places from which species disappear forever. Tropical oceans house enormous diversity today because it is here that the geographic ranges of old and mostly widespread species overlap with those of young and spatially restricted ones. 5) Proponents of the third approach ask us to imagine a child's pencil box wide enough for many pencils. Barely used pencils -- analogous to species with large ranges -- will span almost the entire length of the box. Other pencils will be worn to mere stubs -- analogous to species with small ranges. The length of the box is any well-defined gradient. Now shake the box end-to-end to randomize the pencils. Pencils larger than half the box's length must encompass its middle. With enough long pencils, more pencils overlap at the middle than at the ends -- producing the "mid-domain effect". Such unavoidable constraints generate the expected statistical distribution of numbers of species along the gradient against which to compare empirical patterns. References (abridged): 1. R. D. Keynes, Ed., The Beagle Record: Selections from the Original Pictorial Records and Written Accounts of the Voyage of the H.M.S. Beagle (Cambridge Univ. Press, Cambridge, 1979) 2. A. R. Wallace, Tropical Nature and Other Essays (Macmillan, London 1878) 3. E. R. Pianka, Am. Nat. 100, 33 (1966) 4. M. R. Willig, D. M. Kaufman, R. D. Stevens, Annu. Rev. Ecol. Evol. Syst. 34, 273 (2003) 5. R. Lewin, Science 221, 636 (1983) Science http://www.sciencemag.org -------------------------------- Related Material: ON BIOLOGICAL SPECIES The following points are made by L. Margulis and D. 
Sagan (citation below): 1) Charles Darwin's landmark book The Origin of Species, which presented to scientists and the lay public alike overwhelming evidence for the theory of natural selection, ironically never explains where new species come from. 2) Species are names given to extremely similar organisms, whether animals, plants, fungi, or microorganisms. Because we need to identify poisons, predators, shelter materials, fuel, food, and other necessities, we have long bestowed names on living and once-living objects... Until the Renaissance, however, names of live beings varied from place to place and were seldom precisely defined. The confusion of local names and inconsistent descriptions led the Swedish naturalist Carolus von Linne (1707-1778) to bring rigor and international comprehensibility to the descriptions. Since Linnaeus (his Latin name) imposed order on some 10,000 species of live beings, scientists use a first name (the genus -- the larger, more inclusive group) and a second name (the species -- the smaller, less inclusive group) to refer to either live or fossil organisms. 3) Most Linnaean names are Latin or Greek. By today's rules the species and genus names are introduced into the scientific literature with a "diagnosis", which is a brief description of salient properties of the organism: its size, shape, and other aspects of its body (its morphology); its habitat and way of life; and what it has in common with other members of its genus. The diagnosis appears in a published scientific paper that describes the organism to science for the first time. The paper also includes details beyond the diagnosis, called the "description". To be a valid name, not only must the names, diagnosis, and description be published, but a sample of the body of the organism itself must be deposited in a natural history museum, culture collection, herbarium, or other acknowledged repository of biological specimens. 4) Fossils are dead remains, evidence of former life. 
The word comes from "fosse" in French, something dug up from the ground. Fossil species, like the enigmatic trilobite Paradoxides paradoxissimus, are also given names and grouped on the basis of morphological similarities and differences. 5) The word "species" comes from the Latin word "specere", to look at -- like spectacles or special. Everyone, knowingly or not, uses the morphological concept of species -- dogs look like dogs, they are dogs, they are all classified as Canis familiaris. The problems come when we try to name coyotes (Canis latrans), wolves (Canis lupus, gray wolf, or Canis rufus, red wolf), and other closely related animals. Zoologists, those who professionally study animals, have imposed a distinct concept of species, which they call the "biological species concept". Coyotes and dogs in nature do not mate to produce fully fertile offspring. They are "reproductively isolated". The zoological definition of species refers to organisms that can hybridize -- that can mate and produce fertile offspring. Thus organisms that interbreed (like people, or like bulls and cows) belong to the same species. Botanists, who study plants, also find this definition useful. L. Margulis and D. Sagan: Acquiring Genomes: A Theory of the Origins of Species. Basic Books 2002, p.3. From checker at panix.com Mon May 30 01:41:33 2005 From: checker at panix.com (Premise Checker) Date: Sun, 29 May 2005 21:41:33 -0400 (EDT) Subject: [Paleopsych] SW: On Laughter Message-ID: Psychology: On Laughter http://scienceweek.com/2005/sw050603-5.htm The following points are made by Jaak Panksepp (Science 2005 308:5718): 1) Research suggests that the capacity for human laughter preceded the capacity for speech during evolution of the brain. Indeed, neural circuits for laughter exist in very ancient regions of the brain [1] and ancestral forms of play and laughter existed in other animals eons before we humans came along. 
Recent studies in rats, dogs, and chimps [2,3] are providing evidence that laughter and joy may not be uniquely human traits. 2) The capacity to laugh emerges early in child development, and perhaps in mammalian brain-mind evolution as well. Indeed, young children, whose semantic sense of humor is marginal, laugh and shriek abundantly in the midst of their other rough-and-tumble activities. If one looks carefully, laughter is especially evident during chasing, with the chasee typically laughing more than the chaser. As every aspiring comedian knows, success is only achieved if receivers exhibit more laughter than transmitters. The same behavior patterns are evident in the "play panting" of young chimps as they mischievously chase, mouth, and tickle each other [2]. 3) Laughter seems to hark back to the ancestral emotional recesses of our animalian past [3,4]. We know that many other mammals exhibit play sounds, including tickle-induced panting, which resembles human laughter [2,4,5], even though these utterances are not as loud and persistent as our sonographically complex human chuckles. However, it is the discovery of "laughing rats" that could offer a workable model with which to systemically analyze the neurobiological antecedents of human joy [3]. When rats play, their rambunctious shenanigans are accompanied by a cacophony of 50-kHz chirps that reflect positive emotional feelings. Sonographic analysis suggests that some chirps, like human laughs, are more joyous than others. 4) Could sounds emitted by animals during play be an ancestral form of human laughter? If rats are tickled in a playful way, they readily emit these 50-kHz chirps [3]. The tickled rats became socially bonded to the experimenters and were rapidly conditioned to seek tickles. They preferred spending time with other animals that chirped a lot rather than with those that did not [3]. 
Indeed, chirping in rats could be provoked by neurochemically "tickling" dopamine reward circuits in the brain, which also light up during human mirth. Perhaps laughter will provide a new measure for analyzing natural reward/desire circuits in the brain, which are also activated during drug craving. References (abridged): 1. K. Poeck, in Handbook of Clinical Neurology, P. J. Vinken, G. W. Bruyn, Eds. (North Holland, Amsterdam, 1969), vol. 3 2. T. Matsusaka, Primates, 45, 221 (2004) 3. J. Panksepp, J. Burgdorf, Physiol. Behav. 79, 533 (2003) 4. G. M. Burghardt, The Genesis of Animal Play (MIT Press, Cambridge, MA, 2005) 5. R. R. Provine, Laughter (Viking, New York, 2000) Science http://www.sciencemag.org -------------------------------- Related Material: ANIMAL BEHAVIOR: ON ANTHROPOMORPHISM The following points are made by Clive D. Wynne (Nature 2004 428:606): 1) The complexity of animal behavior naturally prompts us to use terms that are familiar from everyday descriptions of our own actions. Charles Darwin (1809-1882) used mentalistic terms freely when describing, for example, pleasure and disappointment in dogs; the cunning of a cobra; and sympathy in crows. Darwin's careful anthropomorphism, when combined with meticulous description, provided a scientific basis for obvious resemblances between the behavior and psychology of humans and other animals. It raised few objections. 2) The 1890s saw a strong reaction against ascribing conscious thoughts to animals. In the UK, the canon of Conwy Lloyd Morgan (1852-1936) forbade the explanation of animal behavior with "a higher psychical faculty" than demanded by the data. In the US, Edward Thorndike (1874-1949) advocated replacing the use of anecdotes in the study of animal behavior with controlled experiments. He argued that when studied in controlled and reproducible environments, animal behavior revealed simple mechanical laws that made mentalistic explanations unnecessary. 
3) This rejection of anthropomorphism was one of the few founding principles of behaviorism that survived the rise of ethological and cognitive approaches to studying animal behavior. But after a century of silence, recent decades have seen a resurgence of anthropomorphism. This movement was led by ethologist Donald Griffin, famous for his discovery of bat sonar. Griffin argued that the complexity of animal behavior implies conscious beliefs and desires, and that an anthropomorphic explanation can be more parsimonious than one built solely on behavioral laws. Griffin postulated, "Insofar as animals have conscious experiences, this is a significant fact about their nature and their lives." Animal communication particularly impressed Griffin as implying animal consciousness. 4) Griffin has inspired several researchers to develop ways of making anthropomorphism into a constructive tool for understanding animal behavior. Gordon Burghardt was keen to distinguish the impulse that prompts children to engage in conversations with the family dog (naive anthropomorphism) from "critical anthropomorphism", which uses the assumption of animal consciousness as a "heuristic method to formulate research agendas that result in publicly verifiable data that move our understanding of behavior forward." Burghardt points to the death-feigning behavior of snakes and possums as examples of complex and apparently deceitful behaviors that can best be understood by assuming that animals have conscious states. 5) But anthropomorphism is not a well-developed scientific system. On the contrary, its hypotheses are generally nothing more than informal folk psychology, and may be of no more use to the scientific psychologist than folk physics to a trained physicist. Although anthropomorphism may on occasion be a source of useful hypotheses about animal behavior, acknowledging this does not concede the general utility of an anthropomorphic approach to animal behavior.(1-4) References: 1. 
Blumberg, M. S. & Wasserman, E. A. Am. Psychol. 50, 133-144 (1995) 2. De Waal, F. B. M. Phil. Top. 27, 255-280 (1999) 3. Mitchell, R. W. et al. Anthropomorphism, Anecdotes and Animals (State Univ. New York Press, New York, 1997) 4. Wynne, C. D. L. Do Animals Think? (Princeton Univ. Press, Princeton, New Jersey, 2004) Nature http://www.nature.com/nature

--------------------------------

Related Material: ON ANIMAL SELF-AWARENESS

The following points are made by Marc Bekoff (Nature 2002 419:255):

1) Researchers are interested in animal awareness because they are curious to discover what animals might know about themselves. There are, however, long-held and polarized views about the degree of self-awareness in animals. Some people believe that only great apes have "rich" notions of self -- knowing who they are and/or having a "theory of mind", which means being able to infer the states of mind of others -- whereas others argue that it is methodologically too difficult to address this question because animal (like human) minds are subjective and private. Many in this latter category do not attribute any sense of self to animals other than humans, and some, dismissing behavioral and neurobiological research on animal cognition, wonder whether animals are conscious of anything at all.

2) What might animals know about themselves? Most studies of animal self-awareness have been narrowly paradigm-driven. The "red spot" technique was first used by Gordon Gallup to study animal self-awareness in chimpanzees; it and variations have been used on great apes and monkeys, as well as on a few dolphins and elephants. For primates, a spot is placed on the forehead of an anesthetized individual and self-directed movements towards the spot are scored after the animal awakens and catches sight of itself in a mirror, a high score indicating the presence of some degree of self-awareness.
But in some cases, the data are derived from tests on small numbers of individuals, many of whom fail it because they do not make self-directed movements towards the spot. Those who pass the test might not be representative of wild relatives because they have had extensive human contact and previous experience with mirrors, factors that might influence their trainability and willingness to use a mirror. Those who fail the test might show some sense of 'self' in other contexts, and other individual differences might also play a role.

3) The concept of animal self-awareness remains open to different interpretations, but we will probably learn more about the mysteries of "self" and "body-ness" by using non-invasive neuroimaging techniques in combination with cognitive ethological studies. If we look at "self-awareness" as "body-awareness", we might also discover more about how animals think and the perceptual and neurobiological processes underlying various cognitive capacities. Darwin's ideas about evolutionary continuity, together with empirical data ("science sense") and common sense, caution against the unyielding claim that humans -- and perhaps other great apes and cetaceans -- are the only species in which some sense of self has evolved.(1-5)

References (abridged): 1. Bekoff, M. Minding Animals: Awareness, Emotions, and Heart (Oxford Univ. Press, New York & London, 2002). 2. Bekoff, M., Allen, C. & Burghardt, G. M. (eds) The Cognitive Animal: Empirical and Theoretical Perspectives on Animal Cognition (MIT Press, Cambridge, Massachusetts, 2002); see especially essays on self-awareness by Gallup, G. G., Anderson, J. R. & Shillito, D. J.; Mitchell, R. W.; Shumaker, R. W. & Swartz, K. B. 3. Mitchell, R. W. in Handbook of Self and Identity (eds Leary, M. R. & Tangney, J.) 567-593 (Guilford, New York, 2002). 4. Reiss, D. Nature 418, 369-370 (2002). 5. Rilling, J. K. et al. Neuron 35, 395-405 (2002).
Nature http://www.nature.com/nature From checker at panix.com Mon May 30 01:41:43 2005 From: checker at panix.com (Premise Checker) Date: Sun, 29 May 2005 21:41:43 -0400 (EDT) Subject: [Paleopsych] SW: On Social Signals in Rodents Message-ID: Animal Behavior: On Social Signals in Rodents http://scienceweek.com/2005/sw050527-2.htm The following points are made by Leslie B. Vosshall (Current Biology 2005 15:R255): 1) Animals use odors to communicate precise information about themselves to other members of their species. For instance, domesticated dogs intently sample scent marks left by other dogs, allowing them to determine the age, gender, sexual receptivity, and exact identity of the animal that left the mark behind.[1,2] Social communication in rodents is equally robust.[3-5] Male hamsters efficiently choose new female sexual partners over old ones, a phenomenon known as the "Coolidge Effect". The onset of estrus and successful fetal implantation in female mice are both modulated by male odors. Mice have the ability to discriminate conspecifics that differ in MHC odortype and can determine whether others of their species are infected by viruses or parasites, presumably a skill of use in selecting a healthy mate. 2) Such social odors are typically produced in urine or secreted from scent glands distributed over the body. Both volatile and non-volatile cues are known to be produced. The accessory olfactory system, comprising the vomeronasal organ and the accessory olfactory bulb, responds largely to non-volatile cues, while the main olfactory system receives volatile signals. Although mammalian pheromones are classically thought to activate the accessory olfactory system, several newly described pheromones are volatile and may act through the main olfactory system. 
Chemical signals have a number of advantages in social communication over signals that act on other sensory modalities: they are energetically cheap to produce, often being metabolic by-products; they are volatile and can therefore be broadcast within a large territory; and they can continue to emit signal after the animal has moved to a new location. 3) What are the specific, behaviorally active chemical signals present in urine? What sensory neurons respond to these cues? Can a single such compound be behaviorally active? A recent paper by Lin et al [6] succeeds spectacularly in answering all three questions. The authors applied chemistry, electrophysiology and behavior to this problem, and identified biologically active volatiles in male urine that activate both male and female main olfactory bulb mitral cells. They have elucidated the chemical identity of a single such male-specific urine component that both activates olfactory bulb mitral cells and elicits behaviors in female mice. The new study builds on earlier work from other laboratories that described regions in the olfactory bulb activated upon exposure to whole mouse urine. References (abridged): 1. Bekoff, M. (2001). Observations of scent-marking and discriminating self from others by a domestic dog (Canis familiaris): tales of displaced yellow snow. Behav. Processes 55, 75-79 2. Mekosh-Rosenbaum, V., Carr, W.J., Goodwin, J.L., Thomas, P.L., D'Ver, A., and Wysocki, C.J. (1994). Age-dependent responses to chemosensory cues mediating kin recognition in dogs (Canis familiaris). Physiol. Behav. 55, 495-499 3. Dulac, C. and Torello, A.T. (2003). Molecular detection of pheromone signals in mammals: from genes to behavior. Nat. Rev. Neurosci. 4, 551-562 4. Novotny, M.V. (2003). Pheromones, binding proteins and receptor responses in rodents. Biochem. Soc. Trans. 31, 117-122 5. Restrepo, D., Arellano, J., Oliva, A.M., Schaefer, M.L., and Lin, W. (2004). 
Emerging views on the distinct but related roles of the main and accessory olfactory systems in responsiveness to chemosensory signals in mice. Horm. Behav. 46, 247-256 6. Lin, D.Y., Zhang, S.Z., Block, E., and Katz, L.C. (2005). Encoding social signals in the mouse main olfactory bulb. Nature 2005 Feb 20[Epub ahead of print] PMID: 15724148 Current Biology http://www.current-biology.com -------------------------------- Related Material: ANIMAL BEHAVIOR: ON ANIMAL PERSONALITIES The following points are made by S.R. Dall (Current Biology 2004 14:R470): 1) Psychologists recognize that individual humans can be classified according to how they differ in behavioral tendencies [1]. Furthermore, anyone who spends time watching non-human animals will be struck by how, even within well-established groups of the same species, individuals can be distinguished readily by their behavioral predispositions. Evolutionary biologists have traditionally assumed that individual behavioral differences within populations are non-adaptive "noise" around (possibly) adaptive average behavior, though since the 1970s it has been considered that such differences may stem from competition for scarce resources [2]. 2) It is becoming increasingly evident, however, that across a range of taxa -- including primates and other mammals as well as birds, fish, insects and cephalopod molluscs -- behavior varies non-randomly among individuals along particular axes [3]. Comparative psychologists and behavioral biologists [3-5] are documenting that individual animals differ consistently in their aggressiveness, activity, exploration, risk-taking, fearfulness and reactivity, suggesting that such variation is likely to have significant ecological and evolutionary consequences [4,5] and hence be a focus for selection. 
From evolutionary and ecological viewpoints, non-random individual behavioral specializations are coming to define animal personalities [3], although they are also referred to as behavioral syndromes, coping styles, strategies, axes and constructs [3-5].

3) The evolution of animal personality differences is poorly understood. Ostensibly, it makes sense for animals to adjust their behavior to current conditions, including their own physiological condition, which can result in behavioral differences if local conditions vary between individuals. It is unclear, however, why such differences should persist when circumstances change. In fact, even in homogeneous environments interactions between individuals can favor the adoption of alternative tactics. For instance, competition for parental attention in human families may encourage later-born children to distinguish themselves by rebelling. In the classic Hawk-Dove game model of animal conflicts over resources, if getting into escalated fights costs more than the resource is worth, a stable mix of pacifist (dove) and aggressive (hawk) tactics can evolve. This is because, as hawks become common, it pays to avoid fighting and play dove, and vice versa.

4) There are, however, two ways in which evolutionarily stable mixtures of tactics can be maintained by such frequency-dependent payoffs: individuals can adopt tactics randomly with a fixed probability that generates the predicted mix in a large population; alternatively, fixed proportions of individuals can play tactics consistently. Only the latter would account for animal personality differences. It turns out that consistent hawks and doves can be favored if the outcomes of fights are observed by future opponents and influence their decisions -- being persistently aggressive will then discourage fights, as potential opponents will expect to face a costly contest if they challenge for access to the resource.
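The frequency-dependent logic of the Hawk-Dove game described above can be checked numerically. The sketch below uses the standard textbook payoffs (V = value of the resource, C = cost of an escalated fight); the specific numbers are illustrative, not from the text. When C > V, the two tactics earn identical payoffs once a fraction V/C of the population plays hawk, which is the stable mix the digest refers to:

```python
# Standard Hawk-Dove payoffs: a hawk meeting a hawk escalates and on
# average earns (V - C) / 2; a hawk takes the whole resource from a dove;
# two doves share it.  V and C are illustrative values.

def expected_payoffs(p, V, C):
    """Expected payoff to a hawk and to a dove when a fraction p of
    opponents plays hawk."""
    hawk = p * (V - C) / 2 + (1 - p) * V
    dove = p * 0.0 + (1 - p) * V / 2
    return hawk, dove

V, C = 2.0, 4.0      # escalated fights cost more than the resource is worth
p_star = V / C       # predicted stable fraction of hawks (here 0.5)

hawk, dove = expected_payoffs(p_star, V, C)
print(hawk, dove)    # equal payoffs at the stable mix

# Below p*, hawks out-earn doves (so hawk play spreads); above p*, the
# reverse -- which is what keeps the mix stable.
low = expected_payoffs(0.25, V, C)
high = expected_payoffs(0.75, V, C)
print(low[0] > low[1], high[0] < high[1])
```

As the digest notes, this calculation alone cannot distinguish the two ways the mix can be realized: every individual randomizing with probability p*, or fixed proportions of consistent hawks and doves (the "personality" interpretation).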
At least in theory, therefore, personality differences can evolve when the fitness consequences of behavior depend both on an individual's behavioral history and the behavior of other animals.

References (abridged): 1. Pervin, L. and John, O.P. (1999). Handbook of Personality. (Guilford Press) 2. Wilson, D.S. (1998). Adaptive individual differences within single populations. Philos. Trans. R. Soc. Lond. B Biol. Sci 353, 199-205 3. Gosling, S.D. (2001). From mice to men: what can we learn about personality from animal research. Psychol. Bull. 127, 45-86 4. Sih, A., Bell, A.M., Johnson, J.C. and Ziemba, R.E. (2004). Behavioral syndromes: an integrative review. Q. Rev. Biol. in press. 5. Sih, A., Bell, A.M. and Johnson, J.C. (2004). Behavioral syndromes: an ecological and evolutionary overview. Trends Ecol. Evol. in press. Current Biology http://www.current-biology.com

From shovland at mindspring.com Mon May 30 16:50:44 2005 From: shovland at mindspring.com (Steve Hovland) Date: Mon, 30 May 2005 09:50:44 -0700 Subject: [Paleopsych] nuclear magnetic resonance spectroscopy Message-ID: <01C564FD.0D1FD2A0.shovland@mindspring.com> http://www.ch.ic.ac.uk/local/organic/nmr.html Steve Hovland www.stevehovland.net

From shovland at mindspring.com Mon May 30 20:37:53 2005 From: shovland at mindspring.com (Steve Hovland) Date: Mon, 30 May 2005 13:37:53 -0700 Subject: [Paleopsych] Bruce Ames PhD- citations on nutrition and aging Message-ID: <01C5651C.C8BDD1A0.shovland@mindspring.com> http://www.chori.org/investigators/ames_publications.html

From checker at panix.com Mon May 30 22:12:59 2005 From: checker at panix.com (Premise Checker) Date: Mon, 30 May 2005 18:12:59 -0400 (EDT) Subject: [Paleopsych] NYT: Op-Ed: Got Toxic Milk? Message-ID: Got Toxic Milk? New York Times Op-Ed, 5.5.30 http://www.nytimes.com/2005/05/30/opinion/30wein.html By LAWRENCE M. WEIN Stanford, Calif. WHILE the anthrax scare at Washington post offices this year proved to be a false alarm, it was a reminder of how vulnerable Americans are to biological terrorism. In general, two threats are viewed as the most dangerous: anthrax, which is as durable as it is deadly, and smallpox, which is transmitted very easily and kills 30 percent of its victims.
But there is a third possibility that, while it seems far more mundane, could be just as deadly: terrorists spreading a toxin that causes botulism throughout the nation's milk supply. Why milk? In addition to its symbolic value as a target - a glass of milk is an icon of purity and healthfulness - Americans drink more than 6 billion gallons of it a year. And because it is stored in large quantities at centralized processing plants and then shipped across country for rapid consumption, it is a uniquely valuable medium for a bioterrorist. For the last year, a graduate student, Yifan Liu, and I have been studying how such an attack might play out, and here is the situation we consider most likely: a terrorist, using a 28-page manual called "Preparation of Botulism Toxin" that has been published on several jihadist Web sites and buying toxin from an overseas black-market laboratory, fills a one-gallon jug with a sludgy substance containing a few grams of botulin. He then sneaks onto a dairy farm and pours its contents into an unlocked milk tank, or he dumps it into the tank on a milk truck while the driver is eating breakfast at a truck stop. This tainted milk is eventually piped into a raw-milk silo at a dairy-processing factory, where it is thoroughly mixed with other milk. Because milk continually flows in and out of silos, approximately 100,000 gallons of contaminated milk go through the silo before it is emptied and cleaned (the factories are required to do this only every 72 hours). While the majority of the toxin is rendered harmless by heat pasteurization, some will survive. These 100,000 gallons of milk are put in cartons and trucked to distributors and retailers, and they eventually wind up in refrigerators across the country, where they are consumed by hundreds of thousands of unsuspecting people. It might seem hard to believe that just a few grams of toxin, much of it inactivated by pasteurization, could harm so many people. 
But that, in the eye of the terrorists, is the beauty of botulism: just one one-millionth of a gram may be enough to poison and eventually kill an adult. It is likely that more than half the people who drink the contaminated milk would succumb. The other worrisome factor is that it takes a while for botulism to take effect: usually there are no symptoms for 48 hours. So, based on studies of consumption, even if such an attack were promptly detected and the government warned us to stop drinking milk within 24 hours of the first reports of poisonings, it is likely that a third of the tainted milk would have been consumed. Worse, children would be hit hardest: they drink significantly more milk on average than adults, less of the toxin would be needed to poison them and they drink milk sooner after its release from dairy processors because it is shipped directly to schools. And what will happen to the victims? First they will experience gastrointestinal pain, which is followed by neurological symptoms. They will have difficulty seeing, speaking and walking as paralysis sets in. Most of those who reach a hospital and get antitoxins and ventilators to aid breathing would recover, albeit after months of intensive and expensive treatment. But our hospitals simply don't have enough antitoxins and ventilators to deal with such a widespread attack, and it seems likely that up to half of those poisoned would die. As scary as this possibility is, we have actually been conservative in some of our assumptions. 
The concentration of toxin in the terrorists' initial gallon is based on 1980s technology and it's possible they could mix up a more potent brew; there are silos up to four times as large as the one we based our model on, and some feed into several different processing lines that would contaminate more milk; and the assumption that the nationwide alarm could go out within 24 hours of the first reported symptoms is very optimistic (two major salmonella outbreaks in the dairy industry, in 1985 and 1994, went undetected for weeks and sickened 200,000 people).

What can we do to avoid such a horror? First, we must invest in prevention. The Food and Drug Administration has some guidelines - tanks and trucks holding milk are supposed to have locks, two people are supposed to be present when milk is transferred - but they are voluntary. Let's face it: in the hands of a terrorist, a dairy is just as dangerous as a chemical factory or nuclear plant, and voluntary guidelines are not commensurate with the severity of the threat. We need strict laws - or at least more stringent rules similar to those set by the International Organization for Standardization in Geneva and used in many countries - to ensure that our milk supply is vigilantly guarded, from cow to consumer.

Second, the dairy industry should improve pasteurization so that it is far more potent at eliminating toxins.

Finally, and most important, tanks should be tested for toxins as milk trucks line up to unload into the silo. The trucks have to stop to be tested for antibiotic residue at this point anyway, and there is a test that can detect all four types of toxin associated with human botulism that takes less than 15 minutes. Yes, to perform the test four times, once for each toxin, on each truck would cost several cents per gallon. But in the end it comes down to a simple question: isn't the elimination of this terrifying threat worth a 1 percent increase in the cost of a carton of milk?
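The scale of the threat comes down to simple dilution arithmetic, which can be sanity-checked with a short back-of-envelope script. Only three figures come from the op-ed ("a few grams" released, roughly 100,000 gallons mixed in a silo, and a lethal dose of about one-millionth of a gram); the pasteurization survival fraction and the serving size are illustrative assumptions, not data from the article:

```python
# Rough dilution arithmetic for the milk-attack scenario.  The release
# amount, silo volume, and lethal dose paraphrase the op-ed; the
# pasteurization survival fraction and glass size are assumptions.

GRAMS_RELEASED = 5.0            # "a few grams" poured into a farm tank
GALLONS_MIXED = 100_000         # milk passing through the silo before cleaning
SURVIVES_PASTEURIZATION = 0.1   # assume 90% of the toxin is inactivated
LETHAL_DOSE_GRAMS = 1e-6        # "one one-millionth of a gram"
GLASS_FRACTION_OF_GALLON = 1 / 16   # an 8-ounce glass

active_per_gallon = GRAMS_RELEASED * SURVIVES_PASTEURIZATION / GALLONS_MIXED
doses_per_gallon = active_per_gallon / LETHAL_DOSE_GRAMS
doses_per_glass = doses_per_gallon * GLASS_FRACTION_OF_GALLON

print(doses_per_gallon)   # lethal doses remaining in each gallon
print(doses_per_glass)    # fraction of a lethal dose in a single glass
```

Under these assumptions each gallon still carries several lethal doses even after 90 percent of the toxin is destroyed, which illustrates why so small a release, diluted across 100,000 gallons, could still endanger an enormous number of consumers.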
One other concern: although milk may be the obvious target, it is by no means the only food product capable of generating tens of thousands of deaths. The government needs to persuade other food-processing industries - soft drinks, fruit juices, vegetable juices, processed-tomato products - to study the potential impact of a deliberate botulin release in their supply chains and take steps to prevent and mitigate such an event. Americans are blessed with perhaps the most efficient food distribution network in history, but we must ensure that the system that makes it so easy to cook a good dinner doesn't also make it easy for terrorists to kill us in our homes. Lawrence M. Wein is a professor of management science at Stanford Business School.

From checker at panix.com Mon May 30 22:13:11 2005 From: checker at panix.com (Premise Checker) Date: Mon, 30 May 2005 18:13:11 -0400 (EDT) Subject: [Paleopsych] NYT: British Medical Experts Campaign for Long, Pointy Knife Control Message-ID: British Medical Experts Campaign for Long, Pointy Knife Control [Switchblades have long been illegal in New York State, so precedents for knife control in this country are already in place.] http://www.nytimes.com/2005/05/27/science/27knife.html By JOHN SCHWARTZ

Warning: Long, pointy knives may be hazardous to your health. The authors of an editorial in the latest issue of the British Medical Journal have called for knife reform. The editorial, "Reducing knife crime: We need to ban the sale of long, pointed kitchen knives," notes that the knives are being used to stab people as well as roasts and the odd tin of Spam. The authors of the essay - Drs. Emma Hern, Will Glazebrook and Mike Beckett of the West Middlesex University Hospital in London - called for laws requiring knife manufacturers to redesign their wares with rounded, blunt tips.
The researchers noted that the rate of violent crime in Britain rose nearly 18 percent from 2003 to 2004, and that in the first two weeks of 2005, 15 killings and 16 nonfatal attacks involved stabbings. In an unusual move for a scholarly work, the researchers cited a January headline from The Daily Express, a London tabloid: "Britain is in the grip of knives terror - third of murder victims are now stabbed to death." Dr. Hern said that "we came up with the idea and tossed it into the pot" to get people talking about crime reduction. "Whether it's a sensible solution to this problem or not, I'm not sure." In the United States, where people are more likely to debate gun control than knife control, partisans on both sides sounded amused. Wayne LaPierre, executive vice president of the National Rifle Association, asked, "Are they going to have everybody using plastic knives and forks and spoons in their own homes, like they do in airlines?" Peter Hamm, a spokesman for the Brady Campaign to Prevent Gun Violence, which supports gun control, joked, "Can sharp stick control be far behind?" He said people in his movement were "envious" of England for having such problems. "In America, we can't even come to an agreement that guns are dangerous and we should make them safer," he said. The authors of the editorial argued that the pointed tip is a vestigial feature from less mannered ages, when people used it to spear meat. They said that they interviewed 10 chefs in England, and that "none gave a reason why the long, pointed knife was essential," though short, pointed knives were useful. An American chef, however, disagreed with the proposal. "This is yet another sign of the coming apocalypse," said Anthony Bourdain, the executive chef at Les Halles and the author of "Kitchen Confidential." A knife, he said, is a beloved tool of the trade, and not a thing to be shaped by bureaucrats. 
A chef's relationship with his knives develops over decades of training and work, he said, adding, "Its weight, its shape - these are all extensions of our arms, and in many ways, our personalities." He compared the editorial to efforts to ban unpasteurized cheese. "Where there is no risk," he said, "there is no pleasure." From checker at panix.com Mon May 30 22:13:16 2005 From: checker at panix.com (Premise Checker) Date: Mon, 30 May 2005 18:13:16 -0400 (EDT) Subject: [Paleopsych] Guardian: East is east - get used to it Message-ID: East is east - get used to it http://www.guardian.co.uk/print/0,3858,5198003-103677,00.html As Japan has shown, and China will too, the west's values are not necessarily universal Martin Jacques Friday May 20, 2005 Not so long ago, Japan was the height of fashion. Then came the post-bubble recession and it rapidly faded into the background, condemned as yesterday's story. The same happened to the Asian tigers: until 1997 they were the flavour of the month, but with the Asian financial crisis they sank into relative obscurity. No doubt the same fate will befall China in due course, though perhaps a little less dramatically because of its sheer size and import. These vagaries tell us nothing about east Asia, but describe the fickleness of western attitudes towards the region's transformation. A combination of curiosity and a fear of the unknown fuel a swelling interest, and then, when it appears that it was a false alarm, old attitudes of western-centric hubris reassert themselves: the Asian tigers were victims of a crony culture and Japan was simply too Japanese. During Japan's crisis, western - mainly American - witch doctors advised that the only solution was to abandon Japanese customs like lifetime employment and adopt more Anglo-Saxon practices such as shareholder value. 
The age-old western habit of believing that its arrangements - of the neo-liberal variety, in this instance - are always best proved as strong as ever: it is in our genes. The fact that the US was at the time in the early stages of its own bubble might have suggested a little humility was in order. In the event, Japan largely ignored the advice and has emerged from its long, post-bubble recession looking remarkably like it did before the crisis. Japan has long been part of the advanced world. It was the only non-western country to begin its industrialisation in the 19th century, following the Meiji Restoration in 1867. It has the second largest economy and enjoys one of the highest standards of living in the world. By any standards, it is a fully paid-up member of the exclusive club of advanced nations. Yet Japan is quite unlike any western society. In terms of the hardware of modernity - cars, computers, technology, motorways and the rest - Japan is, unsurprisingly, largely familiar. However, in terms of social relations - the way in which society works, the values that imbue it - it is profoundly different. Even a casual observer who cannot understand Japanese will almost immediately notice the differences: the absence of antisocial behaviour, the courtesy displayed by the Japanese towards each other, the extraordinary efficiency and orderliness that characterise the stuff of everyday life, from public transport to shopping. For those of a more statistical persuasion, it is reflected in what are, by western standards, extremely low crime rates. Not least, it finds expression in the success of Japanese companies. This has wrongly been attributed to an organisational system, namely just-in-time production, which, it was believed, could be imitated and applied with equal effect elsewhere. 
But the roots of the success of a company such as Toyota lie much deeper: in the social relations that typify Japanese society and that allow a very different kind of participation by the workforce in comparison with the west. As a result, non-Japanese companies have found it extremely difficult to copy these ideas with anything like the same degree of success. So how do we explain the differences between Japan and the west? The heart of the matter lies in their different ethos. Individualism animates the west, now more than ever. In contrast, the organising principle of Japanese society is a sense of group identity, a feeling of being part of a much wider community. Compared with western societies, Japan is a dense lattice-work of responsibilities and obligations within the family, the workplace, the school and the community. As Deepak Lal argues in his book Unintended Consequences, the Japanese sense of self is quite distinct from the western notion of individualism. As a result, people behave in very different ways and have very different expectations, and their behaviour is informed by very different values. This finds expression in a multitude of ways. Following the recent train crash in which 106 people died, the president of the operating company, JR West, was forced to resign: this is the normal and expected response of a company boss when things go seriously wrong. Income differentials within large corporations are much less than in their Anglo-Saxon equivalents, because it is group cohesion rather than individual ego that is most valued. Even during the depth of the recession, the jobless figure never rose much above 5%: it was regarded as wrong to solve a crisis by creating large-scale unemployment. 
Even those who do the more menial tasks - shop assistants, security staff, station attendants and canteen workers - display a pride in their work and a courtesy that is in striking contrast to the surly and resentful attitude prevalent in Britain and other western societies. In a survey conducted by the Japanese firm Dentsu, 68% of Americans and 60% of Britons identified with "a society in which everyone can freely compete according to his/her will and abilities" compared with just 22% of Japanese. In the same survey, only 15% of Japanese agreed with the proposition that "it's all right to break the rules, depending on the circumstances", compared with 37% of Americans and 39% of Britons. This finds rather bizarre expression - to an Englishman at least - in the way pedestrians invariably wait for the pedestrian lights to turn to green even when there is not the slightest sign of an approaching vehicle. Even the preferred choice of car reflects the differing ethos: whereas in the US and Britain, the fashionable car of choice is a 4x4 - the very embodiment of a "bugger you and the environment" individualism - the equivalent in Japan is the tiny micro-car, much smaller than a Ford Ka - a genre that is neither made nor marketed in the UK. The differences are legion, and not always for the better. Japan, for example, is still blighted by a rigid and traditional sexual division of labour. In a survey on the gender gap published last week by the World Economic Forum, Japan came 38th out of 58 countries, an extraordinarily low ranking for a developed nation. Or take democracy, that hallowed and allegedly universal principle of our age. Japan has universal suffrage, but the idea of alternating parties in government is almost entirely alien. Real power is exercised by factions within the ruling Liberal Democrats rather than by the other political parties, which, as a consequence, are largely marginal. 
We should not be surprised: in a society based on group culture rather than individualism, "democracy" is bound to be a very different kind of animal. Far from conforming to the western model then, Japan remains profoundly different. And so it has always been. After the Meiji Restoration it deliberately sought to engineer a modernisation that was distinctively Japanese, drawing from its own traditions as well as borrowing from the west. Globalisation notwithstanding, this is still strikingly the case. Indeed, Japan remains unusually and determinedly impervious to many of the pressures of globalisation.

The lesson here, perhaps, is that we should expect the same to be true, in some degree or another, of the Asian tigers - and ultimately China too. That is not to say they will end up looking anything like Japan: China and Japan, for example, are in many respects chalk and cheese. But they will certainly be very different from the west because, like Japan, they come from very different histories and cultures.

· Martin Jacques is a visiting professor at the International Centre for Chinese Studies at Aichi University in Japan
martinjacques1 at aol.com

From checker at panix.com Mon May 30 22:15:25 2005
From: checker at panix.com (Premise Checker)
Date: Mon, 30 May 2005 18:15:25 -0400 (EDT)
Subject: [Paleopsych] LAT: Postmodern Fog Has Begun to Lift
Message-ID:

Postmodern Fog Has Begun to Lift
http://www.latimes.com/news/opinion/commentary/la-oe-dickstein26may26,0,3489982,print.story?coll=la-news-comment-opinions

In an era of uncertainty, reality makes a comeback.

By Morris Dickstein
May 26, 2005

For most people, "reality," though sometimes elusive, is as palpable as their morning coffee or the gigantic heads carved into Mt. Rushmore. It is simply there. From their point of view, the phrase "get real" means "shape up," and the fabricated thrills of "reality TV" seem appealingly authentic.
But for many contemporary academics, especially those who bought into postmodern theory in the last few decades, the idea of the "real" raises serious problems. Reality depends on those who are perceiving it, on social forces that have conditioned their thinking, and on whoever controls the flow of information that influences them. They believe with Nietzsche that there are no facts, only interpretations. Along with notions like truth or objectivity, or moral concepts of good and evil, there's hardly anything more contested in academia today. Both sides have a point here. No one could survive for a day if he or she really tried to live by the relentless relativism and skepticism preached by postmodernists, in which everything is shadowed by uncertainty or exposed as ideology. But it is also true that the media revolutions of the last century, while they hugely expanded our access to knowledge, created far more effective tools by which that knowledge could be manipulated. In this conflict, the master strategists in the White House, though they claim to stand by traditional values, are very much in the camp of postmodernism. In the New York Times Magazine last October, for example, a "senior advisor" to President Bush told Ron Suskind that journalists and scholars belong to "what we call the reality-based community," devoted to "the judicious study of discernible reality." They have no larger vision, no sense of the openings created by American dominance. "We're an empire now, and when we act, we create our own reality." He might have added that there are many ways to simulate reality: staying on message, for instance, impervious to correction and endlessly reiterating it while saturating the media environment. Ideologues, whether they're politicians or intellectuals, dismiss any appeal to disinterested motives or objective conditions. They see reality itself, including the electorate, as thoroughly malleable. 
Like the media spin that is its sinister double, postmodernism didn't spring up from nowhere in the 1970s. Today we can look back on the 20th century as an age of turbulent dislocation and uncertainty. Not only did its wars and genocides uproot whole populations, but its philosophic and scientific ideas, from Einstein and Freud to Wittgenstein and Derrida, uprooted centuries of moral and religious ideals. Modern artists -- beginning with Picasso, Stravinsky and Joyce -- reflected these changes by revolutionizing the medium in which they worked, leaving some in their audience exhilarated and others dumbfounded at seeing older forms of representation turned inside out. Postmodern theorists, promoting a fluid sense of identity, were only the latest step in unhinging art and discourse from any stable sense of the real world. Just as political upheaval left people physically insecure and globalization left them economically insecure, postmodernism was part of a complex of changes that left them feeling morally insecure, uncertain about who they were or what they really knew. For some, there was a newfound freedom in all this. But many Americans today, sensing that the foundations of their world have crumbled, feel a deep nostalgia for something solid and real. Surrounded by a media culture, adrift in virtual reality, they seek assurance from their own senses. They turn to what John Dewey called "the quest for certainty." I see evidence of this in my own field of literary studies, which has long been in the vanguard of postmodernism. In his book "After Theory," a widely discussed obituary for decades of obfuscation that he himself had helped to promote, Terry Eagleton mocks "a certain postmodern fondness for not knowing what you think about anything." 
To understand the changes that shook the modern world, my students and colleagues have returned in recent years to long-neglected writers in the American realist tradition, including William Dean Howells, Theodore Dreiser, Stephen Crane, Sinclair Lewis, Edith Wharton and Willa Cather. For readers like me who grew up in the second half of the 20th century on the unsettling innovations of modernism, and who were attuned to its atmosphere of crisis and disillusionment, the firm social compass of these earlier writers has come as a surprise. Like Henry James before them, they saw themselves less as lonely romantic outposts of individual sensibility than as keen observers of society. They described the rough transition from the small town to the city, from rural life to industrial society, from a more homogeneous but racially divided population to a nation of immigrants. They recorded dramatic alterations in religious beliefs, moral values, social and sexual mores and class patterns.

Novels like Dreiser's "Sister Carrie" and Wharton's "House of Mirth" showed how fiction paradoxically could serve fact and provide a more concrete sense of the real world than any other form of writing. This is how most readers have always read novels, not simply for escape, and certainly not mainly for art, but to get a better grasp of the world around them and the world inside them. Now that the overload of theory, like a mental fog, has begun to lift, perhaps professional readers will catch up with them.

_____________________
Morris Dickstein teaches English at the Graduate Center of the City University of New York. His new book, "A Mirror in the Roadway: Literature and the Real World," is just out from Princeton University Press.

From checker at panix.com Mon May 30 22:15:33 2005
From: checker at panix.com (Premise Checker)
Date: Mon, 30 May 2005 18:15:33 -0400 (EDT)
Subject: [Paleopsych] TLS: Edward N. Luttwak: The good in barbed wire
Message-ID:

Edward N. Luttwak: The good in barbed wire
http://www.the-tls.co.uk/this_week/story.aspx?story_id=2110936
5.5.25

BARBED WIRE: An ecology of modernity
Reviel Netz
267pp. | Middletown, CT: Wesleyan University Press. $24.95; distributed in the UK by Eurospan. £17.50. | 0 8195 6719 1

Barbed wire is important in my life - the cattle ranch I run in the Bolivian Amazon could not exist without it. In Britain as in other advanced countries, it is mostly fences of thin unbarbed wire enlivened by a low-voltage current that keep cattle from wandering off, but in the Bolivian Amazon they have no electrical supply to transform down, and in any case the cost, over many perimeter miles, would be prohibitive and the upkeep quite impossible. Ours is a wonderful land of lush savannahs and virgin forest, but it is just not valuable enough to be demarcated by anything more expensive than strands of barbed wire held up by wooden posts driven into the ground.

Invented and patented by Joseph F. Glidden in 1874, an immediate success in mass production by 1876, barbed wire, first of iron and then steel, did much to transform the American West, before doing the same in other prairie lands from Argentina to Australia. Actually, cheap fencing transformed the primordial business of cattle-raising itself. Solid wooden fences or even stone walls can be economical enough for intensive animal husbandry, in which milk and traction as well as meat are obtained by constant labour in stable and field to feed herbivores without the pastures they would otherwise need. Often the animals are tethered or just guarded, without any fences or walls. But in large-scale raising on the prairie or savannah, if there are no fences then the cattle must be herded, and that requires constant vigilance to resist the herbivore instinct of drifting off to feed - and also constant motion. As the animals eat up the vegetation where they are gathered, the entire herd must be kept moving to find more.
That is what still happens in the African savannah of the cattle herdsmen, and what was done in the American West as in other New World prairies, until barbed wire arrived to make ranching possible. One material difference between ranging in open country and ranching is that less labour is needed, because there is less need for vigilance within the fence. Another measurable difference is that cattle can do more feeding to put on weight, instead of losing weight when driven from place to place. But the increased productivity of ranching as opposed to ranging is actually of an entirely different order. African herders must be warriors to protect their cattle from their like as well as from the waning number of animal predators, but chiefly to maintain their reputation for violence which in turn assures their claim to the successive pastures they must have through the seasons. It was almost the same for the ranging cowboys of the American West, and while their own warrior culture was somewhat less picturesque than that of the Nuer or Turkana, it too was replete with the wasted energies of endemic conflict over land, water and sometimes even the cattle itself. Ranchers are not cream puffs either, but they can use their energies more productively because in most places - including the Bolivian Amazon for all its wild remoteness - their fences are property lines secured by the apparatus of the law, which itself can function far more easily among property-owning ranchers than among warrior nomads and rangers. Skills too are different. African herdsmen notoriously love their cattle to perdition but their expertise is all in the finding of pasture and water in semi-arid lands, as well as in hunting and war, and they are not much good at increasing fertility, and hardly try to improve breeds. 
It was the same in the American West, where the inception of today's highly elaborate cattle-raising expertise that makes red meat excessively cheap had to await the stability of ranching, and the replacement of the intrepid ranger by the more productive cowboy. Barbed wire is important therefore, and the story of how it was so quickly produced by automatic machines on the largest scale, efficiently distributed to customers necessarily remote from urban centres, marketed globally almost immediately, and finally used to change landscapes and societies, is certainly very interesting. But for all this, the reader will have to turn to Henry D. and Frances T. McCallum's The Wire That Fenced the West rather than the work at hand, in spite of its enthusiastic dust-jacket encomia from Noam Chomsky ("a deeply disturbing picture of how the modern world evolved"), Paul F. Starrs ("beautifully grim") and Lori Gruen, for whom the book is all about "structures of power and violence". The reason is that Reviel Netz, the author of Barbed Wire: An ecology of modernity, prefers to write of other things. For Netz, the raising of cattle is not about producing meat and hides from lands usually too marginal to yield arable crops, but rather an expression of the urge to exercise power: "What is control over animals? This has two senses, a human gain, and an animal deprivation". To tell the truth, I had never even pondered this grave question, let alone imagined how profound the answer could be. While that is the acquisitive purpose of barbed wire, for Professor Netz it is equally - and perhaps even more - a perversely disinterested expression of the urge to inflict pain, "the simple and unchanging equation of flesh and iron", another majestic phrase, though I am not sure if equation is quite the right word. 
But if that is our ulterior motive, then those of us who rely on barbed-wire fencing for our jollies are condemned to be disappointed, because cattle do not keep running into it, suffering bloody injury and pain for us to gloat over, but instead invisibly learn at the youngest age to avoid the barbs by simply staying clear of the fence. Fortunately we still have branding, "a major component of the culture of the West" and of the South too, because in Bolivia we also brand our cattle. Until Netz explained why we do it - to enjoy the pain of "applying the iron until - and well after - the flesh of the animal literally burns", I had always thought that we brand our cattle because they cannot carry notarized title deeds any more than they can read off-limits signs. Incidentally, I have never myself encountered a rancher who expensively indulges in the sadistic pleasure of deeply burning the flesh of his own hoofed capital, opening the way for deadly infection; the branding I know is a quick thrust of the hot iron onto the skin, which is not penetrated at all, and no flesh burns.

We finally learn who is really behind all these perversities, when branding is "usefully compared with the Indian correlate": Euro-American men, of course, as Professor Netz calls us. "Indians marked bison by tail-tying: that is, the tails of killed bison were tied to make a claim to their carcass. Crucially, we see that for the Indians, the bison became property only after its killing." We on the other hand commodify cattle "even while alive". There you have it, and Netz smoothly takes us to the inevitable next step: "Once again a comparison is called for: we are reminded of the practice of branding runaway slaves, as punishment and as a practical measure of making sure that slaves - that particular kind of commodity - would not revert to their natural free state.
In short, in the late 1860s, as Texans finally desisted from the branding of slaves, they applied themselves with ever greater enthusiasm to the branding of cows." Texans? Why introduce Texans all of a sudden, instead of cowboys or cattlemen? It seems that for Professor Netz in the epoch of Bush II, Texans are an even more cruel sub-species of the sadistic race of Euro-American men (and it is men, of course). As for the "enthusiasm", branding too is hard work, and I for one have yet to find the vaqueros who will do it for free, for the pleasure of it. By this point in the text some trivial errors occur, readily explained by a brilliantly distinguished academic career that has understandably precluded much personal experience in handling cattle. Professor Netz writes, for example, that "moving cows over long distances is a fairly simple task. The mounted humans who controlled the herds - frightening them all the way to Chicago . . .". Actually, it is exhausting work to lead cattle over any distance at all without causing drastic weight loss - even for us in Bolivia when we walk our steer to the market, in spite of far more abundant grass and water than Texas or even the upper Midwest ever offered, at the rate of less than nine miles a day to cover a mere 200 kilometres, instead of several times that distance to reach Chicago. Used as we are to seeing our beautiful Nelor cattle grazing contentedly in a slow ambling drift across the pastures, it is distressing to drive them even at the calmest pace for the shortest distances; they are so obviously tense and unhappy, and of course they lose weight with each unwanted step. As for "frightening them all the way to Chicago", that is sheer nonsense: nothing is left of cattle stampeded a few days, let alone all the way to Chicago. Unfortunately, his trivial error makes it impossible for Netz to understand the difference between ranging and ranching that he thinks he is explaining. 
All this and more besides (horses are "surrounded by the tools of violence") occurs in the first part of a book that proceeds to examine at greater length the cruelty of barbed wire against humans. He starts with the battlefield - another realm of experience that Netz cannot stoop to comprehend. He writes that barbed wire outranks the machine gun in stopping power, evidently not knowing that infantry can walk over any amount of barbed wire if it is not over-watched by adequate covering fires, and need not waste time cutting through the wires one by one. Nowadays well-equipped troops have light-alloy runners for this, as other purposes, but in my day, our sergeants trained us to cross rolls of barbed wire by simply stepping over the backs of prone comrades, who were protected well enough from injury from the barbs by the thick wool of their British battle dress - because the flexible rolls gave way of course. Perhaps because the material is rather directly derived from standard sources, no such gross errors emerge in the still larger part of the book devoted to the evils of the barbed wire of the prison camps, and worse, of Boer War British South Africa, Nazi Germany and the Soviet Union (Guantanamo no doubt awaits a second edition). It is reassuring if not exactly startling to read that Professor Netz disapproves of prison camps, concentration camps and extermination camps, that he is not an enthusiast of either the Soviet Union or Nazi Germany, while being properly disapproving of all imperialisms of course. But it does seem unfair to make barbed wire the protagonist of these stories as opposed to the people who employed barbed wire along with even more consequential artefacts such as guns. After all, atrocities as extensive as the Warsaw Ghetto with its walled perimeter had no need of barbed wire, any more than the various grim fortresses and islands in which so many were imprisoned, tortured and killed without being fenced in. There is no need to go on. 
Enough of the text has been quoted to identify the highly successful procedures employed by Reviel Netz, which can easily be imitated - and perhaps should be by as many authors as possible, to finally explode the entire genre. First, take an artefact, anything at all. Avoid the too obviously deplorable machine gun or atom bomb. Take something seemingly innocuous, say shoelaces. Explore the inherent if studiously unacknowledged ulterior purposes of that "grim" artefact within "the structures of power and violence". Shoelaces after all perfectly express the Euro-American urge to bind, control, constrain and yes, painfully constrict. Compare and contrast the easy comfort of the laceless moccasins of the Indian - so often massacred by booted and tightly laced Euro-Americans, as one can usefully recall at this point. Refer to the elegantly pointy and gracefully upturned silk shoes of the Orient, which have no need of laces of course because they so naturally fit the human foot - avoiding any trace of Orientalism, of course. It is all right to write in a manner unfriendly or even openly contemptuous of entire populations as Professor Netz does with his Texans at every turn ("ready to kill. . . they fought for Texan slavery against Mexico"), but only if the opprobrium is always aimed at you-know-who, and never at the pigmented. Clinch the argument by evoking the joys of walking on the beach in bare and uncommodified feet, and finally overcome any possible doubt by reminding the reader of the central role of high-laced boots in sadistic imagery. That finally unmasks shoelaces for what they really are - not primarily a way of keeping shoes from falling off one's feet, but instruments of pain, just like the barbed wire that I have been buying all these years not to keep the cattle in, as I imagined, but to torture it, as Professor Netz points out. 
The rest is easy: the British could hardly have rounded up Boer wives and children without shoelaces to keep their boots on, any more than the very ordinary men in various Nazi uniforms could have done such extraordinary things so industriously, and not even Stalin could have kept the Gulag going with guards in unlaced Indian moccasins, or elegantly pointy, gracefully upturned, oriental shoes.

From checker at panix.com Mon May 30 22:18:09 2005
From: checker at panix.com (Premise Checker)
Date: Mon, 30 May 2005 18:18:09 -0400 (EDT)
Subject: [Paleopsych] New Statesman: John Gray reviews: Peter Watson: Ideas: a history from fire to Freud
Message-ID:

John Gray reviews: Peter Watson: Ideas: a history from fire to Freud
http://www.newstatesman.com/Bookshop/300000098646

Peter Watson: Ideas: a history from fire to Freud
Weidenfeld & Nicolson, 822pp, £30
ISBN 029760726X

The history of ideas has a history of its own, and it is not long. Peter Watson believes the first person to conceive of intellectual history may have been Francis Bacon, which places the birth of the subject in the late 16th century. In Greece and China more than 2,000 years ago, there were sceptics who doubted whether the categories of human thought could correctly represent the world, but the recognition that these categories change significantly over time is distinctly modern. Thanks to thinkers such as Vico and Herder, Hegel and Marx, Nietzsche and Foucault, the notion that ideas have a history is an integral part of the way we think today, and it surfaces incongruously in unlikely places. Thinkers of the right may rant against moral relativism and look back with nostalgia to a time when basic concepts seemed fixed for ever, but these days the right is committed to a militant belief in progress - and so to accepting that seemingly permanent features of the conceptual landscape may turn out to be no more than a phase in history.
Given the importance of the history of ideas to the way we understand ourselves, you might expect it to be a flourishing discipline, but that is far from the case. As Isaiah Berlin used to say, it is an orphan subject. Ever sceptical of abstraction, historians complain that it slips easily into loose generalisation. For philosophers, who tend to assume that questions asked hundreds or even thousands of years ago about knowledge and the good life are essentially the same as the ones we ask today, it is irrelevant. Very few economists know anything much about the history of their discipline, and the same is true of many social scientists. At a time of grinding academic specialisation, intellectual history seems a faintly dilettantish, semi-literary activity, and the incentive structures that surround a university career do not encourage its practice. More fundamentally, the history of ideas is a casualty of the growth of knowledge. Anyone who aspires to study it on anything other than a miniaturist scale needs to know a great deal about a wide range of subjects - in many of which knowledge is increasing almost by the day. In these circumstances, a universal history of ideas seems an impossibly daunting project. Yet in Ideas: a history from fire to Freud, Watson gives us an astonishing overview of human intellectual development which covers everything from the emergence of language to the discovery of the unconscious, including the idea of the factory and the invention of America, the eclipse of the idea of the soul in 19th-century materialism and the continuing elusiveness of the self. In a book of such vast scope, a reader could easily get lost, but the narrative has a powerful momentum. Watson holds to a consistently naturalistic philosophy in which humanity is seen as an animal species developing in the material world. 
For him, human thought develops as much in response to changes in the natural environment - such as shifts in climate and the appearance of new diseases - as from any internal dynamism of its own. This overarching perspective informs and unifies the book, and the result is a masterpiece of historical writing.

Watson's sympathy for naturalism enables him to spot some crucial and neglected turns in the history of thought. Nowadays, naturalistic philosophies are usually connected with those Enlightenment beliefs which hold that humanity progresses through the use of reason. Watson notes, however, that Spinoza, a pivotal thinker who may well have had a greater role in shaping the early Enlightenment than better-known figures such as Thomas Hobbes and René Descartes, took a different view. He never imagined that human life as a whole could be rational, and in a lovely passage quoted by Watson he wrote: "Men are not conditioned to live by reason alone, but by instinct. So they are no more bound to live by the dictates of an enlightened mind than a cat is bound to live by the laws of nature of a lion." In Spinoza's view, the capacity for rational inquiry may be what distinguishes human beings from other animals, but it is not the force that drives their lives - like other animal species, humans are moved by the energy of desire.

This view reappeared in the 20th century in the work of Sigmund Freud, who took the further step of recognising that much of human mental life is unconscious. In conjunction with later work in cognitive science showing that there are many vitally important mental processes to which we can never consciously gain access, Spinoza's naturalism has helped shape a view of human beings that is different from the one we inherit from classical Greek philosophy and from most Enlightenment thinkers.

One of the curiosities of intellectual life is the persistent neglect by philosophers of non-western traditions.
No doubt this is partly ignorance on their part. Beyond a smattering of Plato and Aristotle and a few scraps from the British empiricists, most English-speaking philosophers know practically nothing of their own intellectual traditions, and no one would expect them to have any acquaintance with the larger intellectual inheritance of mankind. A more fundamental reason may be the view of the human subject found in some non-western philosophies. The ideas of personal identity and free will we inherit from Christianity have often been questioned, but they continue to mould the way we think, and any view of human life from which they are altogether absent remains unfamiliar and troubling. Watson is refreshingly free from the cultural parochialism that still disables so much western thought. Ranging freely across time and space, his survey includes some enlightening vignettes of Chinese and Indian thought, and he gives a useful account of Vedic traditions in which human individuality is regarded as an illusion. For those who want something more engaging than the dreary Plato-to-Nato narrative that dominates conventional histories of ideas, this wide range of reference will be invaluable. Inevitably there are gaps in Watson's account. His treatment of Buddhist philosophy is cursory - a surprising omission, given his naturalistic viewpoint. He concludes with some interesting thoughts on the failure of scientific research to find anything resembling the human self, as understood in western traditions. He asks whether the very idea of an "inner self" may not be misconceived, and concludes: "Looking 'in', we have found nothing - nothing stable anyway, nothing enduring, nothing we can all agree upon, nothing conclusive - because there is nothing to find." This conclusion is also mine, but it was anticipated more than 2,000 years ago in the Buddhist doctrine of anatman, or no-soul. 
The thoroughgoing rejection of any idea of the soul was one of the ideas through which Buddhism distinguished itself from orthodox Vedic traditions, which also viewed personal identity as an illusion but affirmed an impersonal world soul: an idea that Buddhists have always rejected. For them, human beings are like other natural processes, in that they are devoid of substance and have no inherent identity.

The view of the human subject suggested by recent scientific research seems less strange when one notes how closely it resembles this ancient Buddhist view. Modern science seems to be replicating an account of the insubstantiality of the person that has been central to other intellectual traditions for millennia. It is an interesting comment on prevailing ideas of intellectual progress that one should be able to find such remarkable affinities between some of humanity's oldest and newest ideas.

John Gray's most recent book is Heresies: against progress and other illusions (Granta)

From checker at panix.com Mon May 30 22:18:14 2005
From: checker at panix.com (Premise Checker)
Date: Mon, 30 May 2005 18:18:14 -0400 (EDT)
Subject: [Paleopsych] LAT: Pinpointing civilization's shortcomings
Message-ID:

Pinpointing civilization's shortcomings
http://www.calendarlive.com/books/cl-et-book27may27,0,381714,print.story?coll=cl-books-util

BOOK REVIEW

Our Culture, What's Left of It: The Mandarins and the Masses
Theodore Dalrymple
Ivan R. Dee: 344 pp., $27.50

By Anthony Day, Special to The Times
May 27, 2005

In his new book of essays on political, social and artistic aspects of modern life, Theodore Dalrymple falls upon his victims with singular ferocity, raging against modern British life in general and Labour Prime Minister Tony Blair in particular, as well as multiculturalism, the welfare state, modern art (Joan Miro and contemporary British art especially), D.H. Lawrence and Virginia Woolf. What? Mrs. Woolf a menace to civilization?
Yes indeed, Dalrymple contends in "Our Culture, What's Left of It." Were Woolf around today, he writes, "she would at least have had the satisfaction of observing that her cast of mind -- shallow, dishonest, resentful, envious, snobbish, self-absorbed, trivial, philistine, and ultimately brutal -- had triumphed among the elites of the Western world."

Was the Bloomsbury circle really so powerful? Dalrymple, a physician who works in British prisons and inner-city hospitals, believes it was, and that the "unacknowledged legislators of the world" are novelists, playwrights, film directors, journalists, artists and even pop singers under the influence of "outdated or defunct ideas of economists and social philosophers." Woolf's primal error, Dalrymple argues, was to confuse the domineering English male to whom she objected in the early 1930s with Adolf Hitler, who was even then threatening English liberty.

In one essay he improbably juxtaposes writer Ivan Turgenev and political philosopher Karl Marx. Both, he notes, were born in 1818 and died in 1883; both attended Berlin University at overlapping times and were affected by German philosopher Georg Wilhelm Friedrich Hegel and his dialectic method of reasoning (thesis, antithesis, synthesis); both were in Brussels at the outbreak of the 1848 revolution, and both were spied upon by the secret police. Each had an illegitimate child, and each lived and died in exile.

The point of his comparison? The aloof and difficult Marx claimed to know the common people, but it was the gentle Turgenev who really did, as his sympathetic writings about Russian serfs and other members of the lower classes prove. This is not a new idea about Marx, but Dalrymple writes as if it were and hammers it relentlessly, slashing away at Marx as he does at Woolf. And poor Turgenev does not need Marx as a foil to be celebrated for his compassion and wisdom.
By this point in his essays, Dalrymple has revealed that his mother fled to England to escape the Nazis, a fact that may help to explain his contempt for "elitists" like Woolf, who were safe in their languid upper-class self-absorption. He also discloses that his father was "a Communist by conviction." That may account for the annihilating zeal Dalrymple directs at his intellectual opponents. It resembles nothing so much as one communist intellectual's accusing another of misunderstanding the true Marxian faith.

And perhaps that is behind Dalrymple's posture as implacable reactionary. He plays the part well. He believes that "an oppositional attitude toward traditional social rules is what wins the modern intellectual his spurs, in the eyes of other intellectuals.... What is good for the bohemian sooner or later becomes good for the unskilled worker, the unemployed, the welfare recipient -- the very people most in need of boundaries to make their lives tolerable or allow them hope of improvement." And the result, he argues, is "moral, spiritual and emotional squalor, engendering fleeting pleasures and prolonged suffering."

Dalrymple protests that not all criticism of social conventions is wrong, but that critics, "including writers of imaginative literature, should always be aware that civilization needs conservation at least as much as it needs change ... and that immoderate criticism ... is capable of doing much, indeed devastating, harm."

Dalrymple's conservative credentials are impeccable. Most of the essays in this collection are drawn from City Journal, the publication of the Manhattan Institute in New York, for which he is a contributing editor. His previous collection of essays, "Life at the Bottom," was widely praised. And he has written for the London Spectator for more than a dozen years. Readers who enjoy those sorts of publications will be enthralled with these essays.
But just as Marx should not be rejected altogether, neither should non-conservatives reject everything Dalrymple has to say. Having worked in the Muslim world and among Muslim immigrants in Britain, he has some provocative ideas about the conflict between Islam and the West. He passionately denounces the frequent cruel treatment of Muslim women by Muslim men and argues that modern Muslims must "either abandon their cherished religion or ... remain forever in the rear of human technical advance."

And surely one need not be a right-wing reactionary to find objectionable the appearance of the punk rocker, in all his skin-pierced glory, who is pictured on the dust jacket of "Our Culture, What's Left of It."

______________

Anthony Day, former editorial page editor of The Times, is a regular contributor to Book Review.

From ljohnson at solution-consulting.com Tue May 31 04:01:29 2005
From: ljohnson at solution-consulting.com (Lynn D. Johnson, Ph.D.)
Date: Mon, 30 May 2005 22:01:29 -0600
Subject: [Paleopsych] NYT: British Medical Experts Campaign for Long, Pointy Knife Control
In-Reply-To:
References:
Message-ID: <429BE199.3090501@solution-consulting.com>

Next on the agenda: large, dangerous rocks must be outlawed. But when rocks are outlawed, only outlaws will have rocks.

Lynn

Premise Checker wrote:

> British Medical Experts Campaign for Long, Pointy Knife Control
>
> [Switchblades have long been illegal in New York State, so precedents for knife control in this country are already in place.]
>
> http://www.nytimes.com/2005/05/27/science/27knife.html
>
> By JOHN SCHWARTZ
>
> Warning: Long, pointy knives may be hazardous to your health.
>
> The authors of an editorial in the latest issue of the British Medical Journal have called for knife reform. The editorial, "Reducing knife crime: We need to ban the sale of long, pointed kitchen knives," notes that the knives are being used to stab people as well as roasts and the odd tin of Spam.
> The authors of the essay - Drs. Emma Hern, Will Glazebrook and Mike Beckett of the West Middlesex University Hospital in London - called for laws requiring knife manufacturers to redesign their wares with rounded, blunt tips.
>
> The researchers noted that the rate of violent crime in Britain rose nearly 18 percent from 2003 to 2004, and that in the first two weeks of 2005, 15 killings and 16 nonfatal attacks involved stabbings. In an unusual move for a scholarly work, the researchers cited a January headline from The Daily Express, a London tabloid: "Britain is in the grip of knives terror - third of murder victims are now stabbed to death." Dr. Hern said that "we came up with the idea and tossed it into the pot" to get people talking about crime reduction. "Whether it's a sensible solution to this problem or not, I'm not sure."
>
> In the United States, where people are more likely to debate gun control than knife control, partisans on both sides sounded amused. Wayne LaPierre, executive vice president of the National Rifle Association, asked, "Are they going to have everybody using plastic knives and forks and spoons in their own homes, like they do in airlines?"
>
> Peter Hamm, a spokesman for the Brady Campaign to Prevent Gun Violence, which supports gun control, joked, "Can sharp stick control be far behind?" He said people in his movement were "envious" of England for having such problems. "In America, we can't even come to an agreement that guns are dangerous and we should make them safer," he said.
>
> The authors of the editorial argued that the pointed tip is a vestigial feature from less mannered ages, when people used it to spear meat. They said that they interviewed 10 chefs in England, and that "none gave a reason why the long, pointed knife was essential," though short, pointed knives were useful.
>
> An American chef, however, disagreed with the proposal.
> "This is yet another sign of the coming apocalypse," said Anthony Bourdain, the executive chef at Les Halles and the author of "Kitchen Confidential."
>
> A knife, he said, is a beloved tool of the trade, and not a thing to be shaped by bureaucrats. A chef's relationship with his knives develops over decades of training and work, he said, adding, "Its weight, its shape - these are all extensions of our arms, and in many ways, our personalities."
>
> He compared the editorial to efforts to ban unpasteurized cheese. "Where there is no risk," he said, "there is no pleasure."
>
> _______________________________________________
> paleopsych mailing list
> paleopsych at paleopsych.org
> http://lists.paleopsych.org/mailman/listinfo/paleopsych

From shovland at mindspring.com Tue May 31 13:41:32 2005
From: shovland at mindspring.com (Steve Hovland)
Date: Tue, 31 May 2005 06:41:32 -0700
Subject: [Paleopsych] Revolutionary nanotechnology illuminates brain cells at work
Message-ID: <01C565AB.C8FC7B50.shovland@mindspring.com>

http://www.physorg.com/news4321.html

May 30, 2005

Until now it has been impossible to accurately measure the levels of important chemicals in living brain cells in real time and at the level of a single cell. Scientists at the Carnegie Institution's Department of Plant Biology and Stanford University are the first to overcome this obstacle by successfully applying genetic nanotechnology, using molecular sensors to view changes in brain chemical levels. The sensors alter their three-dimensional form upon binding the target chemical, a change that is made visible by a process known as fluorescence resonance energy transfer, or FRET. In a new study, the nanosensors were introduced into nerve cells to measure the release of the neurotransmitter glutamate -- the major brain chemical that increases nerve-cell activity in mammalian brains.
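For readers unfamiliar with how such FRET sensors are read out: the conformational change on binding shifts energy transfer between two fluorophores, and what is typically quantified is the ratio of acceptor to donor emission, reported as a fractional change over baseline. Below is a minimal sketch of that arithmetic only; the function names and all intensity values are made up for illustration and are not taken from the study described above.

```python
# Minimal sketch of a ratiometric FRET readout (illustrative values only).

def fret_ratio(donor_intensity, acceptor_intensity):
    """Acceptor/donor emission ratio; it shifts as the sensor changes
    conformation on ligand binding."""
    if donor_intensity <= 0:
        raise ValueError("donor intensity must be positive")
    return acceptor_intensity / donor_intensity

def relative_change(baseline_ratio, sample_ratio):
    """Fractional change in ratio over baseline (the usual reported signal)."""
    return (sample_ratio - baseline_ratio) / baseline_ratio

# Hypothetical intensities before and after the sensor binds its ligand:
r0 = fret_ratio(donor_intensity=1000.0, acceptor_intensity=500.0)  # 0.5
r1 = fret_ratio(donor_intensity=900.0, acceptor_intensity=540.0)   # 0.6
print(round(relative_change(r0, r1), 2))  # 0.2, i.e. a 20% ratio change
```

Working with the ratio rather than a single channel is what makes such measurements usable in living cells, since both channels dim together under focus drift or photobleaching while their ratio is preserved.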